Davis Liang

RoAST: Robustifying Language Models via Adversarial Perturbation with Selective Training

Dec 07, 2023
Jaehyung Kim, Yuning Mao, Rui Hou, Hanchao Yu, Davis Liang, Pascale Fung, Qifan Wang, Fuli Feng, Lifu Huang, Madian Khabsa

Co-training and Co-distillation for Quality Improvement and Compression of Language Models

Nov 07, 2023
Hayeon Lee, Rui Hou, Jongpil Kim, Davis Liang, Hongbo Zhang, Sung Ju Hwang, Alexander Min

The Belebele Benchmark: a Parallel Reading Comprehension Dataset in 122 Language Variants

Aug 31, 2023
Lucas Bandarkar, Davis Liang, Benjamin Muller, Mikel Artetxe, Satya Narayan Shukla, Donald Husa, Naman Goyal, Abhinandan Krishnan, Luke Zettlemoyer, Madian Khabsa

A Study on Knowledge Distillation from Weak Teacher for Scaling Up Pre-trained Language Models

May 26, 2023
Hayeon Lee, Rui Hou, Jongpil Kim, Davis Liang, Sung Ju Hwang, Alexander Min

XLM-V: Overcoming the Vocabulary Bottleneck in Multilingual Masked Language Models

Jan 25, 2023
Davis Liang, Hila Gonen, Yuning Mao, Rui Hou, Naman Goyal, Marjan Ghazvininejad, Luke Zettlemoyer, Madian Khabsa

Adaptable Claim Rewriting with Offline Reinforcement Learning for Effective Misinformation Discovery

Oct 14, 2022
Ashkan Kazemi, Artem Abzaliev, Naihao Deng, Rui Hou, Davis Liang, Scott A. Hale, Verónica Pérez-Rosas, Rada Mihalcea

Attention-guided Generative Models for Extractive Question Answering

Oct 12, 2021
Peng Xu, Davis Liang, Zhiheng Huang, Bing Xiang

Multiplicative Position-aware Transformer Models for Language Understanding

Sep 27, 2021
Zhiheng Huang, Davis Liang, Peng Xu, Bing Xiang

Decoding and Diversity in Machine Translation

Nov 26, 2020
Nicholas Roberts, Davis Liang, Graham Neubig, Zachary C. Lipton

Improve Transformer Models with Better Relative Position Embeddings

Sep 28, 2020
Zhiheng Huang, Davis Liang, Peng Xu, Bing Xiang
