Saksham Singhal

Language Is Not All You Need: Aligning Perception with Language Models

Mar 01, 2023
Shaohan Huang, Li Dong, Wenhui Wang, Yaru Hao, Saksham Singhal, Shuming Ma, Tengchao Lv, Lei Cui, Owais Khan Mohammed, Barun Patra, Qiang Liu, Kriti Aggarwal, Zewen Chi, Johan Bjorck, Vishrav Chaudhary, Subhojit Som, Xia Song, Furu Wei

Beyond English-Centric Bitexts for Better Multilingual Language Representation Learning

Oct 26, 2022
Barun Patra, Saksham Singhal, Shaohan Huang, Zewen Chi, Li Dong, Furu Wei, Vishrav Chaudhary, Xia Song

Foundation Transformers

Oct 19, 2022
Hongyu Wang, Shuming Ma, Shaohan Huang, Li Dong, Wenhui Wang, Zhiliang Peng, Yu Wu, Payal Bajaj, Saksham Singhal, Alon Benhaim, Barun Patra, Zhun Liu, Vishrav Chaudhary, Xia Song, Furu Wei

Image as a Foreign Language: BEiT Pretraining for All Vision and Vision-Language Tasks

Aug 31, 2022
Wenhui Wang, Hangbo Bao, Li Dong, Johan Bjorck, Zhiliang Peng, Qiang Liu, Kriti Aggarwal, Owais Khan Mohammed, Saksham Singhal, Subhojit Som, Furu Wei

On the Representation Collapse of Sparse Mixture of Experts

Apr 20, 2022
Zewen Chi, Li Dong, Shaohan Huang, Damai Dai, Shuming Ma, Barun Patra, Saksham Singhal, Payal Bajaj, Xia Song, Furu Wei

Multilingual Machine Translation Systems from Microsoft for WMT21 Shared Task

Nov 03, 2021
Jian Yang, Shuming Ma, Haoyang Huang, Dongdong Zhang, Li Dong, Shaohan Huang, Alexandre Muzio, Saksham Singhal, Hany Hassan Awadalla, Xia Song, Furu Wei

Allocating Large Vocabulary Capacity for Cross-lingual Language Model Pre-training

Sep 15, 2021
Bo Zheng, Li Dong, Shaohan Huang, Saksham Singhal, Wanxiang Che, Ting Liu, Xia Song, Furu Wei

XLM-E: Cross-lingual Language Model Pre-training via ELECTRA

Jun 30, 2021
Zewen Chi, Shaohan Huang, Li Dong, Shuming Ma, Saksham Singhal, Payal Bajaj, Xia Song, Furu Wei
