Li Dong

A Survey of Knowledge-Intensive NLP with Pre-Trained Language Models

Feb 17, 2022
Da Yin, Li Dong, Hao Cheng, Xiaodong Liu, Kai-Wei Chang, Furu Wei, Jianfeng Gao

AdaPrompt: Adaptive Model Training for Prompt-based NLP

Feb 10, 2022
Yulong Chen, Yang Liu, Li Dong, Shuohang Wang, Chenguang Zhu, Michael Zeng, Yue Zhang

Corrupted Image Modeling for Self-Supervised Visual Pre-Training

Feb 07, 2022
Yuxin Fang, Li Dong, Hangbo Bao, Xinggang Wang, Furu Wei

Kformer: Knowledge Injection in Transformer Feed-Forward Layers

Jan 15, 2022
Yunzhi Yao, Shaohan Huang, Ningyu Zhang, Li Dong, Furu Wei, Huajun Chen

Swin Transformer V2: Scaling Up Capacity and Resolution

Nov 18, 2021
Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, Furu Wei, Baining Guo

VLMo: Unified Vision-Language Pre-Training with Mixture-of-Modality-Experts

Nov 03, 2021
Wenhui Wang, Hangbo Bao, Li Dong, Furu Wei

Multilingual Machine Translation Systems from Microsoft for WMT21 Shared Task

Nov 03, 2021
Jian Yang, Shuming Ma, Haoyang Huang, Dongdong Zhang, Li Dong, Shaohan Huang, Alexandre Muzio, Saksham Singhal, Hany Hassan Awadalla, Xia Song, Furu Wei

s2s-ft: Fine-Tuning Pretrained Transformer Encoders for Sequence-to-Sequence Learning

Oct 26, 2021
Hangbo Bao, Li Dong, Wenhui Wang, Nan Yang, Furu Wei

Allocating Large Vocabulary Capacity for Cross-lingual Language Model Pre-training

Sep 15, 2021
Bo Zheng, Li Dong, Shaohan Huang, Saksham Singhal, Wanxiang Che, Ting Liu, Xia Song, Furu Wei

XLM-E: Cross-lingual Language Model Pre-training via ELECTRA

Jun 30, 2021
Zewen Chi, Shaohan Huang, Li Dong, Shuming Ma, Saksham Singhal, Payal Bajaj, Xia Song, Furu Wei
