Furu Wei

Document AI: Benchmarks, Models and Applications

Nov 16, 2021
Lei Cui, Yiheng Xu, Tengchao Lv, Furu Wei


VLMo: Unified Vision-Language Pre-Training with Mixture-of-Modality-Experts

Nov 03, 2021
Wenhui Wang, Hangbo Bao, Li Dong, Furu Wei


Multilingual Machine Translation Systems from Microsoft for WMT21 Shared Task

Nov 03, 2021
Jian Yang, Shuming Ma, Haoyang Huang, Dongdong Zhang, Li Dong, Shaohan Huang, Alexandre Muzio, Saksham Singhal, Hany Hassan Awadalla, Xia Song, Furu Wei


WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing

Oct 29, 2021
Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei


Separating Long-Form Speech with Group-Wise Permutation Invariant Training

Oct 27, 2021
Wangyou Zhang, Zhuo Chen, Naoyuki Kanda, Shujie Liu, Jinyu Li, Sefik Emre Eskimez, Takuya Yoshioka, Xiong Xiao, Zhong Meng, Yanmin Qian, Furu Wei


s2s-ft: Fine-Tuning Pretrained Transformer Encoders for Sequence-to-Sequence Learning

Oct 26, 2021
Hangbo Bao, Li Dong, Wenhui Wang, Nan Yang, Furu Wei


Improving Non-autoregressive Generation with Mixup Training

Oct 21, 2021
Ting Jiang, Shaohan Huang, Zihan Zhang, Deqing Wang, Fuzhen Zhuang, Furu Wei, Haizhen Huang, Liangjie Zhang, Qi Zhang


Towards Making the Most of Multilingual Pretraining for Zero-Shot Neural Machine Translation

Oct 16, 2021
Guanhua Chen, Shuming Ma, Yun Chen, Dongdong Zhang, Jia Pan, Wenping Wang, Furu Wei


MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding

Oct 16, 2021
Junlong Li, Yiheng Xu, Lei Cui, Furu Wei


SpeechT5: Unified-Modal Encoder-Decoder Pre-training for Spoken Language Processing

Oct 14, 2021
Junyi Ao, Rui Wang, Long Zhou, Shujie Liu, Shuo Ren, Yu Wu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei
