Furu Wei

DiT: Self-supervised Pre-training for Document Image Transformer

Apr 12, 2022
Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei

Speech Pre-training with Acoustic Piece

Apr 07, 2022
Shuo Ren, Shujie Liu, Yu Wu, Long Zhou, Furu Wei

Lossless Speedup of Autoregressive Translation with Generalized Aggressive Decoding

Apr 02, 2022
Heming Xia, Tao Ge, Furu Wei, Zhifang Sui

Pre-Training Transformer Decoder for End-to-End ASR Model with Unpaired Speech Data

Mar 31, 2022
Junyi Ao, Ziqiang Zhang, Long Zhou, Shujie Liu, Haizhou Li, Tom Ko, Lirong Dai, Jinyu Li, Yao Qian, Furu Wei

CLIP Models are Few-shot Learners: Empirical Studies on VQA and Visual Entailment

Mar 14, 2022
Haoyu Song, Li Dong, Wei-Nan Zhang, Ting Liu, Furu Wei

DeepNet: Scaling Transformers to 1,000 Layers

Mar 01, 2022
Hongyu Wang, Shuming Ma, Li Dong, Shaohan Huang, Dongdong Zhang, Furu Wei

Controllable Natural Language Generation with Contrastive Prefixes

Feb 27, 2022
Jing Qian, Li Dong, Yelong Shen, Furu Wei, Weizhu Chen

Zero-shot Cross-lingual Transfer of Prompt-based Tuning with a Unified Multilingual Prompt

Feb 23, 2022
Lianzhe Huang, Shuming Ma, Dongdong Zhang, Furu Wei, Houfeng Wang

A Survey of Knowledge-Intensive NLP with Pre-Trained Language Models

Feb 17, 2022
Da Yin, Li Dong, Hao Cheng, Xiaodong Liu, Kai-Wei Chang, Furu Wei, Jianfeng Gao
