Hangbo Bao

A Unified View of Masked Image Modeling
Oct 19, 2022
Zhiliang Peng, Li Dong, Hangbo Bao, Qixiang Ye, Furu Wei

Image as a Foreign Language: BEiT Pretraining for All Vision and Vision-Language Tasks
Aug 31, 2022
Wenhui Wang, Hangbo Bao, Li Dong, Johan Bjorck, Zhiliang Peng, Qiang Liu, Kriti Aggarwal, Owais Khan Mohammed, Saksham Singhal, Subhojit Som, Furu Wei

BEiT v2: Masked Image Modeling with Vector-Quantized Visual Tokenizers
Aug 12, 2022
Zhiliang Peng, Li Dong, Hangbo Bao, Qixiang Ye, Furu Wei

VL-BEiT: Generative Vision-Language Pretraining
Jun 02, 2022
Hangbo Bao, Wenhui Wang, Li Dong, Furu Wei

THE-X: Privacy-Preserving Transformer Inference with Homomorphic Encryption
Jun 02, 2022
Tianyu Chen, Hangbo Bao, Shaohan Huang, Li Dong, Binxing Jiao, Daxin Jiang, Haoyi Zhou, Jianxin Li, Furu Wei

Corrupted Image Modeling for Self-Supervised Visual Pre-Training
Feb 07, 2022
Yuxin Fang, Li Dong, Hangbo Bao, Xinggang Wang, Furu Wei

VLMo: Unified Vision-Language Pre-Training with Mixture-of-Modality-Experts
Nov 03, 2021
Wenhui Wang, Hangbo Bao, Li Dong, Furu Wei

s2s-ft: Fine-Tuning Pretrained Transformer Encoders for Sequence-to-Sequence Learning
Oct 26, 2021
Hangbo Bao, Li Dong, Wenhui Wang, Nan Yang, Furu Wei
