
Language Models are General-Purpose Interfaces


Jun 13, 2022
Yaru Hao, Haoyu Song, Li Dong, Shaohan Huang, Zewen Chi, Wenhui Wang, Shuming Ma, Furu Wei

* 32 pages. The first three authors contributed equally 

VL-BEiT: Generative Vision-Language Pretraining


Jun 02, 2022
Hangbo Bao, Wenhui Wang, Li Dong, Furu Wei


TimeReplayer: Unlocking the Potential of Event Cameras for Video Interpolation


Mar 25, 2022
Weihua He, Kaichao You, Zhendong Qiao, Xu Jia, Ziyang Zhang, Wenhui Wang, Huchuan Lu, Yaoyuan Wang, Jianxing Liao

* Accepted to CVPR 2022, project page https://sites.google.com/view/timereplayer/ 

AutoDistil: Few-shot Task-agnostic Neural Architecture Search for Distilling Large Language Models


Jan 29, 2022
Dongkuan Xu, Subhabrata Mukherjee, Xiaodong Liu, Debadeepta Dey, Wenhui Wang, Xiang Zhang, Ahmed Hassan Awadallah, Jianfeng Gao

* 13 pages, 4 figures, 10 tables 

Distilled Dual-Encoder Model for Vision-Language Understanding


Dec 16, 2021
Zekun Wang, Wenhui Wang, Haichao Zhu, Ming Liu, Bing Qin, Furu Wei

* Work in progress 

VLMo: Unified Vision-Language Pre-Training with Mixture-of-Modality-Experts


Nov 03, 2021
Wenhui Wang, Hangbo Bao, Li Dong, Furu Wei

* Work in progress 

s2s-ft: Fine-Tuning Pretrained Transformer Encoders for Sequence-to-Sequence Learning


Oct 26, 2021
Hangbo Bao, Li Dong, Wenhui Wang, Nan Yang, Furu Wei

* Demo paper for the s2s-ft toolkit: https://github.com/microsoft/unilm/tree/master/s2s-ft 

Adapt-and-Distill: Developing Small, Fast and Effective Pretrained Language Models for Domains


Jun 29, 2021
Yunzhi Yao, Shaohan Huang, Wenhui Wang, Li Dong, Furu Wei

* Accepted to ACL 2021 Findings 

Consistency Regularization for Cross-Lingual Fine-Tuning


Jun 15, 2021
Bo Zheng, Li Dong, Shaohan Huang, Wenhui Wang, Zewen Chi, Saksham Singhal, Wanxiang Che, Ting Liu, Xia Song, Furu Wei

* Accepted to ACL 2021 
