Shuohang Wang

An Empirical Study of Training End-to-End Vision-and-Language Transformers

Nov 03, 2021
Zi-Yi Dou, Yichong Xu, Zhe Gan, Jianfeng Wang, Shuohang Wang, Lijuan Wang, Chenguang Zhu, Nanyun Peng, Zicheng Liu, Michael Zeng

Leveraging Knowledge in Multilingual Commonsense Reasoning

Oct 16, 2021
Yuwei Fang, Shuohang Wang, Yichong Xu, Ruochen Xu, Siqi Sun, Chenguang Zhu, Michael Zeng

NOAHQA: Numerical Reasoning with Interpretable Graph Question Answering Dataset

Oct 14, 2021
Qiyuan Zhang, Lei Wang, Sicheng Yu, Shuohang Wang, Yang Wang, Jing Jiang, Ee-Peng Lim

Dict-BERT: Enhancing Language Model Pre-training with Dictionary

Oct 13, 2021
Wenhao Yu, Chenguang Zhu, Yuwei Fang, Donghan Yu, Shuohang Wang, Yichong Xu, Michael Zeng, Meng Jiang

KG-FiD: Infusing Knowledge Graph in Fusion-in-Decoder for Open-Domain Question Answering

Oct 08, 2021
Donghan Yu, Chenguang Zhu, Yuwei Fang, Wenhao Yu, Shuohang Wang, Yichong Xu, Xiang Ren, Yiming Yang, Michael Zeng

Want To Reduce Labeling Cost? GPT-3 Can Help

Aug 30, 2021
Shuohang Wang, Yang Liu, Yichong Xu, Chenguang Zhu, Michael Zeng

Playing Lottery Tickets with Vision and Language

Apr 23, 2021
Zhe Gan, Yen-Chun Chen, Linjie Li, Tianlong Chen, Yu Cheng, Shuohang Wang, Jingjing Liu

LightningDOT: Pre-training Visual-Semantic Embeddings for Real-Time Image-Text Retrieval

Apr 11, 2021
Siqi Sun, Yen-Chun Chen, Linjie Li, Shuohang Wang, Yuwei Fang, Jingjing Liu

UC2: Universal Cross-lingual Cross-modal Vision-and-Language Pre-training

Apr 01, 2021
Mingyang Zhou, Luowei Zhou, Shuohang Wang, Yu Cheng, Linjie Li, Zhou Yu, Jingjing Liu
