
Shuohang Wang

CLIP-Event: Connecting Text and Images with Event Structures

Jan 13, 2022
Manling Li, Ruochen Xu, Shuohang Wang, Luowei Zhou, Xudong Lin, Chenguang Zhu, Michael Zeng, Heng Ji, Shih-Fu Chang


Human Parity on CommonsenseQA: Augmenting Self-Attention with External Attention

Dec 14, 2021
Yichong Xu, Chenguang Zhu, Shuohang Wang, Siqi Sun, Hao Cheng, Xiaodong Liu, Jianfeng Gao, Pengcheng He, Michael Zeng, Xuedong Huang


MLP Architectures for Vision-and-Language Modeling: An Empirical Study

Dec 08, 2021
Yixin Nie, Linjie Li, Zhe Gan, Shuohang Wang, Chenguang Zhu, Michael Zeng, Zicheng Liu, Mohit Bansal, Lijuan Wang


An Empirical Study of Training End-to-End Vision-and-Language Transformers

Nov 25, 2021
Zi-Yi Dou, Yichong Xu, Zhe Gan, Jianfeng Wang, Shuohang Wang, Lijuan Wang, Chenguang Zhu, Pengchuan Zhang, Lu Yuan, Nanyun Peng, Zicheng Liu, Michael Zeng


Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models

Nov 04, 2021
Boxin Wang, Chejian Xu, Shuohang Wang, Zhe Gan, Yu Cheng, Jianfeng Gao, Ahmed Hassan Awadallah, Bo Li


Leveraging Knowledge in Multilingual Commonsense Reasoning

Oct 16, 2021
Yuwei Fang, Shuohang Wang, Yichong Xu, Ruochen Xu, Siqi Sun, Chenguang Zhu, Michael Zeng


NOAHQA: Numerical Reasoning with Interpretable Graph Question Answering Dataset

Oct 14, 2021
Qiyuan Zhang, Lei Wang, Sicheng Yu, Shuohang Wang, Yang Wang, Jing Jiang, Ee-Peng Lim


Dict-BERT: Enhancing Language Model Pre-training with Dictionary

Oct 13, 2021
Wenhao Yu, Chenguang Zhu, Yuwei Fang, Donghan Yu, Shuohang Wang, Yichong Xu, Michael Zeng, Meng Jiang


KG-FiD: Infusing Knowledge Graph in Fusion-in-Decoder for Open-Domain Question Answering

Oct 08, 2021
Donghan Yu, Chenguang Zhu, Yuwei Fang, Wenhao Yu, Shuohang Wang, Yichong Xu, Xiang Ren, Yiming Yang, Michael Zeng


Want To Reduce Labeling Cost? GPT-3 Can Help

Aug 30, 2021
Shuohang Wang, Yang Liu, Yichong Xu, Chenguang Zhu, Michael Zeng
