
Shuohang Wang

Training Data is More Valuable than You Think: A Simple and Effective Method by Retrieving from Training Data

Mar 16, 2022

AdaPrompt: Adaptive Model Training for Prompt-based NLP

Feb 10, 2022

CLIP-Event: Connecting Text and Images with Event Structures

Jan 13, 2022

Human Parity on CommonsenseQA: Augmenting Self-Attention with External Attention

Dec 14, 2021

MLP Architectures for Vision-and-Language Modeling: An Empirical Study

Dec 08, 2021

An Empirical Study of Training End-to-End Vision-and-Language Transformers

Nov 25, 2021

Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models

Nov 04, 2021

Leveraging Knowledge in Multilingual Commonsense Reasoning

Oct 16, 2021

NOAHQA: Numerical Reasoning with Interpretable Graph Question Answering Dataset

Oct 14, 2021

Dict-BERT: Enhancing Language Model Pre-training with Dictionary

Oct 13, 2021