Samuel R. Bowman

What Will it Take to Fix Benchmarking in Natural Language Understanding?

Apr 10, 2021

When Do You Need Billions of Words of Pretraining Data?

Nov 10, 2020

Asking Crowdworkers to Write Entailment Examples: The Best of Bad Options

Oct 13, 2020

Learning Which Features Matter: RoBERTa Acquires a Preference for Linguistic Generalizations (Eventually)

Oct 11, 2020

Counterfactually-Augmented SNLI Training Data Does Not Yield Better Generalization Than Unaugmented Data

Oct 09, 2020

Precise Task Formalization Matters in Winograd Schema Evaluations

Oct 08, 2020

CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models

Sep 30, 2020

Can neural networks acquire a structural bias from raw linguistic data?

Jul 14, 2020

Self-Training for Unsupervised Parsing with PRPN

May 27, 2020

English Intermediate-Task Training Improves Zero-Shot Cross-Lingual Transfer Too

May 26, 2020