Samuel R. Bowman

Learning Which Features Matter: RoBERTa Acquires a Preference for Linguistic Generalizations (Eventually)

Oct 11, 2020
Alex Warstadt, Yian Zhang, Haau-Sing Li, Haokun Liu, Samuel R. Bowman

Counterfactually-Augmented SNLI Training Data Does Not Yield Better Generalization Than Unaugmented Data

Oct 09, 2020
William Huang, Haokun Liu, Samuel R. Bowman

Precise Task Formalization Matters in Winograd Schema Evaluations

Oct 08, 2020
Haokun Liu, William Huang, Dhara A. Mungra, Samuel R. Bowman

CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models

Sep 30, 2020
Nikita Nangia, Clara Vania, Rasika Bhalerao, Samuel R. Bowman

Can neural networks acquire a structural bias from raw linguistic data?

Jul 14, 2020
Alex Warstadt, Samuel R. Bowman

Self-Training for Unsupervised Parsing with PRPN

May 27, 2020
Anhad Mohananey, Katharina Kann, Samuel R. Bowman

English Intermediate-Task Training Improves Zero-Shot Cross-Lingual Transfer Too

May 26, 2020
Jason Phang, Phu Mon Htut, Yada Pruksachatkun, Haokun Liu, Clara Vania, Katharina Kann, Iacer Calixto, Samuel R. Bowman

Intermediate-Task Transfer Learning with Pretrained Models for Natural Language Understanding: When and Why Does It Work?

May 09, 2020
Yada Pruksachatkun, Jason Phang, Haokun Liu, Phu Mon Htut, Xiaoyi Zhang, Richard Yuanzhe Pang, Clara Vania, Katharina Kann, Samuel R. Bowman

Learning to Learn Morphological Inflection for Resource-Poor Languages

Apr 28, 2020
Katharina Kann, Samuel R. Bowman, Kyunghyun Cho
