Jason Phang

BBQ: A Hand-Built Bias Benchmark for Question Answering

Oct 15, 2021
Alicia Parrish, Angelica Chen, Nikita Nangia, Vishakh Padmakumar, Jason Phang, Jana Thompson, Phu Mon Htut, Samuel R. Bowman

Fine-Tuned Transformers Show Clusters of Similar Representations Across Layers

Sep 20, 2021
Jason Phang, Haokun Liu, Samuel R. Bowman

Comparing Test Sets with Item Response Theory

Jun 01, 2021
Clara Vania, Phu Mon Htut, William Huang, Dhara Mungra, Richard Yuanzhe Pang, Jason Phang, Haokun Liu, Kyunghyun Cho, Samuel R. Bowman

The Pile: An 800GB Dataset of Diverse Text for Language Modeling

Dec 31, 2020
Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, Connor Leahy

Investigating and Simplifying Masking-based Saliency Methods for Model Interpretability

Oct 19, 2020
Jason Phang, Jungkyu Park, Krzysztof J. Geras

Reducing false-positive biopsies with deep neural networks that utilize local and global information in screening mammograms

Sep 19, 2020
Nan Wu, Zhe Huang, Yiqiu Shen, Jungkyu Park, Jason Phang, Taro Makino, S. Gene Kim, Kyunghyun Cho, Laura Heacock, Linda Moy, Krzysztof J. Geras

English Intermediate-Task Training Improves Zero-Shot Cross-Lingual Transfer Too

May 26, 2020
Jason Phang, Phu Mon Htut, Yada Pruksachatkun, Haokun Liu, Clara Vania, Katharina Kann, Iacer Calixto, Samuel R. Bowman

Intermediate-Task Transfer Learning with Pretrained Models for Natural Language Understanding: When and Why Does It Work?

May 09, 2020
Yada Pruksachatkun, Jason Phang, Haokun Liu, Phu Mon Htut, Xiaoyi Zhang, Richard Yuanzhe Pang, Clara Vania, Katharina Kann, Samuel R. Bowman

jiant: A Software Toolkit for Research on General-Purpose Text Understanding Models

Mar 04, 2020
Yada Pruksachatkun, Phil Yeres, Haokun Liu, Jason Phang, Phu Mon Htut, Alex Wang, Ian Tenney, Samuel R. Bowman
