William Huang

WheelPose: Data Synthesis Techniques to Improve Pose Estimation Performance on Wheelchair Users

Apr 25, 2024

Adversarially Constructed Evaluation Sets Are More Challenging, but May Not Be Fair

Nov 16, 2021

Types of Out-of-Distribution Texts and How to Detect Them

Sep 14, 2021

Comparing Test Sets with Item Response Theory

Jun 01, 2021

Does Putting a Linguist in the Loop Improve NLU Data Collection?

Apr 15, 2021

Counterfactually-Augmented SNLI Training Data Does Not Yield Better Generalization Than Unaugmented Data

Oct 09, 2020

Precise Task Formalization Matters in Winograd Schema Evaluations

Oct 08, 2020