Weixin Liang

Can large language models provide useful feedback on research papers? A large-scale empirical analysis

Oct 03, 2023

OpenDataVal: a Unified Benchmark for Data Valuation

Jun 18, 2023

On the nonlinear correlation of ML performance between data subpopulations

May 04, 2023

GPT detectors are biased against non-native English writers

Apr 18, 2023

SEAL : Interactive Tool for Systematic Error Analysis and Labeling

Oct 11, 2022

Data Budgeting for Machine Learning

Oct 03, 2022

GSCLIP : A Framework for Explaining Distribution Shifts in Natural Language

Jun 30, 2022

Disparities in Dermatology AI Performance on a Diverse, Curated Clinical Image Set

Mar 15, 2022

Mind the Gap: Understanding the Modality Gap in Multi-modal Contrastive Representation Learning

Mar 03, 2022

MetaShift: A Dataset of Datasets for Evaluating Contextual Distribution Shifts and Training Conflicts

Feb 14, 2022