
Jilin Chen

Towards A Scalable Solution for Improving Multi-Group Fairness in Compositional Classification

Jul 11, 2023

Let's Do a Thought Experiment: Using Counterfactuals to Improve Moral Reasoning

Jun 25, 2023

Improving Classifier Robustness through Active Generation of Pairwise Counterfactuals

May 22, 2023

Investigating Ensemble Methods for Model Robustness Improvement of Text Classifiers

Oct 28, 2022

A Human-ML Collaboration Framework for Improving Video Content Reviews

Oct 18, 2022

Simpson's Paradox in Recommender Fairness: Reconciling differences between per-user and aggregated evaluations

Oct 14, 2022

Flexible text generation for counterfactual fairness probing

Jun 28, 2022

Understanding and Improving Fairness-Accuracy Trade-offs in Multi-Task Learning

Jun 04, 2021

Measuring Model Fairness under Noisy Covariates: A Theoretical Perspective

May 20, 2021

Measuring Recommender System Effects with Simulated Users

Jan 12, 2021