Jilin Chen

Let's Do a Thought Experiment: Using Counterfactuals to Improve Moral Reasoning
Jun 25, 2023
Xiao Ma, Swaroop Mishra, Ahmad Beirami, Alex Beutel, Jilin Chen

Improving Classifier Robustness through Active Generation of Pairwise Counterfactuals
May 22, 2023
Ananth Balashankar, Xuezhi Wang, Yao Qin, Ben Packer, Nithum Thain, Jilin Chen, Ed H. Chi, Alex Beutel

Investigating Ensemble Methods for Model Robustness Improvement of Text Classifiers
Oct 28, 2022
Jieyu Zhao, Xuezhi Wang, Yao Qin, Jilin Chen, Kai-Wei Chang

A Human-ML Collaboration Framework for Improving Video Content Reviews
Oct 18, 2022
Meghana Deodhar, Xiao Ma, Yixin Cai, Alex Koes, Alex Beutel, Jilin Chen

Simpson's Paradox in Recommender Fairness: Reconciling differences between per-user and aggregated evaluations
Oct 14, 2022
Flavien Prost, Ben Packer, Jilin Chen, Li Wei, Pierre Kremp, Nicholas Blumm, Susan Wang, Tulsee Doshi, Tonia Osadebe, Lukasz Heldt, Ed H. Chi, Alex Beutel

Flexible text generation for counterfactual fairness probing
Jun 28, 2022
Zee Fryer, Vera Axelrod, Ben Packer, Alex Beutel, Jilin Chen, Kellie Webster

Understanding and Improving Fairness-Accuracy Trade-offs in Multi-Task Learning
Jun 04, 2021
Yuyan Wang, Xuezhi Wang, Alex Beutel, Flavien Prost, Jilin Chen, Ed H. Chi

Measuring Model Fairness under Noisy Covariates: A Theoretical Perspective
May 20, 2021
Flavien Prost, Pranjal Awasthi, Nick Blumm, Aditee Kumthekar, Trevor Potter, Li Wei, Xuezhi Wang, Ed H. Chi, Jilin Chen, Alex Beutel

Measuring Recommender System Effects with Simulated Users
Jan 12, 2021
Sirui Yao, Yoni Halpern, Nithum Thain, Xuezhi Wang, Kang Lee, Flavien Prost, Ed H. Chi, Jilin Chen, Alex Beutel

Measuring and Reducing Gendered Correlations in Pre-trained Models
Oct 12, 2020
Kellie Webster, Xuezhi Wang, Ian Tenney, Alex Beutel, Emily Pitler, Ellie Pavlick, Jilin Chen, Slav Petrov
