Ben Packer

FRAPPÉ: A Post-Processing Framework for Group Fairness Regularization

Dec 05, 2023
Alexandru Ţifrea, Preethi Lahoti, Ben Packer, Yoni Halpern, Ahmad Beirami, Flavien Prost

Improving Diversity of Demographic Representation in Large Language Models via Collective-Critiques and Self-Voting

Oct 25, 2023
Preethi Lahoti, Nicholas Blumm, Xiao Ma, Raghavendra Kotikalapudi, Sahitya Potluri, Qijun Tan, Hansa Srinivasan, Ben Packer, Ahmad Beirami, Alex Beutel, Jilin Chen

Towards A Scalable Solution for Improving Multi-Group Fairness in Compositional Classification

Jul 11, 2023
James Atwood, Tina Tian, Ben Packer, Meghana Deodhar, Jilin Chen, Alex Beutel, Flavien Prost, Ahmad Beirami

Improving Classifier Robustness through Active Generation of Pairwise Counterfactuals

May 22, 2023
Ananth Balashankar, Xuezhi Wang, Yao Qin, Ben Packer, Nithum Thain, Jilin Chen, Ed H. Chi, Alex Beutel

Striving for data-model efficiency: Identifying data externalities on group performance

Nov 11, 2022
Esther Rolf, Ben Packer, Alex Beutel, Fernando Diaz

Simpson's Paradox in Recommender Fairness: Reconciling differences between per-user and aggregated evaluations

Oct 14, 2022
Flavien Prost, Ben Packer, Jilin Chen, Li Wei, Pierre Kremp, Nicholas Blumm, Susan Wang, Tulsee Doshi, Tonia Osadebe, Lukasz Heldt, Ed H. Chi, Alex Beutel

Flexible text generation for counterfactual fairness probing

Jun 28, 2022
Zee Fryer, Vera Axelrod, Ben Packer, Alex Beutel, Jilin Chen, Kellie Webster

Causally-motivated Shortcut Removal Using Auxiliary Labels

Jun 03, 2021
Maggie Makar, Ben Packer, Dan Moldovan, Davis Blalock, Yoni Halpern, Alexander D'Amour

CAT-Gen: Improving Robustness in NLP Models via Controlled Adversarial Text Generation

Oct 05, 2020
Tianlu Wang, Xuezhi Wang, Yao Qin, Ben Packer, Kang Li, Jilin Chen, Alex Beutel, Ed Chi
