Alex Beutel

Towards Robust Prompts on Vision-Language Models

Apr 17, 2023
Jindong Gu, Ahmad Beirami, Xuezhi Wang, Alex Beutel, Philip Torr, Yao Qin


What Are Effective Labels for Augmented Data? Improving Calibration and Robustness with AutoLabel

Feb 22, 2023
Yao Qin, Xuezhi Wang, Balaji Lakshminarayanan, Ed H. Chi, Alex Beutel


Effective Robustness against Natural Distribution Shifts for Models with Different Training Data

Feb 02, 2023
Zhouxing Shi, Nicholas Carlini, Ananth Balashankar, Ludwig Schmidt, Cho-Jui Hsieh, Alex Beutel, Yao Qin


Striving for data-model efficiency: Identifying data externalities on group performance

Nov 11, 2022
Esther Rolf, Ben Packer, Alex Beutel, Fernando Diaz


A Human-ML Collaboration Framework for Improving Video Content Reviews

Oct 18, 2022
Meghana Deodhar, Xiao Ma, Yixin Cai, Alex Koes, Alex Beutel, Jilin Chen


Simpson's Paradox in Recommender Fairness: Reconciling differences between per-user and aggregated evaluations

Oct 14, 2022
Flavien Prost, Ben Packer, Jilin Chen, Li Wei, Pierre Kremp, Nicholas Blumm, Susan Wang, Tulsee Doshi, Tonia Osadebe, Lukasz Heldt, Ed H. Chi, Alex Beutel


Flexible text generation for counterfactual fairness probing

Jun 28, 2022
Zee Fryer, Vera Axelrod, Ben Packer, Alex Beutel, Jilin Chen, Kellie Webster


Understanding and Improving Robustness of Vision Transformers through Patch-based Negative Augmentation

Oct 15, 2021
Yao Qin, Chiyuan Zhang, Ting Chen, Balaji Lakshminarayanan, Alex Beutel, Xuezhi Wang


Understanding and Improving Fairness-Accuracy Trade-offs in Multi-Task Learning

Jun 04, 2021
Yuyan Wang, Xuezhi Wang, Alex Beutel, Flavien Prost, Jilin Chen, Ed H. Chi


Measuring Model Fairness under Noisy Covariates: A Theoretical Perspective

May 20, 2021
Flavien Prost, Pranjal Awasthi, Nick Blumm, Aditee Kumthekar, Trevor Potter, Li Wei, Xuezhi Wang, Ed H. Chi, Jilin Chen, Alex Beutel
