
Ananth Balashankar

Reuse Your Rewards: Reward Model Transfer for Zero-Shot Cross-Lingual Alignment

Apr 18, 2024
Zhaofeng Wu, Ananth Balashankar, Yoon Kim, Jacob Eisenstein, Ahmad Beirami

Improving Few-shot Generalization of Safety Classifiers via Data Augmented Parameter-Efficient Fine-Tuning

Oct 25, 2023
Ananth Balashankar, Xiao Ma, Aradhana Sinha, Ahmad Beirami, Yao Qin, Jilin Chen, Alex Beutel

Break it, Imitate it, Fix it: Robustness by Generating Human-Like Attacks

Oct 25, 2023
Aradhana Sinha, Ananth Balashankar, Ahmad Beirami, Thi Avrahami, Jilin Chen, Alex Beutel

Improving Classifier Robustness through Active Generation of Pairwise Counterfactuals

May 22, 2023
Ananth Balashankar, Xuezhi Wang, Yao Qin, Ben Packer, Nithum Thain, Jilin Chen, Ed H. Chi, Alex Beutel

Effective Robustness against Natural Distribution Shifts for Models with Different Training Data

Feb 02, 2023
Zhouxing Shi, Nicholas Carlini, Ananth Balashankar, Ludwig Schmidt, Cho-Jui Hsieh, Alex Beutel, Yao Qin

Fine-grained prediction of food insecurity using news streams

Nov 17, 2021
Ananth Balashankar, Lakshminarayanan Subramanian, Samuel P. Fraiberger

Beyond The Text: Analysis of Privacy Statements through Syntactic and Semantic Role Labeling

Oct 01, 2020
Yan Shvartzshnaider, Ananth Balashankar, Vikas Patidar, Thomas Wies, Lakshminarayanan Subramanian

What is Fair? Exploring Pareto-Efficiency for Fairness Constrained Classifiers

Oct 30, 2019
Ananth Balashankar, Alyssa Lees, Chris Welty, Lakshminarayanan Subramanian

Fairness Sample Complexity and the Case for Human Intervention

Oct 24, 2019
Ananth Balashankar, Alyssa Lees
