
Kush R. Varshney

Empathy and the Right to Be an Exception: What LLMs Can and Cannot Do

Jan 25, 2024

Decolonial AI Alignment: Viśesadharma, Argument, and Artistic Expression

Sep 10, 2023

Keeping Up with the Language Models: Robustness-Bias Interplay in NLI Data and Models

May 22, 2023

Towards Healthy AI: Large Language Models Need Therapists Too

Apr 02, 2023

Function Composition in Trustworthy Machine Learning: Implementation Choices, Insights, and Questions

Feb 17, 2023

Fair Infinitesimal Jackknife: Mitigating the Influence of Biased Training Data Points Without Refitting

Dec 13, 2022

On the Safety of Interpretable Machine Learning: A Maximum Deviation Approach

Nov 02, 2022

Equi-Tuning: Group Equivariant Fine-Tuning of Pretrained Models

Oct 13, 2022

Minimax AUC Fairness: Efficient Algorithm with Provable Convergence

Aug 22, 2022

Differentially Private SGDA for Minimax Problems

Jan 22, 2022