Amit Deshpande

On the Power of Randomization in Fair Classification and Representation

Jun 05, 2024

NICE: To Optimize In-Context Examples or Not?

Feb 16, 2024

How Far Can Fairness Constraints Help Recover From Biased Data?

Dec 16, 2023

Rethinking Robustness of Model Attributions

Dec 16, 2023

Improved Outlier Robust Seeding for k-means

Sep 06, 2023

Optimizing Group-Fair Plackett-Luce Ranking Models for Relevance and Ex-Post Fairness

Aug 25, 2023

Sampling Individually-Fair Rankings that are Always Group Fair

Jun 21, 2023

Causal Effect Regularization: Automated Detection and Removal of Spurious Attributes

Jun 19, 2023

On Testing and Comparing Fair classifiers under Data Bias

Feb 12, 2023

Identifying, Measuring, and Mitigating Individual Unfairness for Supervised Learning Models and Application to Credit Risk Models

Nov 11, 2022