
Sanghamitra Dutta

Quantifying Prediction Consistency Under Model Multiplicity in Tabular LLMs

Jul 04, 2024

Quantifying Spuriousness of Biased Datasets Using Partial Information Decomposition

Jun 29, 2024

A Unified View of Group Fairness Tradeoffs Using Partial Information Decomposition

Jun 07, 2024

Model Reconstruction Using Counterfactual Explanations: Mitigating the Decision Boundary Shift

May 08, 2024

REFRESH: Responsible and Efficient Feature Reselection Guided by SHAP Values

Mar 13, 2024

Demystifying Local and Global Fairness Trade-offs in Federated Learning Using Partial Information Decomposition

Jul 21, 2023

Robust Counterfactual Explanations for Neural Networks With Probabilistic Guarantees

May 19, 2023

Hyper-parameter Tuning for Fair Classification without Sensitive Attribute Access

Feb 02, 2023

Can Querying for Bias Leak Protected Attributes? Achieving Privacy With Smooth Sensitivity

Nov 03, 2022

Robust Counterfactual Explanations for Tree-Based Ensembles

Jul 17, 2022