
Umang Bhatt

Perspectives on Incorporating Expert Feedback into Model Updates

May 13, 2022

On the Utility of Prediction Sets in Human-AI Teams

May 03, 2022

Approximating Full Conformal Prediction at Scale via Influence Functions

Feb 02, 2022

Diverse, Global and Amortised Counterfactual Explanations for Uncertainty Estimates

Dec 09, 2021

DIVINE: Diverse Influential Training Points for Data Visualization and Model Refinement

Jul 13, 2021

Do Concept Bottleneck Models Learn as Intended?

May 10, 2021

δ-CLUE: Diverse Sets of Explanations for Uncertainty Estimates

May 08, 2021

Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty

Nov 15, 2020

On the Fairness of Causal Algorithmic Recourse

Oct 14, 2020

Machine Learning Explainability for External Stakeholders

Jul 10, 2020