Krishnaram Kenthapadi

Are Two Heads the Same as One? Identifying Disparate Treatment in Fair Neural Networks
Apr 09, 2022

Diverse Counterfactual Explanations for Anomaly Detection in Time Series
Mar 21, 2022

COPA: Certifying Robust Policies for Offline Reinforcement Learning against Poisoning Attacks
Mar 16, 2022

Human-Algorithm Collaboration: Achieving Complementarity and Avoiding Unfairness
Feb 17, 2022

Designing Closed Human-in-the-loop Deferral Pipelines
Feb 09, 2022

More Than Words: Towards Better Quality Interpretations of Text Classifiers
Dec 23, 2021

Amazon SageMaker Model Monitor: A System for Real-Time Insights into Deployed Machine Learning Models
Dec 13, 2021

Amazon SageMaker Clarify: Machine Learning Bias Detection and Explainability in the Cloud
Sep 07, 2021

Multiaccurate Proxies for Downstream Fairness
Jul 09, 2021

On the Lack of Robust Interpretability of Neural Text Classifiers
Jun 08, 2021