Krishnaram Kenthapadi

Measuring Distributional Shifts in Text: The Advantage of Language Model-Based Embeddings

Dec 04, 2023

Designing Closed-Loop Models for Task Allocation

May 31, 2023

Towards the Use of Saliency Maps for Explaining Low-Quality Electrocardiograms to End Users

Jul 06, 2022

Visual Auditor: Interactive Visualization for Detection and Summarization of Model Biases

Jun 25, 2022

A Human-Centric Take on Model Monitoring

Jun 06, 2022

Are Two Heads the Same as One? Identifying Disparate Treatment in Fair Neural Networks

Apr 09, 2022

Diverse Counterfactual Explanations for Anomaly Detection in Time Series

Mar 21, 2022

COPA: Certifying Robust Policies for Offline Reinforcement Learning against Poisoning Attacks

Mar 16, 2022

Human-Algorithm Collaboration: Achieving Complementarity and Avoiding Unfairness

Feb 17, 2022

Designing Closed Human-in-the-loop Deferral Pipelines

Feb 09, 2022