Sanjiv Das

Diverse Counterfactual Explanations for Anomaly Detection in Time Series

Mar 21, 2022
Deborah Sulem, Michele Donini, Muhammad Bilal Zafar, Francois-Xavier Aubet, Jan Gasthaus, Tim Januschowski, Sanjiv Das, Krishnaram Kenthapadi, Cédric Archambeau

Amazon SageMaker Clarify: Machine Learning Bias Detection and Explainability in the Cloud

Sep 07, 2021
Michaela Hardt, Xiaoguang Chen, Xiaoyi Cheng, Michele Donini, Jason Gelman, Satish Gollaprolu, John He, Pedro Larroy, Xinyu Liu, Nick McCarthy, Ashish Rathi, Scott Rees, Ankit Siva, ErhYuan Tsai, Keerthan Vasist, Pinar Yilmaz, Muhammad Bilal Zafar, Sanjiv Das, Kevin Haas, Tyler Hill, Krishnaram Kenthapadi

On the Lack of Robust Interpretability of Neural Text Classifiers

Jun 08, 2021
Muhammad Bilal Zafar, Michele Donini, Dylan Slack, Cédric Archambeau, Sanjiv Das, Krishnaram Kenthapadi
