Michele Donini

Explaining Probabilistic Models with Distributional Values

Feb 15, 2024
Luca Franceschi, Michele Donini, Cédric Archambeau, Matthias Seeger

Geographical Erasure in Language Generation

Oct 23, 2023
Pola Schwöbel, Jacek Golebiowski, Michele Donini, Cédric Archambeau, Danish Pruthi

Efficient fair PCA for fair representation learning

Feb 26, 2023
Matthäus Kleindessner, Michele Donini, Chris Russell, Muhammad Bilal Zafar

Fortuna: A Library for Uncertainty Quantification in Deep Learning

Feb 08, 2023
Gianluca Detommaso, Alberto Gasparin, Michele Donini, Matthias Seeger, Andrew Gordon Wilson, Cedric Archambeau

Diverse Counterfactual Explanations for Anomaly Detection in Time Series

Mar 21, 2022
Deborah Sulem, Michele Donini, Muhammad Bilal Zafar, Francois-Xavier Aubet, Jan Gasthaus, Tim Januschowski, Sanjiv Das, Krishnaram Kenthapadi, Cedric Archambeau

More Than Words: Towards Better Quality Interpretations of Text Classifiers

Dec 23, 2021
Muhammad Bilal Zafar, Philipp Schmidt, Michele Donini, Cédric Archambeau, Felix Biessmann, Sanjiv Ranjan Das, Krishnaram Kenthapadi

Amazon SageMaker Model Monitor: A System for Real-Time Insights into Deployed Machine Learning Models

Dec 13, 2021
David Nigenda, Zohar Karnin, Muhammad Bilal Zafar, Raghu Ramesha, Alan Tan, Michele Donini, Krishnaram Kenthapadi

Amazon SageMaker Clarify: Machine Learning Bias Detection and Explainability in the Cloud

Sep 07, 2021
Michaela Hardt, Xiaoguang Chen, Xiaoyi Cheng, Michele Donini, Jason Gelman, Satish Gollaprolu, John He, Pedro Larroy, Xinyu Liu, Nick McCarthy, Ashish Rathi, Scott Rees, Ankit Siva, ErhYuan Tsai, Keerthan Vasist, Pinar Yilmaz, Muhammad Bilal Zafar, Sanjiv Das, Kevin Haas, Tyler Hill, Krishnaram Kenthapadi

Multi-objective Asynchronous Successive Halving

Jun 23, 2021
Robin Schmucker, Michele Donini, Muhammad Bilal Zafar, David Salinas, Cédric Archambeau
