Carlos Scheidegger

Persistent Classification: A New Approach to Stability of Data and Adversarial Examples

Apr 11, 2024

UnProjection: Leveraging Inverse-Projections for Visual Analytics of High-Dimensional Data

Nov 02, 2021

Comparing Deep Neural Nets with UMAP Tour

Oct 18, 2021

Problems with Shapley-value-based explanations as feature importance measures

Feb 25, 2020

Disentangling Influence: Using Disentangled Representations to Audit Model Predictions

Jun 20, 2019

Assessing the Local Interpretability of Machine Learning Models

Feb 09, 2019

Fairness in representation: quantifying stereotyping as a representational harm

Jan 28, 2019

A comparative study of fairness-enhancing interventions in machine learning

Feb 13, 2018

Runaway Feedback Loops in Predictive Policing

Dec 22, 2017

Auditing Black-box Models for Indirect Influence

Nov 30, 2016