Andrew Slavin Ross

Learning Predictive and Interpretable Timeseries Summaries from ICU Data
Sep 22, 2021
Nari Johnson, Sonali Parbhoo, Andrew Slavin Ross, Finale Doshi-Velez

Benchmarks, Algorithms, and Metrics for Hierarchical Disentanglement
Feb 09, 2021
Andrew Slavin Ross, Finale Doshi-Velez

Evaluating the Interpretability of Generative Models by Interactive Reconstruction
Feb 02, 2021
Andrew Slavin Ross, Nina Chen, Elisa Zhao Hang, Elena L. Glassman, Finale Doshi-Velez

Ensembles of Locally Independent Prediction Models
Nov 27, 2019
Andrew Slavin Ross, Weiwei Pan, Leo Anthony Celi, Finale Doshi-Velez

Tackling Climate Change with Machine Learning
Jun 10, 2019
David Rolnick, Priya L. Donti, Lynn H. Kaack, Kelly Kochanski, Alexandre Lacoste, Kris Sankaran, Andrew Slavin Ross, Nikola Milojevic-Dupont, Natasha Jaques, Anna Waldman-Brown, Alexandra Luccioni, Tegan Maharaj, Evan D. Sherwin, S. Karthik Mukkavilli, Konrad P. Kording, Carla Gomes, Andrew Y. Ng, Demis Hassabis, John C. Platt, Felix Creutzig, Jennifer Chayes, Yoshua Bengio

Human-in-the-Loop Interpretability Prior
Oct 30, 2018
Isaac Lage, Andrew Slavin Ross, Been Kim, Samuel J. Gershman, Finale Doshi-Velez

Training Machine Learning Models by Regularizing their Explanations
Sep 29, 2018
Andrew Slavin Ross

Learning Qualitatively Diverse and Interpretable Rules for Classification
Jul 19, 2018
Andrew Slavin Ross, Weiwei Pan, Finale Doshi-Velez

Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients
Nov 26, 2017
Andrew Slavin Ross, Finale Doshi-Velez
