Finale Doshi-Velez

Decomposition of Uncertainty in Bayesian Deep Learning for Efficient and Risk-sensitive Learning

Jun 15, 2018

Evaluating Reinforcement Learning Algorithms in Observational Health Settings

May 31, 2018

A particle-based variational approach to Bayesian Non-negative Matrix Factorization

Mar 16, 2018

Unsupervised Grammar Induction with Depth-bounded PCFG

Feb 26, 2018

How do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation

Feb 02, 2018

Prediction-Constrained Topic Models for Antidepressant Recommendation

Dec 01, 2017

Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients

Nov 26, 2017

Accountability of AI Under the Law: The Role of Explanation

Nov 21, 2017

Beyond Sparsity: Tree Regularization of Deep Models for Interpretability

Nov 16, 2017

Uncertainty Decomposition in Bayesian Neural Networks with Latent Variables

Nov 11, 2017