Finale Doshi-Velez

Evaluating Reinforcement Learning Algorithms in Observational Health Settings

May 31, 2018
Omer Gottesman, Fredrik Johansson, Joshua Meier, Jack Dent, Donghun Lee, Srivatsan Srinivasan, Linying Zhang, Yi Ding, David Wihl, Xuefeng Peng, Jiayu Yao, Isaac Lage, Christopher Mosch, Li-wei H. Lehman, Matthieu Komorowski, Aldo Faisal, Leo Anthony Celi, David Sontag, Finale Doshi-Velez

A particle-based variational approach to Bayesian Non-negative Matrix Factorization

Mar 16, 2018
M. Arjumand Masood, Finale Doshi-Velez

Unsupervised Grammar Induction with Depth-bounded PCFG

Feb 26, 2018
Lifeng Jin, Finale Doshi-Velez, Timothy Miller, William Schuler, Lane Schwartz

How do Humans Understand Explanations from Machine Learning Systems? An Evaluation of the Human-Interpretability of Explanation

Feb 02, 2018
Menaka Narayanan, Emily Chen, Jeffrey He, Been Kim, Sam Gershman, Finale Doshi-Velez

Prediction-Constrained Topic Models for Antidepressant Recommendation

Dec 01, 2017
Michael C. Hughes, Gabriel Hope, Leah Weiner, Thomas H. McCoy, Roy H. Perlis, Erik B. Sudderth, Finale Doshi-Velez

Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients

Nov 26, 2017
Andrew Slavin Ross, Finale Doshi-Velez

Accountability of AI Under the Law: The Role of Explanation

Nov 21, 2017
Finale Doshi-Velez, Mason Kortz, Ryan Budish, Chris Bavitz, Sam Gershman, David O'Brien, Stuart Schieber, James Waldo, David Weinberger, Alexandra Wood

Beyond Sparsity: Tree Regularization of Deep Models for Interpretability

Nov 16, 2017
Mike Wu, Michael C. Hughes, Sonali Parbhoo, Maurizio Zazzi, Volker Roth, Finale Doshi-Velez

Uncertainty Decomposition in Bayesian Neural Networks with Latent Variables

Nov 11, 2017
Stefan Depeweg, José Miguel Hernández-Lobato, Finale Doshi-Velez, Steffen Udluft

Robust and Efficient Transfer Learning with Hidden-Parameter Markov Decision Processes

Oct 31, 2017
Taylor Killian, Samuel Daulton, George Konidaris, Finale Doshi-Velez
