W. James Murdoch

Interpretations are useful: penalizing explanations to align neural networks with prior knowledge
Oct 01, 2019
Laura Rieger, Chandan Singh, W. James Murdoch, Bin Yu

Disentangled Attribution Curves for Interpreting Random Forests and Boosted Trees
May 18, 2019
Summer Devlin, Chandan Singh, W. James Murdoch, Bin Yu

Interpretable machine learning: definitions, methods, and applications
Jan 14, 2019
W. James Murdoch, Chandan Singh, Karl Kumbier, Reza Abbasi-Asl, Bin Yu

Hierarchical interpretations for neural network predictions
Jun 14, 2018
Chandan Singh, W. James Murdoch, Bin Yu

Beyond Word Importance: Contextual Decomposition to Extract Interactions from LSTMs
Apr 27, 2018
W. James Murdoch, Peter J. Liu, Bin Yu

Automatic Rule Extraction from Long Short Term Memory Networks
Feb 24, 2017
W. James Murdoch, Arthur Szlam

Expanded Alternating Optimization of Nonconvex Functions with Applications to Matrix Factorization and Penalized Regression
Dec 12, 2014
W. James Murdoch, Mu Zhu
