Sarah Tan

Error Discovery by Clustering Influence Embeddings
Dec 07, 2023
Fulton Wang, Julius Adebayo, Sarah Tan, Diego Garcia-Olano, Narine Kokhlikyan

Missing Values and Imputation in Healthcare Data: Can Interpretable Machine Learning Help?
Apr 23, 2023
Zhi Chen, Sarah Tan, Urszula Chajewska, Cynthia Rudin, Rich Caruana

Practical Policy Optimization with Personalized Experimentation
Mar 30, 2023
Mia Garrard, Hanson Wang, Ben Letham, Shaun Singh, Abbas Kazerouni, Sarah Tan, Zehui Wang, Yin Huang, Yichun Hu, Chad Zhou, Norm Zhou, Eytan Bakshy

Efficient Heterogeneous Treatment Effect Estimation With Multiple Experiments and Multiple Outcomes
Jun 10, 2022
Leon Yao, Caroline Lo, Israel Nir, Sarah Tan, Ariel Evnine, Adam Lerer, Alex Peysakhovich

Distilling Heterogeneity: From Explanations of Heterogeneous Treatment Effect Models to Interpretable Policies
Nov 05, 2021
Han Wu, Sarah Tan, Weiwei Li, Mia Garrard, Adam Obeng, Drew Dimmery, Shaun Singh, Hanson Wang, Daniel Jiang, Eytan Bakshy

How Interpretable and Trustworthy are GAMs?
Jun 11, 2020
Chun-Hao Chang, Sarah Tan, Ben Lengerich, Anna Goldenberg, Rich Caruana

Purifying Interaction Effects with the Functional ANOVA: An Efficient Algorithm for Recovering Identifiable Additive Models
Nov 12, 2019
Benjamin Lengerich, Sarah Tan, Chun-Hao Chang, Giles Hooker, Rich Caruana

"Why Should You Trust My Explanation?" Understanding Uncertainty in LIME Explanations

Add code
Bookmark button
Alert button
Jun 04, 2019
Yujia Zhang, Kuangyan Song, Yiming Sun, Sarah Tan, Madeleine Udell

Interpretability is Harder in the Multiclass Setting: Axiomatic Interpretability for Multiclass Additive Models
Oct 22, 2018
Xuezhou Zhang, Sarah Tan, Paul Koch, Yin Lou, Urszula Chajewska, Rich Caruana

Distill-and-Compare: Auditing Black-Box Models Using Transparent Model Distillation
Oct 11, 2018
Sarah Tan, Rich Caruana, Giles Hooker, Yin Lou
