Dylan Slack

Differentially Private Language Models Benefit from Public Pre-training
Sep 13, 2020
Gavin Kerrigan, Dylan Slack, Jens Tuyls

How Much Should I Trust You? Modeling Uncertainty of Black Box Explanations
Aug 11, 2020
Dylan Slack, Sophie Hilgard, Sameer Singh, Himabindu Lakkaraju

Fair Meta-Learning: Learning How to Learn Fairly
Nov 06, 2019
Dylan Slack, Sorelle Friedler, Emile Givental

How can we fool LIME and SHAP? Adversarial Attacks on Post hoc Explanation Methods
Nov 06, 2019
Dylan Slack, Sophie Hilgard, Emily Jia, Sameer Singh, Himabindu Lakkaraju

Fairness Warnings and Fair-MAML: Learning Fairly with Minimal Data
Aug 24, 2019
Dylan Slack, Sorelle Friedler, Emile Givental

Assessing the Local Interpretability of Machine Learning Models
Feb 09, 2019
Sorelle A. Friedler, Chitradeep Dutta Roy, Carlos Scheidegger, Dylan Slack
