
Dylan Slack

Defuse: Harnessing Unrestricted Adversarial Examples for Debugging Models Beyond Test Accuracy

Feb 11, 2021

Differentially Private Language Models Benefit from Public Pre-training

Sep 13, 2020

How Much Should I Trust You? Modeling Uncertainty of Black Box Explanations

Aug 11, 2020

Fair Meta-Learning: Learning How to Learn Fairly

Nov 06, 2019

How can we fool LIME and SHAP? Adversarial Attacks on Post hoc Explanation Methods

Nov 06, 2019

Fairness Warnings and Fair-MAML: Learning Fairly with Minimal Data

Aug 24, 2019

Assessing the Local Interpretability of Machine Learning Models

Feb 09, 2019