Are Your Explanations Reliable? Investigating the Stability of LIME in Explaining Textual Classification Models via Adversarial Perturbation

May 21, 2023
Christopher Burger, Lingwei Chen, Thai Le

Figures 1–4: Are Your Explanations Reliable? Investigating the Stability of LIME in Explaining Textual Classification Models via Adversarial Perturbation