Gil Fidel

Improving Interpretability via Regularization of Neural Activation Sensitivity

Nov 16, 2022
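
Going only by the title, this work appears to regularize how sensitive hidden activations are to perturbations of the input in order to make the network easier to interpret. The sketch below is a minimal PyTorch illustration of such a penalty (an input-gradient norm on a hidden layer); the architecture, the layer choice, the scalar proxy used for the Jacobian norm, and the weight `lam` are all assumptions made for illustration, not the paper's actual formulation.

```python
# Hypothetical sketch (not the paper's method): add a penalty on the
# sensitivity of a hidden layer's activations to the input, alongside the
# usual task loss. The gradient of the sum of squared activations is used
# as a cheap scalar proxy for the activation-Jacobian norm.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 0.1  # assumed regularization strength


def sensitivity_penalty(x, hidden):
    # d/dx of sum(hidden^2): a single extra backward pass instead of one per neuron.
    grad = torch.autograd.grad(hidden.pow(2).sum(), x, create_graph=True)[0]
    return grad.pow(2).sum(dim=tuple(range(1, grad.dim()))).mean()


def train_step(x, y):
    x = x.clone().requires_grad_(True)
    hidden = model[2](model[1](model[0](x)))   # activations after the first ReLU
    logits = model[3](hidden)
    loss = criterion(logits, y) + lam * sensitivity_penalty(x, hidden)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

A full Jacobian-norm penalty would need one backward pass per hidden unit; the scalar proxy above keeps the overhead at a single extra pass per step.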

Adversarial robustness via stochastic regularization of neural activation sensitivity

Sep 23, 2020
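
This title describes a stochastic regularizer of the same activation-sensitivity quantity, aimed at adversarial robustness. One common way to make such a penalty stochastic is a Hutchinson-style estimator: project the activations onto a fresh random vector each step and penalize the input-gradient of that scalar, which in expectation recovers the squared Frobenius norm of the activation Jacobian. The sketch below illustrates that estimator only; it is an assumption based on the title, not the paper's definition, and `stochastic_sensitivity_penalty` is a hypothetical name.

```python
# Hypothetical sketch (assumption from the title, not the paper's method):
# Hutchinson-style stochastic estimate of activation sensitivity. Project
# the hidden activations onto a random Gaussian vector v and penalize the
# input-gradient of that scalar; averaged over steps this estimates the
# squared Frobenius norm of d(hidden)/d(input).
import torch


def stochastic_sensitivity_penalty(x, hidden):
    v = torch.randn_like(hidden)              # fresh random probe each call
    proj = (hidden * v).sum()                 # scalar <hidden, v>, summed over the batch
    grad = torch.autograd.grad(proj, x, create_graph=True)[0]
    return grad.pow(2).sum(dim=tuple(range(1, grad.dim()))).mean()
```

This drops into the `train_step` sketch above in place of `sensitivity_penalty`; the random projection gives an unbiased estimate of the full Jacobian norm at the same single-backward-pass cost, at the price of some gradient noise.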

When Explainability Meets Adversarial Learning: Detecting Adversarial Examples using SHAP Signatures

Sep 08, 2019
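
The title indicates that SHAP attributions computed for a trained classifier serve as "signatures" on which a second, binary detector separates clean from adversarial inputs. Below is a minimal sketch of that kind of pipeline using `shap.DeepExplainer` and a scikit-learn logistic regression; which representation the SHAP values are taken from, the choice of detector, and the helper names are assumptions for illustration, not the paper's setup.

```python
# Hypothetical sketch (not the paper's exact pipeline): flatten per-class
# SHAP attributions of a trained model into feature vectors and fit a
# binary clean-vs-adversarial detector on them.
import numpy as np
import shap
from sklearn.linear_model import LogisticRegression


def shap_signatures(model, background, inputs):
    # DeepExplainer approximates SHAP values for a deep model relative to a
    # background batch; for a multi-class model it returns one attribution
    # array per class (older shap) or a single array with a class axis.
    explainer = shap.DeepExplainer(model, background)
    sv = explainer.shap_values(inputs)
    if isinstance(sv, list):
        sv = np.stack(sv, axis=-1)
    return sv.reshape(len(inputs), -1)


def fit_detector(model, background, x_clean, x_adv):
    # x_clean / x_adv: batches of clean and adversarially perturbed inputs
    # (how the adversarial examples are generated is outside this sketch).
    feats = np.concatenate([shap_signatures(model, background, x_clean),
                            shap_signatures(model, background, x_adv)])
    labels = np.concatenate([np.zeros(len(x_clean)), np.ones(len(x_adv))])
    return LogisticRegression(max_iter=1000).fit(feats, labels)
```

At test time, `detector.predict(shap_signatures(model, background, x))` would flag inputs whose attribution pattern looks adversarial under this sketch.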