David Wagner

Minimum-Norm Adversarial Examples on KNN and KNN-Based Models

Mar 14, 2020

Stateful Detection of Black-Box Adversarial Attacks

Jul 12, 2019

Defending Against Adversarial Examples with K-Nearest Neighbor

Jun 23, 2019

On the Robustness of Deep K-Nearest Neighbors

Mar 20, 2019

Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples

Jul 31, 2018

Audio Adversarial Examples: Targeted Attacks on Speech-to-Text

Mar 30, 2018

MagNet and "Efficient Defenses Against Adversarial Attacks" are Not Robust to Adversarial Examples

Nov 22, 2017

Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods

Nov 01, 2017

Towards Evaluating the Robustness of Neural Networks

Mar 22, 2017

Spoofing 2D Face Detection: Machines See People Who Aren't There

Aug 06, 2016