David Wagner

Learning Security Classifiers with Verified Global Robustness Properties
May 24, 2021
Yizheng Chen, Shiqi Wang, Yue Qin, Xiaojing Liao, Suman Jana, David Wagner

Fighting Gradients with Gradients: Dynamic Defenses against Adversarial Attacks
May 18, 2021
Dequan Wang, An Ju, Evan Shelhamer, David Wagner, Trevor Darrell

Model-Agnostic Defense for Lane Detection against Adversarial Attack
Mar 01, 2021
Henry Xu, An Ju, David Wagner
* 6 pages, 6 figures, 3 tables. Part of AutoSec 2021 proceedings

Adversarial Examples for $k$-Nearest Neighbor Classifiers Based on Higher-Order Voronoi Diagrams
Nov 19, 2020
Chawin Sitawarin, Evgenios M. Kornaropoulos, Dawn Song, David Wagner

Minority Reports Defense: Defending Against Adversarial Patches
Apr 28, 2020
Michael McCoyd, Won Park, Steven Chen, Neil Shah, Ryan Roggenkemper, Minjune Hwang, Jason Xinyu Liu, David Wagner
* 9 pages, 5 figures

Improving Adversarial Robustness Through Progressive Hardening
Mar 18, 2020
Chawin Sitawarin, Supriyo Chakraborty, David Wagner
* Preprint. Under review

Minimum-Norm Adversarial Examples on KNN and KNN-Based Models
Mar 14, 2020
Chawin Sitawarin, David Wagner
* 3rd Deep Learning and Security Workshop (co-located with the 41st IEEE Symposium on Security and Privacy)

Stateful Detection of Black-Box Adversarial Attacks
Jul 12, 2019
Steven Chen, Nicholas Carlini, David Wagner

Defending Against Adversarial Examples with K-Nearest Neighbor
Jun 23, 2019
Chawin Sitawarin, David Wagner
* Preprint

On the Robustness of Deep K-Nearest Neighbors
Mar 20, 2019
Chawin Sitawarin, David Wagner
* Published at Deep Learning and Security Workshop 2019 (IEEE S&P)

Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
Jul 31, 2018
Anish Athalye, Nicholas Carlini, David Wagner
* ICML 2018. Source code at https://github.com/anishathalye/obfuscated-gradients

Audio Adversarial Examples: Targeted Attacks on Speech-to-Text
Mar 30, 2018
Nicholas Carlini, David Wagner

MagNet and "Efficient Defenses Against Adversarial Attacks" are Not Robust to Adversarial Examples
Nov 22, 2017
Nicholas Carlini, David Wagner

Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods
Nov 01, 2017
Nicholas Carlini, David Wagner

Towards Evaluating the Robustness of Neural Networks
Mar 22, 2017
Nicholas Carlini, David Wagner

Spoofing 2D Face Detection: Machines See People Who Aren't There
Aug 06, 2016
Michael McCoyd, David Wagner
* 9 pages, 19 figures, submitted to AISec

Defensive Distillation is Not Robust to Adversarial Examples
Jul 14, 2016
Nicholas Carlini, David Wagner