Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses

Dec 30, 2020
Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, Dawn Song, Aleksander Madry, Bo Li, Tom Goldstein



Unadversarial Examples: Designing Objects for Robust Vision

Dec 22, 2020
Hadi Salman, Andrew Ilyas, Logan Engstrom, Sai Vemprala, Aleksander Madry, Ashish Kapoor



Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses

Dec 18, 2020
Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, Dawn Song, Aleksander Madry, Bo Li, Tom Goldstein



BREEDS: Benchmarks for Subpopulation Shift

Aug 11, 2020
Shibani Santurkar, Dimitris Tsipras, Aleksander Madry



Do Adversarially Robust ImageNet Models Transfer Better?

Jul 16, 2020
Hadi Salman, Andrew Ilyas, Logan Engstrom, Ashish Kapoor, Aleksander Madry



Noise or Signal: The Role of Image Backgrounds in Object Recognition

Jun 17, 2020
Kai Xiao, Logan Engstrom, Andrew Ilyas, Aleksander Madry



Implementation Matters in Deep Policy Gradients: A Case Study on PPO and TRPO

May 25, 2020
Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Firdaus Janoos, Larry Rudolph, Aleksander Madry

* ICLR 2020 version. arXiv admin note: text overlap with arXiv:1811.02553 


From ImageNet to Image Classification: Contextualizing Progress on Benchmarks

May 22, 2020
Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Andrew Ilyas, Aleksander Madry



Identifying Statistical Bias in Dataset Replication

May 19, 2020
Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Jacob Steinhardt, Aleksander Madry



The Two Regimes of Deep Network Training

Feb 24, 2020
Guillaume Leclerc, Aleksander Madry

* 14 pages (5 of which are appendix), 14 figures 


On Adaptive Attacks to Adversarial Example Defenses

Feb 19, 2020
Florian Tramer, Nicholas Carlini, Wieland Brendel, Aleksander Madry



Label-Consistent Backdoor Attacks

Dec 06, 2019
Alexander Turner, Dimitris Tsipras, Aleksander Madry



Computer Vision with a Single (Robust) Classifier

Jun 06, 2019
Shibani Santurkar, Dimitris Tsipras, Brandon Tran, Andrew Ilyas, Logan Engstrom, Aleksander Madry



Learning Perceptually-Aligned Representations via Adversarial Robustness

Jun 03, 2019
Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Brandon Tran, Aleksander Madry



Adversarial Examples Are Not Bugs, They Are Features

May 07, 2019
Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, Aleksander Madry



On Evaluating Adversarial Robustness

Feb 20, 2019
Nicholas Carlini, Anish Athalye, Nicolas Papernot, Wieland Brendel, Jonas Rauber, Dimitris Tsipras, Ian Goodfellow, Aleksander Madry, Alexey Kurakin

* Living document; source available at https://github.com/evaluating-adversarial-robustness/adv-eval-paper/ 


Are Deep Policy Gradient Algorithms Truly Policy Gradient Algorithms?

Dec 02, 2018
Andrew Ilyas, Logan Engstrom, Shibani Santurkar, Dimitris Tsipras, Firdaus Janoos, Larry Rudolph, Aleksander Madry



Spectral Signatures in Backdoor Attacks

Nov 01, 2018
Brandon Tran, Jerry Li, Aleksander Madry

* 16 pages, accepted to NIPS 2018 


How Does Batch Normalization Help Optimization?

Oct 27, 2018
Shibani Santurkar, Dimitris Tsipras, Andrew Ilyas, Aleksander Madry

* To appear in NIPS'18 


Robustness May Be at Odds with Accuracy

Oct 11, 2018
Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, Aleksander Madry



Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability

Sep 26, 2018
Kai Y. Xiao, Vincent Tjeng, Nur Muhammad Shafiullah, Aleksander Madry



Prior Convictions: Black-Box Adversarial Attacks with Bandits and Priors

Jul 20, 2018
Andrew Ilyas, Logan Engstrom, Aleksander Madry



On the Limitations of First-Order Approximation in GAN Dynamics

Jun 03, 2018
Jerry Li, Aleksander Madry, John Peebles, Ludwig Schmidt

* 18 pages, 4 figures, accepted to ICML 2018 


A Rotation and a Translation Suffice: Fooling CNNs with Simple Transformations

Feb 13, 2018
Logan Engstrom, Brandon Tran, Dimitris Tsipras, Ludwig Schmidt, Aleksander Madry

* Preliminary version appeared in the NIPS 2017 Workshop on Machine Learning and Computer Security 


Towards Deep Learning Models Resistant to Adversarial Attacks

Nov 09, 2017
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu



Runtime Guarantees for Regression Problems

Sep 07, 2012
Hui Han Chin, Aleksander Madry, Gary Miller, Richard Peng

