Label-Only Membership Inference Attacks

Jul 28, 2020
Christopher A. Choquette-Choo, Florian Tramèr, Nicholas Carlini, Nicolas Papernot

* 16 pages, 11 figures, 2 tables 


Measuring Robustness to Natural Distribution Shifts in Image Classification

Jul 01, 2020
Rohan Taori, Achal Dave, Vaishaal Shankar, Nicholas Carlini, Benjamin Recht, Ludwig Schmidt



Evading Deepfake-Image Detectors with White- and Black-Box Attacks

Apr 01, 2020
Nicholas Carlini, Hany Farid



Cryptanalytic Extraction of Neural Network Models

Mar 10, 2020
Nicholas Carlini, Matthew Jagielski, Ilya Mironov



On Adaptive Attacks to Adversarial Example Defenses

Feb 19, 2020
Florian Tramèr, Nicholas Carlini, Wieland Brendel, Aleksander Madry



Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations

Feb 11, 2020
Florian Tramèr, Jens Behrmann, Nicholas Carlini, Nicolas Papernot, Jörn-Henrik Jacobsen

* Supersedes the workshop paper "Exploiting Excessive Invariance caused by Norm-Bounded Adversarial Robustness" (arXiv:1903.10484) 


FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence

Jan 21, 2020
Kihyuk Sohn, David Berthelot, Chun-Liang Li, Zizhao Zhang, Nicholas Carlini, Ekin D. Cubuk, Alex Kurakin, Han Zhang, Colin Raffel



ReMixMatch: Semi-Supervised Learning with Distribution Alignment and Augmentation Anchoring

Nov 21, 2019
David Berthelot, Nicholas Carlini, Ekin D. Cubuk, Alex Kurakin, Kihyuk Sohn, Han Zhang, Colin Raffel



Distribution Density, Tails, and Outliers in Machine Learning: Metrics and Applications

Oct 29, 2019
Nicholas Carlini, Úlfar Erlingsson, Nicolas Papernot



High-Fidelity Extraction of Neural Network Models

Sep 03, 2019
Matthew Jagielski, Nicholas Carlini, David Berthelot, Alex Kurakin, Nicolas Papernot



Stateful Detection of Black-Box Adversarial Attacks

Jul 12, 2019
Steven Chen, Nicholas Carlini, David Wagner



A critique of the DeepSec Platform for Security Analysis of Deep Learning Models

May 17, 2019
Nicholas Carlini



MixMatch: A Holistic Approach to Semi-Supervised Learning

May 06, 2019
David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, Colin Raffel



SysML: The New Frontier of Machine Learning Systems

May 01, 2019
Alexander Ratner, Dan Alistarh, Gustavo Alonso, David G. Andersen, Peter Bailis, Sarah Bird, Nicholas Carlini, Bryan Catanzaro, Jennifer Chayes, Eric Chung, Bill Dally, Jeff Dean, Inderjit S. Dhillon, Alexandros Dimakis, Pradeep Dubey, Charles Elkan, Grigori Fursin, Gregory R. Ganger, Lise Getoor, Phillip B. Gibbons, Garth A. Gibson, Joseph E. Gonzalez, Justin Gottschlich, Song Han, Kim Hazelwood, Furong Huang, Martin Jaggi, Kevin Jamieson, Michael I. Jordan, Gauri Joshi, Rania Khalaf, Jason Knight, Jakub Konečný, Tim Kraska, Arun Kumar, Anastasios Kyrillidis, Aparna Lakshmiratan, Jing Li, Samuel Madden, H. Brendan McMahan, Erik Meijer, Ioannis Mitliagkas, Rajat Monga, Derek Murray, Kunle Olukotun, Dimitris Papailiopoulos, Gennady Pekhimenko, Theodoros Rekatsinas, Afshin Rostamizadeh, Christopher Ré, Christopher De Sa, Hanie Sedghi, Siddhartha Sen, Virginia Smith, Alex Smola, Dawn Song, Evan Sparks, Ion Stoica, Vivienne Sze, Madeleine Udell, Joaquin Vanschoren, Shivaram Venkataraman, Rashmi Vinayak, Markus Weimer, Andrew Gordon Wilson, Eric Xing, Matei Zaharia, Ce Zhang, Ameet Talwalkar



Exploiting Excessive Invariance caused by Norm-Bounded Adversarial Robustness

Mar 25, 2019
Jörn-Henrik Jacobsen, Jens Behrmann, Nicholas Carlini, Florian Tramèr, Nicolas Papernot

* Accepted at the ICLR 2019 SafeML Workshop 


Imperceptible, Robust, and Targeted Adversarial Examples for Automatic Speech Recognition

Mar 22, 2019
Yao Qin, Nicholas Carlini, Ian Goodfellow, Garrison Cottrell, Colin Raffel



On Evaluating Adversarial Robustness

Feb 20, 2019
Nicholas Carlini, Anish Athalye, Nicolas Papernot, Wieland Brendel, Jonas Rauber, Dimitris Tsipras, Ian Goodfellow, Aleksander Madry, Alexey Kurakin

* Living document; source available at https://github.com/evaluating-adversarial-robustness/adv-eval-paper/ 


Is AmI (Attacks Meet Interpretability) Robust to Adversarial Examples?

Feb 06, 2019
Nicholas Carlini



Unrestricted Adversarial Examples

Sep 22, 2018
Tom B. Brown, Nicholas Carlini, Chiyuan Zhang, Catherine Olsson, Paul Christiano, Ian Goodfellow



Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples

Jul 31, 2018
Anish Athalye, Nicholas Carlini, David Wagner

* ICML 2018. Source code at https://github.com/anishathalye/obfuscated-gradients 


Technical Report on the CleverHans v2.1.0 Adversarial Examples Library

Jun 27, 2018
Nicolas Papernot, Fartash Faghri, Nicholas Carlini, Ian Goodfellow, Reuben Feinman, Alexey Kurakin, Cihang Xie, Yash Sharma, Tom Brown, Aurko Roy, Alexander Matyasko, Vahid Behzadan, Karen Hambardzumyan, Zhishuai Zhang, Yi-Lin Juang, Zhi Li, Ryan Sheatsley, Abhibhav Garg, Jonathan Uesato, Willi Gierke, Yinpeng Dong, David Berthelot, Paul Hendricks, Jonas Rauber, Rujun Long, Patrick McDaniel

* Technical report for https://github.com/tensorflow/cleverhans 


On the Robustness of the CVPR 2018 White-Box Adversarial Example Defenses

Apr 10, 2018
Anish Athalye, Nicholas Carlini



Audio Adversarial Examples: Targeted Attacks on Speech-to-Text

Mar 30, 2018
Nicholas Carlini, David Wagner



The Secret Sharer: Measuring Unintended Neural Network Memorization & Extracting Secrets

Feb 22, 2018
Nicholas Carlini, Chang Liu, Jernej Kos, Úlfar Erlingsson, Dawn Song



Provably Minimally-Distorted Adversarial Examples

Feb 20, 2018
Nicholas Carlini, Guy Katz, Clark Barrett, David L. Dill



MagNet and "Efficient Defenses Against Adversarial Attacks" are Not Robust to Adversarial Examples

Nov 22, 2017
Nicholas Carlini, David Wagner



Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods

Nov 01, 2017
Nicholas Carlini, David Wagner



Adversarial Example Defenses: Ensembles of Weak Defenses are not Strong

Jun 15, 2017
Warren He, James Wei, Xinyun Chen, Nicholas Carlini, Dawn Song



Towards Evaluating the Robustness of Neural Networks

Mar 22, 2017
Nicholas Carlini, David Wagner

