Nicholas Carlini

Evading Deepfake-Image Detectors with White- and Black-Box Attacks
Apr 01, 2020
Nicholas Carlini, Hany Farid

Cryptanalytic Extraction of Neural Network Models
Mar 10, 2020
Nicholas Carlini, Matthew Jagielski, Ilya Mironov

On Adaptive Attacks to Adversarial Example Defenses
Feb 19, 2020
Florian Tramèr, Nicholas Carlini, Wieland Brendel, Aleksander Madry

Fundamental Tradeoffs between Invariance and Sensitivity to Adversarial Perturbations
Feb 11, 2020
Florian Tramèr, Jens Behrmann, Nicholas Carlini, Nicolas Papernot, Jörn-Henrik Jacobsen

FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
Jan 21, 2020
Kihyuk Sohn, David Berthelot, Chun-Liang Li, Zizhao Zhang, Nicholas Carlini, Ekin D. Cubuk, Alex Kurakin, Han Zhang, Colin Raffel

ReMixMatch: Semi-Supervised Learning with Distribution Alignment and Augmentation Anchoring
Nov 21, 2019
David Berthelot, Nicholas Carlini, Ekin D. Cubuk, Alex Kurakin, Kihyuk Sohn, Han Zhang, Colin Raffel

Distribution Density, Tails, and Outliers in Machine Learning: Metrics and Applications
Oct 29, 2019
Nicholas Carlini, Úlfar Erlingsson, Nicolas Papernot

High-Fidelity Extraction of Neural Network Models
Sep 03, 2019
Matthew Jagielski, Nicholas Carlini, David Berthelot, Alex Kurakin, Nicolas Papernot

Stateful Detection of Black-Box Adversarial Attacks
Jul 12, 2019
Steven Chen, Nicholas Carlini, David Wagner

A critique of the DeepSec Platform for Security Analysis of Deep Learning Models
May 17, 2019
Nicholas Carlini
