Nicholas Carlini

AdaMatch: A Unified Approach to Semi-Supervised Learning and Domain Adaptation

Jun 08, 2021
David Berthelot, Rebecca Roelofs, Kihyuk Sohn, Nicholas Carlini, Alex Kurakin

Handcrafted Backdoors in Deep Neural Networks

Jun 08, 2021
Sanghyun Hong, Nicholas Carlini, Alexey Kurakin

Poisoning the Unlabeled Dataset of Semi-Supervised Learning

May 04, 2021
Nicholas Carlini

Adversary Instantiation: Lower Bounds for Differentially Private Machine Learning

Jan 11, 2021
Milad Nasr, Shuang Song, Abhradeep Thakurta, Nicolas Papernot, Nicholas Carlini

Extracting Training Data from Large Language Models

Dec 14, 2020
Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson, Alina Oprea, Colin Raffel

An Attack on InstaHide: Is Private Learning Possible with Instance Encoding?

Nov 10, 2020
Nicholas Carlini, Samuel Deng, Sanjam Garg, Somesh Jha, Saeed Mahloujifar, Mohammad Mahmoody, Shuang Song, Abhradeep Thakurta, Florian Tramer

Erratum Concerning the Obfuscated Gradients Attack on Stochastic Activation Pruning

Sep 30, 2020
Guneet S. Dhillon, Nicholas Carlini

A Partial Break of the Honeypots Defense to Catch Adversarial Attacks

Sep 23, 2020
Nicholas Carlini

Label-Only Membership Inference Attacks

Jul 28, 2020
Christopher A. Choquette-Choo, Florian Tramer, Nicholas Carlini, Nicolas Papernot

Measuring Robustness to Natural Distribution Shifts in Image Classification

Jul 01, 2020
Rohan Taori, Achal Dave, Vaishaal Shankar, Nicholas Carlini, Benjamin Recht, Ludwig Schmidt
