Nicholas Carlini

Are aligned neural networks adversarially aligned?

Jun 26, 2023
Nicholas Carlini, Milad Nasr, Christopher A. Choquette-Choo, Matthew Jagielski, Irena Gao, Anas Awadalla, Pang Wei Koh, Daphne Ippolito, Katherine Lee, Florian Tramèr, Ludwig Schmidt

Evading Black-box Classifiers Without Breaking Eggs

Jun 05, 2023
Edoardo Debenedetti, Nicholas Carlini, Florian Tramèr

Students Parrot Their Teachers: Membership Inference on Model Distillation

Mar 06, 2023
Matthew Jagielski, Milad Nasr, Christopher Choquette-Choo, Katherine Lee, Nicholas Carlini

Randomness in ML Defenses Helps Persistent Attackers and Hinders Evaluators

Feb 27, 2023
Keane Lucas, Matthew Jagielski, Florian Tramèr, Lujo Bauer, Nicholas Carlini

Poisoning Web-Scale Training Datasets is Practical

Feb 20, 2023
Nicholas Carlini, Matthew Jagielski, Christopher A. Choquette-Choo, Daniel Paleka, Will Pearce, Hyrum Anderson, Andreas Terzis, Kurt Thomas, Florian Tramèr

Tight Auditing of Differentially Private Machine Learning

Feb 15, 2023
Milad Nasr, Jamie Hayes, Thomas Steinke, Borja Balle, Florian Tramèr, Matthew Jagielski, Nicholas Carlini, Andreas Terzis

Effective Robustness against Natural Distribution Shifts for Models with Different Training Data

Feb 02, 2023
Zhouxing Shi, Nicholas Carlini, Ananth Balashankar, Ludwig Schmidt, Cho-Jui Hsieh, Alex Beutel, Yao Qin

Extracting Training Data from Diffusion Models

Jan 30, 2023
Nicholas Carlini, Jamie Hayes, Milad Nasr, Matthew Jagielski, Vikash Sehwag, Florian Tramèr, Borja Balle, Daphne Ippolito, Eric Wallace

Publishing Efficient On-device Models Increases Adversarial Vulnerability

Dec 28, 2022
Sanghyun Hong, Nicholas Carlini, Alexey Kurakin

Considerations for Differentially Private Learning with Large-Scale Public Pretraining

Dec 13, 2022
Florian Tramèr, Gautam Kamath, Nicholas Carlini
