Nicholas Carlini

Quantifying Memorization Across Neural Language Models

Feb 24, 2022
Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramèr, Chiyuan Zhang

(4 figures)

Debugging Differential Privacy: A Case Study for Privacy Auditing

Feb 24, 2022
Florian Tramèr, Andreas Terzis, Thomas Steinke, Shuang Song, Matthew Jagielski, Nicholas Carlini

(1 figure)

Counterfactual Memorization in Neural Language Models

Dec 24, 2021
Chiyuan Zhang, Daphne Ippolito, Katherine Lee, Matthew Jagielski, Florian Tramèr, Nicholas Carlini

(4 figures)

Membership Inference Attacks From First Principles

Dec 07, 2021
Nicholas Carlini, Steve Chien, Milad Nasr, Shuang Song, Andreas Terzis, Florian Tramèr

(4 figures)

Unsolved Problems in ML Safety

Sep 28, 2021
Dan Hendrycks, Nicholas Carlini, John Schulman, Jacob Steinhardt

(4 figures)

Deduplicating Training Data Makes Language Models Better

Jul 14, 2021
Katherine Lee, Daphne Ippolito, Andrew Nystrom, Chiyuan Zhang, Douglas Eck, Chris Callison-Burch, Nicholas Carlini

(4 figures)

Evading Adversarial Example Detection Defenses with Orthogonal Projected Gradient Descent

Jun 28, 2021
Oliver Bryniarski, Nabeel Hingun, Pedro Pachuca, Vincent Wang, Nicholas Carlini

(4 figures)

Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples

Jun 18, 2021
Maura Pintor, Luca Demetrio, Angelo Sotgiu, Giovanni Manca, Ambra Demontis, Nicholas Carlini, Battista Biggio, Fabio Roli

(4 figures)

Poisoning and Backdooring Contrastive Learning

Jun 17, 2021
Nicholas Carlini, Andreas Terzis

(4 figures)