Nicholas Carlini

Preventing Verbatim Memorization in Language Models Gives a False Sense of Privacy

Oct 31, 2022
Daphne Ippolito, Florian Tramèr, Milad Nasr, Chiyuan Zhang, Matthew Jagielski, Katherine Lee, Christopher A. Choquette-Choo, Nicholas Carlini

Preprocessors Matter! Realistic Decision-Based Attacks on Machine Learning Systems

Oct 07, 2022
Chawin Sitawarin, Florian Tramèr, Nicholas Carlini

No Free Lunch in "Privacy for Free: How does Dataset Condensation Help Privacy"

Sep 29, 2022
Nicholas Carlini, Vitaly Feldman, Milad Nasr

Part-Based Models Improve Adversarial Robustness

Sep 15, 2022
Chawin Sitawarin, Kornrapat Pongmala, Yizheng Chen, Nicholas Carlini, David Wagner

Measuring Forgetting of Memorized Training Examples

Jun 30, 2022
Matthew Jagielski, Om Thakkar, Florian Tramèr, Daphne Ippolito, Katherine Lee, Nicholas Carlini, Eric Wallace, Shuang Song, Abhradeep Thakurta, Nicolas Papernot, Chiyuan Zhang

Increasing Confidence in Adversarial Robustness Evaluations

Jun 28, 2022
Roland S. Zimmermann, Wieland Brendel, Florian Tramèr, Nicholas Carlini

The Privacy Onion Effect: Memorization is Relative

Jun 22, 2022
Nicholas Carlini, Matthew Jagielski, Chiyuan Zhang, Nicolas Papernot, Andreas Terzis, Florian Tramèr

(Certified!!) Adversarial Robustness for Free!

Jun 21, 2022
Nicholas Carlini, Florian Tramèr, Krishnamurthy Dvijotham, J. Zico Kolter

Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets

Mar 31, 2022
Florian Tramèr, Reza Shokri, Ayrton San Joaquin, Hoang Le, Matthew Jagielski, Sanghyun Hong, Nicholas Carlini

Debugging Differential Privacy: A Case Study for Privacy Auditing

Mar 28, 2022
Florian Tramèr, Andreas Terzis, Thomas Steinke, Shuang Song, Matthew Jagielski, Nicholas Carlini
