
Nicholas Carlini

Part-Based Models Improve Adversarial Robustness

Sep 15, 2022

Measuring Forgetting of Memorized Training Examples

Jun 30, 2022

Increasing Confidence in Adversarial Robustness Evaluations

Jun 28, 2022

The Privacy Onion Effect: Memorization is Relative

Jun 22, 2022

(Certified!!) Adversarial Robustness for Free!

Jun 21, 2022

Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets

Mar 31, 2022

Debugging Differential Privacy: A Case Study for Privacy Auditing

Mar 28, 2022

Quantifying Memorization Across Neural Language Models

Feb 24, 2022

Counterfactual Memorization in Neural Language Models

Dec 24, 2021

Membership Inference Attacks From First Principles

Dec 07, 2021