Chiyuan Zhang

Sparsity-Preserving Differentially Private Training of Large Embedding Models

Nov 14, 2023

User-Level Differential Privacy With Few Examples Per User

Sep 21, 2023

Can Neural Network Memorization Be Localized?

Jul 18, 2023

Ticketed Learning-Unlearning Schemes

Jun 27, 2023

On User-Level Private Convex Optimization

May 08, 2023

Regression with Label Differential Privacy

Dec 12, 2022

Private Ad Modeling with DP-SGD

Nov 21, 2022

Preventing Verbatim Memorization in Language Models Gives a False Sense of Privacy

Oct 31, 2022

Measuring Forgetting of Memorized Training Examples

Jun 30, 2022

The Privacy Onion Effect: Memorization is Relative

Jun 22, 2022