Huseyin A. Inan

Differentially Private Training of Mixture of Experts Models
Feb 11, 2024

Privately Aligning Language Models with Reinforcement Learning
Oct 25, 2023

Privacy-Preserving In-Context Learning with Differentially Private Few-Shot Generation
Sep 21, 2023

Planting and Mitigating Memorized Content in Predictive-Text Language Models
Dec 16, 2022

Synthetic Text Generation with Differential Privacy: A Simple and Practical Recipe
Oct 25, 2022

When Does Differentially Private Learning Not Suffer in High Dimensions?
Jul 09, 2022

Privacy Leakage in Text Classification: A Data Extraction Approach
Jun 09, 2022

Differentially Private Fine-tuning of Language Models
Oct 13, 2021

Membership Inference on Word Embedding and Beyond
Jun 21, 2021

Privacy Regularization: Joint Privacy-Utility Optimization in Language Models
Mar 12, 2021