
Lukas Wutschitz

Microsoft

Closed-Form Bounds for DP-SGD against Record-level Inference

Feb 22, 2024

Rethinking Privacy in Machine Learning Pipelines from an Information Flow Control Perspective

Nov 27, 2023

Analyzing Leakage of Personally Identifiable Information in Language Models

Feb 01, 2023

Bayesian Estimation of Differential Privacy

Jun 15, 2022

Differentially Private Model Compression

Jun 03, 2022

Differentially Private Fine-tuning of Language Models

Oct 13, 2021

Numerical Composition of Differential Privacy

Jun 29, 2021

Privacy Analysis in Language Models via Training Data Leakage Report

Jan 14, 2021