Lukas Wutschitz

Microsoft

Closed-Form Bounds for DP-SGD against Record-level Inference

Feb 22, 2024
Giovanni Cherubin, Boris Köpf, Andrew Paverd, Shruti Tople, Lukas Wutschitz, Santiago Zanella-Béguelin

Rethinking Privacy in Machine Learning Pipelines from an Information Flow Control Perspective

Nov 27, 2023
Lukas Wutschitz, Boris Köpf, Andrew Paverd, Saravan Rajmohan, Ahmed Salem, Shruti Tople, Santiago Zanella-Béguelin, Menglin Xia, Victor Rühle

Analyzing Leakage of Personally Identifiable Information in Language Models

Feb 01, 2023
Nils Lukas, Ahmed Salem, Robert Sim, Shruti Tople, Lukas Wutschitz, Santiago Zanella-Béguelin

Bayesian Estimation of Differential Privacy

Jun 15, 2022
Santiago Zanella-Béguelin, Lukas Wutschitz, Shruti Tople, Ahmed Salem, Victor Rühle, Andrew Paverd, Mohammad Naseri, Boris Köpf, Daniel Jones

Differentially Private Model Compression

Jun 03, 2022
Fatemehsadat Mireshghallah, Arturs Backurs, Huseyin A Inan, Lukas Wutschitz, Janardhan Kulkarni

Differentially Private Fine-tuning of Language Models

Oct 13, 2021
Da Yu, Saurabh Naik, Arturs Backurs, Sivakanth Gopi, Huseyin A. Inan, Gautam Kamath, Janardhan Kulkarni, Yin Tat Lee, Andre Manoel, Lukas Wutschitz, Sergey Yekhanin, Huishuai Zhang

Numerical Composition of Differential Privacy

Jun 29, 2021
Sivakanth Gopi, Yin Tat Lee, Lukas Wutschitz

Privacy Analysis in Language Models via Training Data Leakage Report

Jan 14, 2021
Huseyin A. Inan, Osman Ramadan, Lukas Wutschitz, Daniel Jones, Victor Rühle, James Withers, Robert Sim
