
Lukas Wutschitz

Microsoft

ACON: Optimizing Context Compression for Long-horizon LLM Agents
Oct 01, 2025

Securing AI Agents with Information-Flow Control
May 29, 2025

The Canary's Echo: Auditing Privacy Risks of LLM-Generated Synthetic Text
Feb 19, 2025

Permissive Information-Flow Analysis for Large Language Models
Oct 04, 2024

Closed-Form Bounds for DP-SGD against Record-level Inference
Feb 22, 2024

Rethinking Privacy in Machine Learning Pipelines from an Information Flow Control Perspective
Nov 27, 2023

Analyzing Leakage of Personally Identifiable Information in Language Models
Feb 01, 2023

Bayesian Estimation of Differential Privacy
Jun 15, 2022

Differentially Private Model Compression
Jun 03, 2022

Differentially Private Fine-tuning of Language Models
Oct 13, 2021