Lukas Struppek

Finding Dori: Memorization in Text-to-Image Diffusion Models Is Less Local Than Assumed
Jul 22, 2025

Navigating Shortcuts, Spurious Correlations, and Confounders: From Origins via Detection to Mitigation
Dec 06, 2024

CollaFuse: Collaborative Diffusion Models
Jun 20, 2024

Finding NeMo: Localizing Neurons Responsible For Memorization in Diffusion Models
Jun 04, 2024

CollaFuse: Navigating Limited Resources and Privacy in Collaborative Generative AI
Feb 29, 2024

Exploring the Adversarial Capabilities of Large Language Models
Feb 15, 2024

Defending Our Privacy With Backdoors
Oct 12, 2023

Be Careful What You Smooth For: Label Smoothing Can Be a Privacy Shield but Also a Catalyst for Model Inversion Attacks
Oct 10, 2023

Leveraging Diffusion-Based Image Variations for Robust Training on Poisoned Data
Oct 10, 2023

Balancing Transparency and Risk: The Security and Privacy Risks of Open-Source Machine Learning Models
Aug 18, 2023