
Mario Fritz

Data Forensics in Diffusion Models: A Systematic Analysis of Membership Privacy

Feb 15, 2023

Systematically Finding Security Vulnerabilities in Black-Box Code Generation Models

Feb 08, 2023

Fed-GLOSS-DP: Federated, Global Learning using Synthetic Sets with Record Level Differential Privacy

Feb 02, 2023

Holistically Explainable Vision Transformers

Jan 20, 2023

Private Set Generation with Discriminative Information

Nov 07, 2022

SimSCOOD: Systematic Analysis of Out-of-Distribution Behavior of Source Code Models

Oct 10, 2022

UnGANable: Defending Against GAN-based Face Manipulation

Oct 03, 2022

Fact-Saboteurs: A Taxonomy of Evidence Manipulation Attacks against Fact-Verification Systems

Sep 07, 2022

RelaxLoss: Defending Membership Inference Attacks without Losing Utility

Jul 12, 2022

B-cos Networks: Alignment is All We Need for Interpretability

May 20, 2022