
Michael Backes

Memorization in Self-Supervised Learning Improves Downstream Generalization

Jan 24, 2024

FAKEPCD: Fake Point Cloud Detection via Source Attribution

Dec 18, 2023

Generated Distributions Are All You Need for Membership Inference Attacks Against Generative Models

Oct 30, 2023

SecurityNet: Assessing Machine Learning Vulnerabilities on Public Models

Oct 19, 2023

Revisiting Transferable Adversarial Image Examples: Attack Categorization, Evaluation Guidelines, and New Insights

Oct 18, 2023

Last One Standing: A Comparative Analysis of Security and Privacy of Soft Prompt Tuning, LoRA, and In-Context Learning

Oct 17, 2023

Provably Robust Cost-Sensitive Learning via Randomized Smoothing

Oct 12, 2023

Prompt Backdoors in Visual Prompt Learning

Oct 11, 2023

Composite Backdoor Attacks Against Large Language Models

Oct 11, 2023

Transferable Availability Poisoning Attacks

Oct 08, 2023