Mathias Humbert

Fine-Tuning Is All You Need to Mitigate Backdoor Attacks

Dec 18, 2022
Zeyang Sha, Xinlei He, Pascal Berrang, Mathias Humbert, Yang Zhang

Data Poisoning Attacks Against Multimodal Encoders

Sep 30, 2022
Ziqing Yang, Xinlei He, Zheng Li, Michael Backes, Mathias Humbert, Pascal Berrang, Yang Zhang

Graph Unlearning

Mar 27, 2021
Min Chen, Zhikun Zhang, Tianhao Wang, Michael Backes, Mathias Humbert, Yang Zhang

BAAAN: Backdoor Attacks Against Autoencoder and GAN-Based Machine Learning Models

Oct 08, 2020
Ahmed Salem, Yannick Sautter, Michael Backes, Mathias Humbert, Yang Zhang

When Machine Unlearning Jeopardizes Privacy

May 05, 2020
Min Chen, Zhikun Zhang, Tianhao Wang, Michael Backes, Mathias Humbert, Yang Zhang

ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models

Jun 04, 2018
Ahmed Salem, Yang Zhang, Mathias Humbert, Mario Fritz, Michael Backes
