Michael Backes

Can't Steal? Cont-Steal! Contrastive Stealing Attacks Against Image Encoders

Jan 19, 2022
Zeyang Sha, Xinlei He, Ning Yu, Michael Backes, Yang Zhang

Get a Model! Model Hijacking Attack Against Machine Learning Models

Nov 08, 2021
Ahmed Salem, Michael Backes, Yang Zhang

Inference Attacks Against Graph Neural Networks

Oct 06, 2021
Zhikun Zhang, Min Chen, Michael Backes, Yun Shen, Yang Zhang

Mental Models of Adversarial Machine Learning

May 08, 2021
Lukas Bieringer, Kathrin Grosse, Michael Backes, Katharina Krombholz

Graph Unlearning

Mar 27, 2021
Min Chen, Zhikun Zhang, Tianhao Wang, Michael Backes, Mathias Humbert, Yang Zhang

Node-Level Membership Inference Attacks Against Graph Neural Networks

Feb 10, 2021
Xinlei He, Rui Wen, Yixin Wu, Michael Backes, Yun Shen, Yang Zhang

ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models

Feb 04, 2021
Yugeng Liu, Rui Wen, Xinlei He, Ahmed Salem, Zhikun Zhang, Michael Backes, Emiliano De Cristofaro, Mario Fritz, Yang Zhang

BAAAN: Backdoor Attacks Against Autoencoder and GAN-Based Machine Learning Models

Oct 08, 2020
Ahmed Salem, Yannick Sautter, Michael Backes, Mathias Humbert, Yang Zhang

Don't Trigger Me! A Triggerless Backdoor Attack Against Deep Neural Networks

Oct 07, 2020
Ahmed Salem, Michael Backes, Yang Zhang

Privacy Analysis of Deep Learning in the Wild: Membership Inference Attacks against Transfer Learning

Sep 10, 2020
Yang Zou, Zhikun Zhang, Michael Backes, Yang Zhang
