Node-Level Membership Inference Attacks Against Graph Neural Networks

Feb 10, 2021
Xinlei He, Rui Wen, Yixin Wu, Michael Backes, Yun Shen, Yang Zhang

ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models

Feb 04, 2021
Yugeng Liu, Rui Wen, Xinlei He, Ahmed Salem, Zhikun Zhang, Michael Backes, Emiliano De Cristofaro, Mario Fritz, Yang Zhang

BAAAN: Backdoor Attacks Against Autoencoder and GAN-Based Machine Learning Models

Oct 08, 2020
Ahmed Salem, Yannick Sautter, Michael Backes, Mathias Humbert, Yang Zhang

Don't Trigger Me! A Triggerless Backdoor Attack Against Deep Neural Networks

Oct 07, 2020
Ahmed Salem, Michael Backes, Yang Zhang

Privacy Analysis of Deep Learning in the Wild: Membership Inference Attacks against Transfer Learning

Sep 10, 2020
Yang Zou, Zhikun Zhang, Michael Backes, Yang Zhang

Adversarial Examples and Metrics

Jul 15, 2020
Nico Döttling, Kathrin Grosse, Michael Backes, Ian Molloy

* 25 pages, 1 figure, under submission; fixed typos from previous version

A new measure for overfitting and its implications for backdooring of deep learning

Jun 18, 2020
Kathrin Grosse, Taesung Lee, Youngja Park, Michael Backes, Ian Molloy

* 11 pages, 10 figures, under submission (updated contact information)

How many winning tickets are there in one DNN?

Jun 12, 2020
Kathrin Grosse, Michael Backes

* 17 pages, 15 figures, under submission

BadNL: Backdoor Attacks Against NLP Models

Jun 01, 2020
Xiaoyi Chen, Ahmed Salem, Michael Backes, Shiqing Ma, Yang Zhang

When Machine Unlearning Jeopardizes Privacy

May 05, 2020
Min Chen, Zhikun Zhang, Tianhao Wang, Michael Backes, Mathias Humbert, Yang Zhang

Stealing Links from Graph Neural Networks

May 05, 2020
Xinlei He, Jinyuan Jia, Michael Backes, Neil Zhenqiang Gong, Yang Zhang

Dynamic Backdoor Attacks Against Machine Learning Models

Mar 07, 2020
Ahmed Salem, Rui Wen, Michael Backes, Shiqing Ma, Yang Zhang

MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples

Sep 26, 2019
Jinyuan Jia, Ahmed Salem, Michael Backes, Yang Zhang, Neil Zhenqiang Gong

* To appear in CCS'19

Adversarial Vulnerability Bounds for Gaussian Process Classification

Sep 19, 2019
Michael Thomas Smith, Kathrin Grosse, Michael Backes, Mauricio A Alvarez

* 10 pages + 2 pages of references + 7 pages of supplementary material; 12 figures. Submitted to AAAI

Updates-Leak: Data Set Inference and Reconstruction Attacks in Online Learning

Apr 01, 2019
Ahmed Salem, Apratim Bhattacharya, Michael Backes, Mario Fritz, Yang Zhang

Adversarial Initialization -- when your network performs the way I want

Feb 08, 2019
Kathrin Grosse, Thomas A. Trost, Marius Mosbach, Michael Backes, Dietrich Klakow

* 16 pages, 20 figures

The Limitations of Model Uncertainty in Adversarial Settings

Dec 06, 2018
Kathrin Grosse, David Pfaff, Michael T. Smith, Michael Backes

* 14 pages, 9 figures, 2 tables

MLCapsule: Guarded Offline Deployment of Machine Learning as a Service

Aug 01, 2018
Lucjan Hanzlik, Yang Zhang, Kathrin Grosse, Ahmed Salem, Max Augustin, Michael Backes, Mario Fritz

Killing Three Birds with one Gaussian Process: Analyzing Attack Vectors on Classification

Jun 06, 2018
Kathrin Grosse, Michael T. Smith, Michael Backes

* 15 pages, 5 tables, 12 figures

ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models

Jun 04, 2018
Ahmed Salem, Yang Zhang, Mathias Humbert, Mario Fritz, Michael Backes

How Wrong Am I? - Studying Adversarial Examples and their Impact on Uncertainty in Gaussian Process Machine Learning Models

Feb 16, 2018
Kathrin Grosse, David Pfaff, Michael Thomas Smith, Michael Backes

* 8 pages plus 7-page appendix, 8 figures and 13 tables; improved writing and figures

On the (Statistical) Detection of Adversarial Examples

Oct 17, 2017
Kathrin Grosse, Praveen Manoharan, Nicolas Papernot, Michael Backes, Patrick McDaniel

* 13 pages, 4 figures, 5 tables. New version: improved writing, incorporating external feedback

Simulated Penetration Testing and Mitigation Analysis

May 15, 2017
Michael Backes, Jörg Hoffmann, Robert Künnemann, Patrick Speicher, Marcel Steinmetz

Adversarial Perturbations Against Deep Neural Networks for Malware Classification

Jun 16, 2016
Kathrin Grosse, Nicolas Papernot, Praveen Manoharan, Michael Backes, Patrick McDaniel

* Version update: correcting typos, incorporating external feedback