Liam Fowl

Adversarial Examples Make Strong Poisons


Jun 21, 2021
Liam Fowl, Micah Goldblum, Ping-yeh Chiang, Jonas Geiping, Wojtek Czaja, Tom Goldstein


Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch


Jun 16, 2021
Hossein Souri, Micah Goldblum, Liam Fowl, Rama Chellappa, Tom Goldstein


Preventing Unauthorized Use of Proprietary Data: Poisoning for Secure Dataset Release


Mar 05, 2021
Liam Fowl, Ping-yeh Chiang, Micah Goldblum, Jonas Geiping, Arpit Bansal, Wojtek Czaja, Tom Goldstein


DP-InstaHide: Provably Defusing Poisoning and Backdoor Attacks with Differentially Private Data Augmentations


Mar 02, 2021
Eitan Borgnia, Jonas Geiping, Valeriia Cherepanova, Liam Fowl, Arjun Gupta, Amin Ghiasi, Furong Huang, Micah Goldblum, Tom Goldstein

* 11 pages, 5 figures 

What Doesn't Kill You Makes You Robust(er): Adversarial Training against Poisons and Backdoors


Feb 26, 2021
Jonas Geiping, Liam Fowl, Gowthami Somepalli, Micah Goldblum, Michael Moeller, Tom Goldstein

* 17 pages, 14 figures 

Strong Data Augmentation Sanitizes Poisoning and Backdoor Attacks Without an Accuracy Tradeoff


Nov 18, 2020
Eitan Borgnia, Valeriia Cherepanova, Liam Fowl, Amin Ghiasi, Jonas Geiping, Micah Goldblum, Tom Goldstein, Arjun Gupta

* Authors ordered alphabetically 

Random Network Distillation as a Diversity Metric for Both Image and Text Generation


Oct 13, 2020
Liam Fowl, Micah Goldblum, Arjun Gupta, Amr Sharaf, Tom Goldstein


Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching


Sep 04, 2020
Jonas Geiping, Liam Fowl, W. Ronny Huang, Wojciech Czaja, Gavin Taylor, Michael Moeller, Tom Goldstein

* First two authors contributed equally. Last two authors contributed equally. 21 pages, 11 figures 

Headless Horseman: Adversarial Attacks on Transfer Learning Models


Apr 20, 2020
Ahmed Abdelkader, Michael J. Curry, Liam Fowl, Tom Goldstein, Avi Schwarzschild, Manli Shu, Christoph Studer, Chen Zhu

* 5 pages, 2 figures. Accepted at ICASSP 2020. Code available at https://github.com/zhuchen03/headless-attack.git 

MetaPoison: Practical General-purpose Clean-label Data Poisoning


Apr 01, 2020
W. Ronny Huang, Jonas Geiping, Liam Fowl, Gavin Taylor, Tom Goldstein

* First two authors contributed equally 

Unraveling Meta-Learning: Understanding Feature Representations for Few-Shot Tasks


Mar 21, 2020
Micah Goldblum, Steven Reich, Liam Fowl, Renkun Ni, Valeriia Cherepanova, Tom Goldstein


Robust Few-Shot Learning with Adversarially Queried Meta-Learners


Oct 02, 2019
Micah Goldblum, Liam Fowl, Tom Goldstein


Strong Baseline Defenses Against Clean-Label Poisoning Attacks


Sep 29, 2019
Neal Gupta, W. Ronny Huang, Liam Fowl, Chen Zhu, Soheil Feizi, Tom Goldstein, John P. Dickerson

* First two authors contributed equally 

Understanding Generalization through Visualizations


Jul 16, 2019
W. Ronny Huang, Zeyad Emam, Micah Goldblum, Liam Fowl, Justin K. Terry, Furong Huang, Tom Goldstein

* 8 pages (excluding acknowledgments and references), 8 figures 

Adversarially Robust Distillation


May 23, 2019
Micah Goldblum, Liam Fowl, Soheil Feizi, Tom Goldstein

