Liam Fowl

Generating Potent Poisons and Backdoors from Scratch with Guided Diffusion

Mar 25, 2024
Hossein Souri, Arpit Bansal, Hamid Kazemi, Liam Fowl, Aniruddha Saha, Jonas Geiping, Andrew Gordon Wilson, Rama Chellappa, Tom Goldstein, Micah Goldblum

Exploring Sequence-to-Sequence Transformer-Transducer Models for Keyword Spotting

Nov 11, 2022
Beltrán Labrador, Guanlong Zhao, Ignacio López Moreno, Angelo Scorza Scarpati, Liam Fowl, Quan Wang

Thinking Two Moves Ahead: Anticipating Other Users Improves Backdoor Attacks in Federated Learning

Oct 17, 2022
Yuxin Wen, Jonas Geiping, Liam Fowl, Hossein Souri, Rama Chellappa, Micah Goldblum, Tom Goldstein

Poisons that are learned faster are more effective

Apr 19, 2022
Pedro Sandoval-Segura, Vasu Singla, Liam Fowl, Jonas Geiping, Micah Goldblum, David Jacobs, Tom Goldstein

Can Neural Nets Learn the Same Model Twice? Investigating Reproducibility and Double Descent from the Decision Boundary Perspective

Mar 15, 2022
Gowthami Somepalli, Liam Fowl, Arpit Bansal, Ping Yeh-Chiang, Yehuda Dar, Richard Baraniuk, Micah Goldblum, Tom Goldstein

Fishing for User Data in Large-Batch Federated Learning via Gradient Magnification

Feb 01, 2022
Yuxin Wen, Jonas Geiping, Liam Fowl, Micah Goldblum, Tom Goldstein

Decepticons: Corrupted Transformers Breach Privacy in Federated Learning for Language Models

Jan 29, 2022
Liam Fowl, Jonas Geiping, Steven Reich, Yuxin Wen, Wojtek Czaja, Micah Goldblum, Tom Goldstein

Execute Order 66: Targeted Data Poisoning for Reinforcement Learning

Jan 03, 2022
Harrison Foley, Liam Fowl, Tom Goldstein, Gavin Taylor

Robbing the Fed: Directly Obtaining Private Data in Federated Learning with Modified Models

Oct 25, 2021
Liam Fowl, Jonas Geiping, Wojtek Czaja, Micah Goldblum, Tom Goldstein

Adversarial Examples Make Strong Poisons

Jun 21, 2021
Liam Fowl, Micah Goldblum, Ping-yeh Chiang, Jonas Geiping, Wojtek Czaja, Tom Goldstein
