Tom Goldstein

Exploring Model Robustness with Adaptive Networks and Improved Adversarial Training

May 30, 2020
Zheng Xu, Ali Shafahi, Tom Goldstein


Headless Horseman: Adversarial Attacks on Transfer Learning Models

Apr 20, 2020
Ahmed Abdelkader, Michael J. Curry, Liam Fowl, Tom Goldstein, Avi Schwarzschild, Manli Shu, Christoph Studer, Chen Zhu


MetaPoison: Practical General-purpose Clean-label Data Poisoning

Apr 01, 2020
W. Ronny Huang, Jonas Geiping, Liam Fowl, Gavin Taylor, Tom Goldstein


Unraveling Meta-Learning: Understanding Feature Representations for Few-Shot Tasks

Mar 21, 2020
Micah Goldblum, Steven Reich, Liam Fowl, Renkun Ni, Valeriia Cherepanova, Tom Goldstein


Breaking certified defenses: Semantic adversarial examples with spoofed robustness certificates

Mar 19, 2020
Amin Ghiasi, Ali Shafahi, Tom Goldstein


Certified Defenses for Adversarial Patches

Mar 14, 2020
Ping-Yeh Chiang, Renkun Ni, Ahmed Abdelkader, Chen Zhu, Christoph Studer, Tom Goldstein


Adversarial Attacks on Machine Learning Systems for High-Frequency Trading

Mar 04, 2020
Micah Goldblum, Avi Schwarzschild, Ankit B. Patel, Tom Goldstein


Improving the Tightness of Convex Relaxation Bounds for Training Certifiably Robust Classifiers

Feb 22, 2020
Chen Zhu, Renkun Ni, Ping-yeh Chiang, Hengduo Li, Furong Huang, Tom Goldstein
