Tom Goldstein

Certified Defenses for Adversarial Patches
Mar 14, 2020
Ping-Yeh Chiang, Renkun Ni, Ahmed Abdelkader, Chen Zhu, Christoph Studer, Tom Goldstein

Adversarial Attacks on Machine Learning Systems for High-Frequency Trading
Mar 04, 2020
Micah Goldblum, Avi Schwarzschild, Ankit B. Patel, Tom Goldstein

Improving the Tightness of Convex Relaxation Bounds for Training Certifiably Robust Classifiers
Feb 22, 2020
Chen Zhu, Renkun Ni, Ping-yeh Chiang, Hengduo Li, Furong Huang, Tom Goldstein

Unraveling Meta-Learning: Understanding Feature Representations for Few-Shot Tasks
Feb 17, 2020
Micah Goldblum, Steven Reich, Liam Fowl, Renkun Ni, Valeriia Cherepanova, Tom Goldstein

Curse of Dimensionality on Randomized Smoothing for Certifiable Robustness
Feb 08, 2020
Aounon Kumar, Alexander Levine, Tom Goldstein, Soheil Feizi

MSE-Optimal Neural Network Initialization via Layer Fusion
Jan 28, 2020
Ramina Ghods, Andrew S. Lan, Tom Goldstein, Christoph Studer

WITCHcraft: Efficient PGD attacks with random step size
Nov 18, 2019
Ping-Yeh Chiang, Jonas Geiping, Micah Goldblum, Tom Goldstein, Renkun Ni, Steven Reich, Ali Shafahi

Certified Data Removal from Machine Learning Models
Nov 11, 2019
Chuan Guo, Tom Goldstein, Awni Hannun, Laurens van der Maaten

Making an Invisibility Cloak: Real World Adversarial Attacks on Object Detectors
Oct 31, 2019
Zuxuan Wu, Ser-Nam Lim, Larry Davis, Tom Goldstein
