Tom Goldstein

Towards Transferable Adversarial Attacks on Vision Transformers

Sep 18, 2021
Zhipeng Wei, Jingjing Chen, Micah Goldblum, Zuxuan Wu, Tom Goldstein, Yu-Gang Jiang

Robustness Disparities in Commercial Face Detection

Aug 27, 2021
Samuel Dooley, Tom Goldstein, John P. Dickerson

Datasets for Studying Generalization from Easy to Hard Examples

Aug 13, 2021
Avi Schwarzschild, Eitan Borgnia, Arjun Gupta, Arpit Bansal, Zeyad Emam, Furong Huang, Micah Goldblum, Tom Goldstein

Where do Models go Wrong? Parameter-Space Saliency Maps for Explainability

Aug 03, 2021
Roman Levin, Manli Shu, Eitan Borgnia, Furong Huang, Micah Goldblum, Tom Goldstein

Long-Short Transformer: Efficient Transformers for Language and Vision

Jul 27, 2021
Chen Zhu, Wei Ping, Chaowei Xiao, Mohammad Shoeybi, Tom Goldstein, Anima Anandkumar, Bryan Catanzaro

Adversarial Examples Make Strong Poisons

Jun 21, 2021
Liam Fowl, Micah Goldblum, Ping-yeh Chiang, Jonas Geiping, Wojtek Czaja, Tom Goldstein

MetaBalance: High-Performance Neural Networks for Class-Imbalanced Data

Jun 17, 2021
Arpit Bansal, Micah Goldblum, Valeriia Cherepanova, Avi Schwarzschild, C. Bayan Bruss, Tom Goldstein

Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch

Jun 16, 2021
Hossein Souri, Micah Goldblum, Liam Fowl, Rama Chellappa, Tom Goldstein
