
Tom Goldstein

Comparing Human and Machine Bias in Face Recognition

Oct 25, 2021

Stochastic Training is Not Necessary for Generalization

Sep 29, 2021

Towards Transferable Adversarial Attacks on Vision Transformers

Sep 18, 2021

Robustness Disparities in Commercial Face Detection

Aug 27, 2021

Datasets for Studying Generalization from Easy to Hard Examples

Aug 13, 2021

Where do Models go Wrong? Parameter-Space Saliency Maps for Explainability

Aug 03, 2021

Long-Short Transformer: Efficient Transformers for Language and Vision

Jul 27, 2021

Adversarial Examples Make Strong Poisons

Jun 21, 2021

MetaBalance: High-Performance Neural Networks for Class-Imbalanced Data

Jun 17, 2021

Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch

Jun 16, 2021