
Micah Goldblum

Robbing the Fed: Directly Obtaining Private Data in Federated Learning with Modified Models

Oct 25, 2021

Comparing Human and Machine Bias in Face Recognition

Oct 25, 2021

Identification of Attack-Specific Signatures in Adversarial Examples

Oct 13, 2021

Stochastic Training is Not Necessary for Generalization

Sep 29, 2021

Towards Transferable Adversarial Attacks on Vision Transformers

Sep 18, 2021

Datasets for Studying Generalization from Easy to Hard Examples

Aug 13, 2021

Where do Models go Wrong? Parameter-Space Saliency Maps for Explainability

Aug 03, 2021

Adversarial Examples Make Strong Poisons

Jun 21, 2021

MetaBalance: High-Performance Neural Networks for Class-Imbalanced Data

Jun 17, 2021

Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch

Jun 16, 2021