
Micah Goldblum

Decepticons: Corrupted Transformers Breach Privacy in Federated Learning for Language Models

Jan 29, 2022

Active Learning at the ImageNet Scale

Nov 25, 2021

Robbing the Fed: Directly Obtaining Private Data in Federated Learning with Modified Models

Oct 25, 2021

Comparing Human and Machine Bias in Face Recognition

Oct 25, 2021

Identification of Attack-Specific Signatures in Adversarial Examples

Oct 13, 2021

Stochastic Training is Not Necessary for Generalization

Sep 29, 2021

Towards Transferable Adversarial Attacks on Vision Transformers

Sep 18, 2021

Datasets for Studying Generalization from Easy to Hard Examples

Aug 13, 2021

Where do Models go Wrong? Parameter-Space Saliency Maps for Explainability

Aug 03, 2021

Adversarial Examples Make Strong Poisons

Jun 21, 2021