
Micah Goldblum

Can You Learn an Algorithm? Generalizing from Easy to Hard Problems with Recurrent Networks
Jun 08, 2021

SAINT: Improved Neural Networks for Tabular Data via Row Attention and Contrastive Pre-Training
Jun 02, 2021

The Intrinsic Dimension of Images and Its Impact on Learning
Apr 18, 2021

Thinking Deeply with Recurrence: Generalizing from Easy to Hard Sequential Reasoning Problems
Mar 17, 2021

Preventing Unauthorized Use of Proprietary Data: Poisoning for Secure Dataset Release
Mar 05, 2021

DP-InstaHide: Provably Defusing Poisoning and Backdoor Attacks with Differentially Private Data Augmentations
Mar 02, 2021

What Doesn't Kill You Makes You Robust: Adversarial Training against Poisons and Backdoors
Feb 26, 2021

Technical Challenges for Training Fair Neural Networks
Feb 12, 2021

LowKey: Leveraging Adversarial Attacks to Protect Social Media Users from Facial Recognition
Jan 25, 2021

Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses
Dec 30, 2020