
Behnam Neyshabur


The Evolution of Out-of-Distribution Robustness Throughout Fine-Tuning

Jun 30, 2021

Deep Learning Through the Lens of Example Difficulty

Jun 18, 2021

NeurIPS 2020 Competition: Predicting Generalization in Deep Learning

Dec 14, 2020

When Do Curricula Work?

Dec 05, 2020

Understanding the Failure Modes of Out-of-Distribution Generalization

Oct 29, 2020

Are wider nets better given the same number of parameters?

Oct 27, 2020

The Deep Bootstrap: Good Online Learners are Good Offline Generalizers

Oct 16, 2020

Sharpness-Aware Minimization for Efficiently Improving Generalization

Oct 03, 2020

Extreme Memorization via Scale of Initialization

Aug 31, 2020

What is being transferred in transfer learning?

Aug 26, 2020