Soham De

A study on the plasticity of neural networks
May 31, 2021
Tudor Berariu, Wojciech Czarnecki, Soham De, Jorg Bornschein, Samuel Smith, Razvan Pascanu, Claudia Clopath

Drawing Multiple Augmentation Samples Per Image During Training Efficiently Decreases Test Error
May 27, 2021
Stanislav Fort, Andrew Brock, Razvan Pascanu, Soham De, Samuel L. Smith

High-Performance Large-Scale Image Recognition Without Normalization
Feb 11, 2021
Andrew Brock, Soham De, Samuel L. Smith, Karen Simonyan

On the Origin of Implicit Regularization in Stochastic Gradient Descent
Jan 28, 2021
Samuel L. Smith, Benoit Dherin, David G. T. Barrett, Soham De

Characterizing signal propagation to close the performance gap in unnormalized ResNets
Jan 27, 2021
Andrew Brock, Soham De, Samuel L. Smith

BYOL works even without batch statistics
Oct 20, 2020
Pierre H. Richemond, Jean-Bastien Grill, Florent Altché, Corentin Tallec, Florian Strub, Andrew Brock, Samuel Smith, Soham De, Razvan Pascanu, Bilal Piot, Michal Valko

On the Generalization Benefit of Noise in Stochastic Gradient Descent
Jun 26, 2020
Samuel L. Smith, Erich Elsen, Soham De

Batch Normalization Biases Deep Residual Networks Towards Shallow Paths
Feb 24, 2020
Soham De, Samuel L. Smith

Adversarial Robustness through Local Linearization
Jul 04, 2019
Chongli Qin, James Martens, Sven Gowal, Dilip Krishnan, Krishnamurthy Dvijotham, Alhussein Fawzi, Soham De, Robert Stanforth, Pushmeet Kohli
