Samuel S. Schoenholz

Disentangling trainability and generalization in deep learning
Dec 30, 2019

JAX, M.D.: End-to-End Differentiable, Hardware Accelerated, Molecular Dynamics in Pure Python
Dec 09, 2019
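
The title names a public Python library. Below is a minimal sketch of the end-to-end differentiable workflow it advertises, assuming the jax_md package's space and energy modules; the box size, particle count, and potential parameters are illustrative placeholders, not values from the paper.

```python
import jax
import jax.numpy as jnp
from jax import random
from jax_md import space, energy

# Periodic box of side 10; displacement_fn applies the minimum-image
# convention and shift_fn moves particles while respecting the boundary.
displacement_fn, shift_fn = space.periodic(10.0)

# Pairwise Lennard-Jones potential assembled from the displacement function.
energy_fn = energy.lennard_jones_pair(displacement_fn)

key = random.PRNGKey(0)
positions = random.uniform(key, (64, 3), maxval=10.0)

# The energy is a pure JAX function, so forces come from one call to grad.
force_fn = lambda R: -jax.grad(energy_fn)(R)
print(energy_fn(positions), force_fn(positions).shape)  # scalar, (64, 3)
```

Because everything is plain JAX, the same code can be jit-compiled and run on CPU, GPU, or TPU without modification.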

Neural Tangents: Fast and Easy Infinite Neural Networks in Python
Dec 05, 2019
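
Neural Tangents is likewise a public library. A minimal sketch of its core pattern, assuming the stax layer constructors and the nt.predict module; the data shapes here are placeholders.

```python
import jax.numpy as jnp
from jax import random
import neural_tangents as nt
from neural_tangents import stax

# One definition yields a finite-width network (init_fn, apply_fn) and the
# closed-form kernels (kernel_fn) of its infinite-width limit.
init_fn, apply_fn, kernel_fn = stax.serial(
    stax.Dense(512), stax.Relu(),
    stax.Dense(512), stax.Relu(),
    stax.Dense(1),
)

key = random.PRNGKey(0)
x_train = random.normal(key, (20, 10))
y_train = random.normal(key, (20, 1))
x_test = random.normal(key, (5, 10))

# Mean prediction of the infinite ensemble trained to convergence by
# gradient descent on MSE, using the network's NTK.
predict_fn = nt.predict.gradient_descent_mse_ensemble(
    kernel_fn, x_train, y_train)
y_test = predict_fn(x_test=x_test, get='ntk')
print(y_test.shape)  # (5, 1)
```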

A Mean Field Theory of Batch Normalization
Mar 05, 2019

Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent
Feb 18, 2019
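
The paper's claim, that a sufficiently wide network trains like its first-order Taylor expansion in parameter space, is easy to state in code. A minimal sketch of that linearization using jax.jvp; the two-layer f below is a toy stand-in, not the paper's architecture.

```python
import jax
import jax.numpy as jnp

def f(params, x):
    # Toy two-layer network; the paper's result concerns the wide limit of
    # models like this one.
    w1, w2 = params
    return jnp.tanh(x @ w1) @ w2

def linearize(f, params0):
    # f_lin(params, x) = f(params0, x) + J(params0) @ (params - params0),
    # computed with a Jacobian-vector product, never an explicit Jacobian.
    def f_lin(params, x):
        dparams = jax.tree_util.tree_map(lambda p, p0: p - p0, params, params0)
        y0, jvp_out = jax.jvp(lambda p: f(p, x), (params0,), (dparams,))
        return y0 + jvp_out
    return f_lin

key = jax.random.PRNGKey(0)
w1 = jax.random.normal(key, (3, 8)) / jnp.sqrt(3.0)
w2 = jax.random.normal(key, (8, 1)) / jnp.sqrt(8.0)
params0 = (w1, w2)
f_lin = linearize(f, params0)
x = jnp.ones((4, 3))
print(jnp.allclose(f(params0, x), f_lin(params0, x)))  # True: exact at params0
```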

Dynamical Isometry and a Mean Field Theory of LSTMs and GRUs
Jan 25, 2019

Adversarial Spheres
Sep 10, 2018

Peptide-Spectra Matching from Weak Supervision
Aug 22, 2018

Dynamical Isometry and a Mean Field Theory of RNNs: Gating Enables Signal Propagation in Recurrent Neural Networks
Aug 15, 2018

Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks
Jul 10, 2018
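
The mean-field analyses in this line of work track how the variance of pre-activations evolves with depth; tuning the network to the critical point of that map is what makes very deep vanilla networks trainable. A minimal Monte Carlo sketch of the fully connected tanh version of the recursion (the CNN paper derives the convolutional analogue); the sigma_w and sigma_b values are illustrative.

```python
import jax
import jax.numpy as jnp

def variance_map(q, key, sigma_w=1.5, sigma_b=0.05, n_samples=100_000):
    # One depth step of the mean-field recursion
    #   q_{l+1} = sigma_w^2 * E_{z~N(0,1)}[tanh(sqrt(q_l) * z)^2] + sigma_b^2,
    # with the Gaussian expectation estimated by Monte Carlo.
    z = jax.random.normal(key, (n_samples,))
    return sigma_w ** 2 * jnp.mean(jnp.tanh(jnp.sqrt(q) * z) ** 2) + sigma_b ** 2

# Iterating the map drives q_l toward its depth-independent fixed point q*.
q, key = 1.0, jax.random.PRNGKey(0)
for layer in range(10):
    key, subkey = jax.random.split(key)
    q = variance_map(q, subkey)
print(q)  # approximately the fixed point q*
```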