Yasaman Bahri

Wide Neural Networks of Any Depth Evolve as Linear Models Under Gradient Descent

Feb 18, 2019
Jaehoon Lee, Lechao Xiao, Samuel S. Schoenholz, Yasaman Bahri, Jascha Sohl-Dickstein, Jeffrey Pennington
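
The central object of this paper is the first-order Taylor expansion of the network output in its parameters, which the authors show governs wide-network training. A minimal sketch of that linearized model for a one-hidden-layer tanh network (the function names and the NTK-style 1/sqrt(width) output scaling here are illustrative assumptions, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

def init(n_in, n_hid):
    # NTK-style parameterization: O(1) weights, output scaled by 1/sqrt(width)
    return rng.normal(size=(n_hid, n_in)), rng.normal(size=n_hid)

def f(params, x):
    W, v = params
    return v @ np.tanh(W @ x) / np.sqrt(len(v))

def grad_f(params, x):
    # analytic gradient of the scalar output with respect to (W, v)
    W, v = params
    h = np.tanh(W @ x)
    s = np.sqrt(len(v))
    dW = (v * (1.0 - h**2))[:, None] * x[None, :] / s
    return dW, h / s

def f_lin(params0, params, x):
    # first-order Taylor expansion of f around params0; the paper shows that
    # for sufficiently wide networks, gradient descent on f stays close to
    # gradient descent on this linear (in the parameters) model
    dW, dv = grad_f(params0, x)
    (W0, v0), (W, v) = params0, params
    return f(params0, x) + np.sum(dW * (W - W0)) + dv @ (v - v0)
```

At the initial parameters the two models agree exactly, and for small parameter perturbations they differ only at second order, which is the regime the paper argues wide networks remain in throughout training.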

Bayesian Convolutional Neural Networks with Many Channels are Gaussian Processes

Oct 11, 2018
Roman Novak, Lechao Xiao, Jaehoon Lee, Yasaman Bahri, Daniel A. Abolafia, Jeffrey Pennington, Jascha Sohl-Dickstein
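
The paper's claim is that as the number of channels grows, a randomly initialized network's outputs converge to a Gaussian process with an analytically computable covariance. A hedged Monte Carlo sanity check of the first-layer covariance, using a dense layer as a stand-in for the channel dimension of a convolution (all names and the variance convention here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

d = 8
x1 = rng.normal(size=d)
x2 = rng.normal(size=d)
sigma_w = 1.0

def k_analytic(a, b):
    # first-layer GP covariance under i.i.d. N(0, sigma_w^2 / d) weights
    return sigma_w**2 * (a @ b) / d

# Monte Carlo: draw many independent random layers and look at one output
# unit; its covariance across draws should match the analytic kernel
n_draws = 200_000
W = rng.normal(scale=sigma_w / np.sqrt(d), size=(n_draws, d))
z1 = W @ x1
z2 = W @ x2
k_mc = np.mean(z1 * z2)
```

The empirical covariance `k_mc` approaches `k_analytic(x1, x2)` as the number of draws grows; the paper derives the analogous limit over channels for deep convolutional architectures.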

Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks

Jul 10, 2018
Lechao Xiao, Yasaman Bahri, Jascha Sohl-Dickstein, Samuel S. Schoenholz, Jeffrey Pennington
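
The trainability result in this paper rests on initializing each layer as an exact isometry (the paper's "delta-orthogonal" kernel places an orthogonal matrix at the center of an otherwise-zero convolutional kernel). A minimal sketch of the underlying idea with dense layers as a stand-in: random orthogonal layers preserve signal norm exactly, no matter how deep the stack.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64

def orthogonal(n):
    # QR decomposition of a Gaussian matrix yields a random orthogonal matrix;
    # the sign fix makes the distribution uniform (Haar)
    q, r = np.linalg.qr(rng.normal(size=(n, n)))
    return q * np.sign(np.diag(r))

x = rng.normal(size=n)
h = x.copy()
for _ in range(1000):
    h = orthogonal(n) @ h  # every layer is an exact isometry: ||h|| is preserved
```

With i.i.d. Gaussian layers instead, the norm of `h` would fluctuate multiplicatively and typically explode or vanish over a thousand layers; orthogonality is what makes the 10,000-layer depth of the title feasible.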

Sensitivity and Generalization in Neural Networks: an Empirical Study

Jun 18, 2018
Roman Novak, Yasaman Bahri, Daniel A. Abolafia, Jeffrey Pennington, Jascha Sohl-Dickstein
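
One sensitivity measure this paper studies empirically is the norm of the network's input–output Jacobian. A hedged sketch of computing it analytically for a tiny random tanh network (the architecture and names are illustrative, not the paper's experimental setup):

```python
import numpy as np

rng = np.random.default_rng(3)

d_in, d_hid = 5, 32
W1 = rng.normal(scale=1 / np.sqrt(d_in), size=(d_hid, d_in))
W2 = rng.normal(scale=1 / np.sqrt(d_hid), size=(1, d_hid))

def f(x):
    return (W2 @ np.tanh(W1 @ x))[0]

def jacobian(x):
    # chain rule: df/dx = W2 @ diag(tanh'(W1 x)) @ W1
    h = np.tanh(W1 @ x)
    return (W2 * (1.0 - h**2)) @ W1  # shape (1, d_in)

x = rng.normal(size=d_in)
sensitivity = np.linalg.norm(jacobian(x))
```

A larger Jacobian norm means the output changes more sharply under input perturbations; the paper correlates this quantity with generalization across many trained networks.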

Deep Neural Networks as Gaussian Processes

Mar 03, 2018
Jaehoon Lee, Yasaman Bahri, Roman Novak, Samuel S. Schoenholz, Jeffrey Pennington, Jascha Sohl-Dickstein
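
The NNGP correspondence in this paper is built on a layer-to-layer kernel recursion, K^{l+1}(x, x') = sigma_b^2 + sigma_w^2 E[phi(u) phi(v)], with (u, v) Gaussian under K^l. For ReLU the expectation has a closed form (the arc-cosine kernel), so the recursion can be iterated exactly. A minimal sketch (function names and the sigma_w^2 = 2 convention are illustrative assumptions):

```python
import numpy as np

def relu_expectation(k11, k12, k22):
    # closed form for E[relu(u) relu(v)] with (u, v) centered Gaussian,
    # Var(u) = k11, Var(v) = k22, Cov(u, v) = k12  (arc-cosine kernel)
    c = np.clip(k12 / np.sqrt(k11 * k22), -1.0, 1.0)
    theta = np.arccos(c)
    return np.sqrt(k11 * k22) * (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / (2 * np.pi)

def nngp_kernel(x1, x2, depth, sw2=2.0, sb2=0.0):
    # layer-0 kernel: input inner products scaled by input dimension,
    # then one exact recursion step per hidden layer
    d = len(x1)
    k11, k12, k22 = x1 @ x1 / d, x1 @ x2 / d, x2 @ x2 / d
    for _ in range(depth):
        k11, k12, k22 = (sb2 + sw2 * relu_expectation(k11, k11, k11),
                         sb2 + sw2 * relu_expectation(k11, k12, k22),
                         sb2 + sw2 * relu_expectation(k22, k22, k22))
    return k12
```

With sigma_w^2 = 2 (He scaling) the diagonal of the kernel is a fixed point of the recursion, since E[relu(u)^2] = Var(u) / 2; the paper then uses the depth-L kernel for exact Bayesian GP prediction in place of training a finite network.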