Cengiz Pehlevan

Depthwise Hyperparameter Transfer in Residual Networks: Dynamics and Scaling Limit

Sep 28, 2023
Blake Bordelon, Lorenzo Noci, Mufan Bill Li, Boris Hanin, Cengiz Pehlevan

The cost of hyperparameter tuning in deep learning has been rising with model sizes, prompting practitioners to find new tuning methods that use smaller networks as proxies. One such proposal uses $\mu$P-parameterized networks, where the optimal hyperparameters for small-width networks transfer to networks of arbitrarily large width. However, in this scheme, hyperparameters do not transfer across depths. As a remedy, we study residual networks with a residual branch scale of $1/\sqrt{\text{depth}}$ in combination with the $\mu$P parameterization. We provide experiments demonstrating that residual architectures, including convolutional ResNets and Vision Transformers, trained with this parameterization exhibit transfer of optimal hyperparameters across width and depth on CIFAR-10 and ImageNet. Furthermore, our empirical findings are supported and motivated by theory. Using recent developments in the dynamical mean field theory (DMFT) description of neural network learning dynamics, we show that this parameterization of ResNets admits a well-defined feature-learning joint infinite-width and infinite-depth limit, and we show convergence of finite-size network dynamics towards this limit.
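
A minimal sketch of the depth-scaled residual update studied here, assuming a plain MLP-style residual block in PyTorch; it illustrates the $1/\sqrt{\text{depth}}$ branch scaling only and is not the authors' code, omitting the width-dependent initialization and learning-rate scalings that $\mu$P additionally prescribes.

```python
import math
import torch
import torch.nn as nn

class DepthScaledResidualMLP(nn.Module):
    """Residual stack whose branches are scaled by 1/sqrt(depth) (illustrative)."""

    def __init__(self, width: int, depth: int):
        super().__init__()
        self.depth = depth
        self.blocks = nn.ModuleList([nn.Linear(width, width) for _ in range(depth)])

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        for block in self.blocks:
            # Residual branch suppressed by 1/sqrt(depth), so that the joint
            # infinite-width and infinite-depth limit remains well defined.
            h = h + block(torch.relu(h)) / math.sqrt(self.depth)
        return h
```

Under this scaling, doubling the depth halves the variance contributed by each branch, so the hidden-state statistics stay comparable as depth grows, which is consistent with the hyperparameter transfer across depth reported above.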

Dynamics of Temporal Difference Reinforcement Learning

Jul 10, 2023
Blake Bordelon, Paul Masset, Henry Kuo, Cengiz Pehlevan

Reinforcement learning has been successful across several applications in which agents have to learn to act in environments with sparse feedback. However, despite this empirical success, there is still a lack of theoretical understanding of how the parameters of reinforcement learning models and the features used to represent states interact to control the dynamics of learning. In this work, we use concepts from statistical physics to study the typical-case learning curves for temporal difference learning of a value function with linear function approximators. Our theory is derived under a Gaussian equivalence hypothesis, where averages over the random trajectories are replaced with temporally correlated Gaussian feature averages, and we validate our assumptions on small-scale Markov Decision Processes. We find that the stochastic semi-gradient noise due to subsampling the space of possible episodes leads to significant plateaus in the value error, unlike in traditional gradient descent dynamics. We study how learning dynamics and plateaus depend on feature structure, learning rate, discount factor, and reward function. We then analyze how strategies like learning rate annealing and reward shaping can favorably alter learning dynamics and plateaus. To conclude, our work introduces new tools that open a direction towards developing a theory of learning dynamics in reinforcement learning.
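
A minimal NumPy sketch of the setting analyzed above, semi-gradient TD(0) with linear function approximation; the transition format, feature matrix `phi`, and hyperparameter values are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def td0_linear(phi, transitions, lr=0.05, gamma=0.9, passes=10):
    """Semi-gradient TD(0) for a linear value function V(s) = phi[s] @ w.

    phi: (num_states, d) feature matrix; transitions: iterable of (s, r, s_next).
    """
    w = np.zeros(phi.shape[1])
    for _ in range(passes):
        for s, r, s_next in transitions:
            td_error = r + gamma * (phi[s_next] @ w) - phi[s] @ w
            w += lr * td_error * phi[s]   # stochastic semi-gradient step
    return w
```

The plateaus discussed in the abstract arise from the noise of these semi-gradient updates when only a subsample of possible episodes is seen; feature structure, `lr`, and `gamma` all enter through this update.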

Learning Curves for Heterogeneous Feature-Subsampled Ridge Ensembles

Jul 06, 2023
Benjamin S. Ruben, Cengiz Pehlevan

Feature bagging is a well-established ensembling method which aims to reduce prediction variance by training estimators in an ensemble on random subsamples or projections of features. Typically, ensembles are chosen to be homogeneous, in the sense that the number of feature dimensions available to each estimator is uniform across the ensemble. Here, we introduce heterogeneous feature ensembling, with estimators built on varying numbers of feature dimensions, and consider its performance in a linear regression setting. We study an ensemble of linear predictors, each fit using ridge regression on a subset of the available features. We allow the number of features included in these subsets to vary. Using the replica trick from statistical physics, we derive learning curves for ridge ensembles with deterministic linear masks. We obtain explicit expressions for the learning curves in the case of equicorrelated data with isotropic feature noise. Using the derived expressions, we investigate the effect of subsampling and ensembling, finding sharp transitions in the optimal ensembling strategy in the parameter space of noise level, data correlations, and data-task alignment. Finally, we suggest variable-dimension feature bagging as a strategy to mitigate double descent for robust machine learning in practice.
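
A short NumPy sketch of the heterogeneous ensemble described above: each member is a ridge regressor fit on a random feature subset, with subset sizes allowed to differ across members. The function name, regularization value, and random masking scheme are illustrative assumptions, not the paper's code.

```python
import numpy as np

def hetero_ridge_ensemble(X, y, subset_sizes, lam=1e-2, seed=0):
    """Fit one ridge regressor per entry of subset_sizes and average predictions."""
    rng = np.random.default_rng(seed)
    members = []
    for k in subset_sizes:                           # heterogeneous: k varies per member
        idx = rng.choice(X.shape[1], size=k, replace=False)
        Xk = X[:, idx]
        w = np.linalg.solve(Xk.T @ Xk + lam * np.eye(k), Xk.T @ y)
        members.append((idx, w))

    def predict(X_test):
        return np.mean([X_test[:, idx] @ w for idx, w in members], axis=0)

    return predict
```

Setting all entries of `subset_sizes` equal recovers the homogeneous feature-bagging baseline, so the heterogeneous case can be compared against it directly.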

Correlative Information Maximization: A Biologically Plausible Approach to Supervised Deep Neural Networks without Weight Symmetry

Jun 09, 2023
Bariscan Bozkurt, Cengiz Pehlevan, Alper T Erdogan

The backpropagation algorithm has experienced remarkable success in training large-scale artificial neural networks; however, its biological plausibility is disputed, and it remains an open question whether the brain employs supervised learning mechanisms akin to it. Here, we propose correlative information maximization between layer activations as an alternative normative approach to describe the signal propagation in biological neural networks in both forward and backward directions. This new framework addresses many concerns about the biological plausibility of conventional artificial neural networks and the backpropagation algorithm. The coordinate-descent-based optimization of the corresponding objective, combined with the mean square error loss function for fitting labeled supervision data, gives rise to a neural network structure that emulates a more biologically realistic network of multi-compartment pyramidal neurons with dendritic processing and lateral inhibitory neurons. Furthermore, our approach provides a natural resolution to the weight symmetry problem between forward and backward signal propagation paths, a significant critique against the plausibility of the conventional backpropagation algorithm. This is achieved by leveraging two alternative, yet equivalent, forms of the correlative mutual information objective. These alternatives intrinsically lead to forward and backward prediction networks without weight symmetry issues, providing a compelling solution to this long-standing challenge.

* Preprint, 31 pages 

Long Sequence Hopfield Memory

Jun 07, 2023
Hamza Tahir Chaudhry, Jacob A. Zavatone-Veth, Dmitry Krotov, Cengiz Pehlevan

Sequence memory is an essential attribute of natural and artificial intelligence that enables agents to encode, store, and retrieve complex sequences of stimuli and actions. Computational models of sequence memory have been proposed where recurrent Hopfield-like neural networks are trained with temporally asymmetric Hebbian rules. However, these networks suffer from limited sequence capacity (maximal length of the stored sequence) due to interference between the memories. Inspired by recent work on Dense Associative Memories, we expand the sequence capacity of these models by introducing a nonlinear interaction term, enhancing separation between the patterns. We derive novel scaling laws for sequence capacity with respect to network size, significantly outperforming existing scaling laws for models based on traditional Hopfield networks, and verify these theoretical results with numerical simulations. Moreover, we introduce a generalized pseudoinverse rule to recall sequences of highly correlated patterns. Finally, we extend this model to store sequences with variable timing between state transitions and describe a biologically plausible implementation, with connections to motor neuroscience.

* 14+21 pages, 10+1 figures 
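
A hedged NumPy sketch of a temporally asymmetric sequence-memory update in the spirit of the model above: each stored pattern drives the next through a nonlinear separation function. The polynomial form of the separation function and the synchronous sign update are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def recall_sequence(patterns, x0, power=3, steps=None):
    """patterns: (T, N) array of +/-1 patterns; x0: (N,) cue for the first pattern."""
    T, N = patterns.shape
    F = lambda z: np.maximum(z, 0.0) ** power      # nonlinear separation of overlaps
    x, trajectory = x0.copy(), [x0.copy()]
    for _ in range(steps if steps is not None else T - 1):
        overlaps = F(patterns @ x / N)             # similarity to each stored pattern
        drive = patterns[1:].T @ overlaps[:-1]     # asymmetric: pattern mu pushes toward mu+1
        x = np.sign(drive)
        trajectory.append(x.copy())
    return np.array(trajectory)
```

Raising `power` sharpens the separation between the currently retrieved pattern and its interfering neighbors, which is the mechanism the abstract credits for the improved sequence-capacity scaling.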

Feature-Learning Networks Are Consistent Across Widths At Realistic Scales

May 28, 2023
Nikhil Vyas, Alexander Atanasov, Blake Bordelon, Depen Morwani, Sabarish Sainathan, Cengiz Pehlevan

We study the effect of width on the dynamics of feature-learning neural networks across a variety of architectures and datasets. Early in training, wide neural networks trained on online data not only have identical loss curves but also agree in their pointwise test predictions throughout training. For simple tasks such as CIFAR-5m, this holds throughout training for networks of realistic widths. We also show that structural properties of the models, including internal representations, preactivation distributions, edge-of-stability phenomena, and large-learning-rate effects, are consistent across large widths. This motivates the hypothesis that phenomena seen in realistic models can be captured by infinite-width, feature-learning limits. For harder tasks (such as ImageNet and language modeling), and at later training times, finite-width deviations grow systematically. Two distinct effects cause these deviations across widths. First, the network output has initialization-dependent variance scaling inversely with width, which can be removed by ensembling networks. We observe, however, that ensembles of narrower networks perform worse than a single wide network. We call this the bias of narrower width. We conclude with a spectral perspective on the origin of this finite-width bias.
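
A hedged sketch of the ensembling diagnostic described above: average test-time outputs of several independently initialized copies of a network to remove the initialization-dependent variance, then compare against a single wider network. `make_model` and `train` are hypothetical placeholders for the architecture and training loop under study.

```python
import torch

def ensembled_outputs(make_model, train, test_inputs, seeds=(0, 1, 2, 3)):
    """Average predictions over independently seeded initializations."""
    outputs = []
    for seed in seeds:
        torch.manual_seed(seed)                # only the initialization differs
        model = train(make_model())
        model.eval()
        with torch.no_grad():
            outputs.append(model(test_inputs))
    return torch.stack(outputs).mean(dim=0)
```

Any remaining gap between such an ensemble of narrow networks and a single wide network is what the abstract calls the bias of narrower width, since the initialization variance (which scales inversely with width) has already been averaged away.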

Dynamics of Finite Width Kernel and Prediction Fluctuations in Mean Field Neural Networks

Apr 06, 2023
Blake Bordelon, Cengiz Pehlevan

We analyze the dynamics of finite width effects in wide but finite feature learning neural networks. Unlike many prior analyses, our results, while perturbative in width, are non-perturbative in the strength of feature learning. Starting from a dynamical mean field theory (DMFT) description of infinite width deep neural network kernel and prediction dynamics, we provide a characterization of the $\mathcal{O}(1/\sqrt{\text{width}})$ fluctuations of the DMFT order parameters over random initialization of the network weights. In the lazy limit of network training, all kernels are random but static in time and the prediction variance has a universal form. However, in the rich, feature learning regime, the fluctuations of the kernels and predictions are dynamically coupled with variance that can be computed self-consistently. In two layer networks, we show how feature learning can dynamically reduce the variance of the final NTK and final network predictions. We also show how initialization variance can slow down online learning in wide but finite networks. In deeper networks, kernel variance can dramatically accumulate through subsequent layers at large feature learning strengths, but feature learning continues to improve the SNR of the feature kernels. In discrete time, we demonstrate that large learning rate phenomena such as edge of stability effects can be well captured by infinite width dynamics and that initialization variance can decrease dynamically. For CNNs trained on CIFAR-10, we empirically find significant corrections to both the bias and variance of network dynamics due to finite width.

* 40 Pages 
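
A small illustrative check of the $\mathcal{O}(1/\sqrt{\text{width}})$ fluctuations described above, at initialization only: estimate the entrywise standard deviation of the empirical feature kernel over random seeds and watch it shrink as width grows. The single-hidden-layer setup and scalings here are assumptions for illustration; the paper tracks these fluctuations through training.

```python
import torch

def kernel_std_over_inits(X, width, n_seeds=20):
    """Entrywise std of the hidden-layer feature kernel across random inits."""
    d = X.shape[1]
    kernels = []
    for seed in range(n_seeds):
        torch.manual_seed(seed)
        W = torch.randn(d, width) / d ** 0.5   # first-layer weights, 1/sqrt(fan_in) scale
        phi = torch.relu(X @ W)
        kernels.append(phi @ phi.T / width)    # empirical feature kernel
    return torch.stack(kernels).std(dim=0).mean().item()
```

Plotting this quantity against width on a log-log scale should give a slope close to -1/2, matching the $1/\sqrt{\text{width}}$ scaling of kernel fluctuations over random initializations.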

Learning curves for deep structured Gaussian feature models

Mar 01, 2023
Jacob A. Zavatone-Veth, Cengiz Pehlevan

In recent years, significant attention in deep learning theory has been devoted to analyzing the generalization performance of models with multiple layers of Gaussian random features. However, few works have considered the effect of feature anisotropy; most assume that features are generated using independent and identically distributed Gaussian weights. Here, we derive learning curves for models with many layers of structured Gaussian features. We show that allowing correlations between the rows of the first layer of features can aid generalization, while structure in later layers is generally detrimental. Our results shed light on how weight structure affects generalization in a simple class of solvable models.

* 9+12 pages, 3 figures 
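
A hedged NumPy sketch of the model class above: ridge regression on features produced by several layers of Gaussian random weights, where the first layer's rows are given a nontrivial correlation structure and later layers are i.i.d. The specific covariance parameterization and widths are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def deep_gaussian_features(X, widths, row_cov, rng):
    """Features from one layer with correlated rows followed by i.i.d. Gaussian layers."""
    d = X.shape[1]
    chol = np.linalg.cholesky(row_cov)                  # (widths[0], widths[0]) row structure
    W0 = chol @ rng.standard_normal((widths[0], d)) / np.sqrt(d)
    F = X @ W0.T
    for n_in, n_out in zip(widths[:-1], widths[1:]):    # later layers: unstructured
        V = rng.standard_normal((n_out, n_in)) / np.sqrt(n_in)
        F = F @ V.T
    return F

def ridge_test_error(F_tr, y_tr, F_te, y_te, lam=1e-3):
    """One point on an empirical learning curve."""
    w = np.linalg.solve(F_tr.T @ F_tr + lam * np.eye(F_tr.shape[1]), F_tr.T @ y_tr)
    return np.mean((F_te @ w - y_te) ** 2)
```

Setting `row_cov` to the identity recovers the unstructured i.i.d. baseline against which the effect of first-layer correlations can be measured.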

Neural networks learn to magnify areas near decision boundaries

Jan 26, 2023
Jacob A. Zavatone-Veth, Sheng Yang, Julian A. Rubinfien, Cengiz Pehlevan

We study how training molds the Riemannian geometry induced by neural network feature maps. At infinite width, neural networks with random parameters induce highly symmetric metrics on input space. Feature learning in networks trained to perform classification tasks magnifies local areas along decision boundaries. These changes are consistent with previously proposed geometric approaches for hand-tuning of kernel methods to improve generalization.

* 53 pages, many figures 
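
A hedged sketch of the central geometric object in this paper: the metric that a feature map $\phi$ pulls back onto input space, $g(x) = J(x)^\top J(x)$ with $J$ the Jacobian of $\phi$ at $x$, whose volume element $\sqrt{\det g(x)}$ measures local magnification. The use of `torch.autograd.functional.jacobian` and the small determinant regularizer are implementation choices, not the paper's code.

```python
import torch
from torch.autograd.functional import jacobian

def induced_metric(phi, x):
    """Pullback metric g(x) = J(x)^T J(x) of a feature map phi at input x."""
    J = jacobian(phi, x)          # (feature_dim, input_dim) Jacobian at x
    return J.T @ J

def magnification(phi, x, eps=1e-12):
    """Local volume element sqrt(det g(x)), the quantity the abstract reports as magnified near decision boundaries."""
    g = induced_metric(phi, x)
    return torch.sqrt(torch.det(g) + eps)
```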

The Onset of Variance-Limited Behavior for Networks in the Lazy and Rich Regimes

Dec 23, 2022
Alexander Atanasov, Blake Bordelon, Sabarish Sainathan, Cengiz Pehlevan

For small training set sizes $P$, the generalization error of wide neural networks is well-approximated by the error of an infinite width neural network (NN), either in the kernel or mean-field/feature-learning regime. However, after a critical sample size $P^*$, we empirically find the finite-width network generalization becomes worse than that of the infinite width network. In this work, we empirically study the transition from infinite-width behavior to this variance limited regime as a function of sample size $P$ and network width $N$. We find that finite-size effects can become relevant for very small dataset sizes on the order of $P^* \sim \sqrt{N}$ for polynomial regression with ReLU networks. We discuss the source of these effects using an argument based on the variance of the NN's final neural tangent kernel (NTK). This transition can be pushed to larger $P$ by enhancing feature learning or by ensemble averaging the networks. We find that the learning curve for regression with the final NTK is an accurate approximation of the NN learning curve. Using this, we provide a toy model which also exhibits $P^* \sim \sqrt{N}$ scaling and has $P$-dependent benefits from feature learning.

* 34 pages, 19 figures 