Theodor Misiakiewicz

A non-asymptotic theory of Kernel Ridge Regression: deterministic equivalents, test error, and GCV estimator
Mar 13, 2024
Theodor Misiakiewicz, Basil Saeed

Asymptotics of Random Feature Regression Beyond the Linear Scaling Regime
Mar 13, 2024
Hong Hu, Yue M. Lu, Theodor Misiakiewicz

Six Lectures on Linearized Neural Networks
Aug 25, 2023
Theodor Misiakiewicz, Andrea Montanari

SGD learning on neural networks: leap complexity and saddle-to-saddle dynamics
Feb 21, 2023
Emmanuel Abbe, Enric Boix-Adsera, Theodor Misiakiewicz

Spectrum of inner-product kernel matrices in the polynomial regime and multiple descent phenomenon in kernel ridge regression
Apr 21, 2022
Theodor Misiakiewicz

The merged-staircase property: a necessary and nearly sufficient condition for SGD learning of sparse functions on two-layer neural networks
Feb 17, 2022
Emmanuel Abbe, Enric Boix-Adsera, Theodor Misiakiewicz

Learning with convolution and pooling operations in kernel methods
Nov 16, 2021
Theodor Misiakiewicz, Song Mei

Minimum complexity interpolation in random features models
Mar 30, 2021
Michael Celentano, Theodor Misiakiewicz, Andrea Montanari

Learning with invariances in random features and kernel models
Feb 25, 2021
Song Mei, Theodor Misiakiewicz, Andrea Montanari

Generalization error of random features and kernel methods: hypercontractivity and kernel matrix concentration
Jan 26, 2021
Song Mei, Theodor Misiakiewicz, Andrea Montanari