Mihai Nica

Bandit-Driven Batch Selection for Robust Learning under Label Noise

Oct 31, 2023
Michal Lisicki, Mihai Nica, Graham W. Taylor

Differential Equation Scaling Limits of Shaped and Unshaped Neural Networks

Oct 18, 2023
Mufan Bill Li, Mihai Nica

Diffusion on the Probability Simplex

Sep 12, 2023
Griffin Floto, Thorsteinn Jonsson, Mihai Nica, Scott Sanner, Eric Zhengyu Zhu

Network Degeneracy as an Indicator of Training Performance: Comparing Finite and Infinite Width Angle Predictions

Jun 02, 2023
Cameron Jakub, Mihai Nica

Dynamic Sparse Training with Structured Sparsity

May 03, 2023
Mike Lasby, Anna Golubeva, Utku Evci, Mihai Nica, Yani Ioannou

Depth Degeneracy in Neural Networks: Vanishing Angles in Fully Connected ReLU Networks on Initialization

Feb 20, 2023
Cameron Jakub, Mihai Nica

Bounding generalization error with input compression: An empirical study with infinite-width networks

Jul 19, 2022
Angus Galloway, Anna Golubeva, Mahmoud Salem, Mihai Nica, Yani Ioannou, Graham W. Taylor

The Neural Covariance SDE: Shaped Infinite Depth-and-Width Networks at Initialization

Jun 06, 2022
Mufan Bill Li, Mihai Nica, Daniel M. Roy

Exponentially Tilted Gaussian Prior for Variational Autoencoder

Nov 30, 2021
Griffin Floto, Stefan Kremer, Mihai Nica
