
Lenaic Chizat


EPFL

On the symmetries in the dynamics of wide two-layer neural networks

Dec 06, 2022
Karl Hajjar, Lenaic Chizat

Faster Wasserstein Distance Estimation with the Sinkhorn Divergence

Jun 15, 2020
Lenaic Chizat, Pierre Roussillon, Flavien Léger, François-Xavier Vialard, Gabriel Peyré

Implicit Bias of Gradient Descent for Wide Two-layer Neural Networks Trained with the Logistic Loss

Mar 04, 2020
Lenaic Chizat, Francis Bach

Sparse Optimization on Measures with Over-parameterized Gradient Descent

Jul 24, 2019
Lenaic Chizat

A Note on Lazy Training in Supervised Differentiable Programming

Dec 19, 2018
Lenaic Chizat, Francis Bach

On the Global Convergence of Gradient Descent for Over-parameterized Models using Optimal Transport

Oct 29, 2018
Lenaic Chizat, Francis Bach
