Damek Davis


Aiming towards the minimizers: fast convergence of SGD for overparametrized problems

Jun 05, 2023
Chaoyue Liu, Dmitriy Drusvyatskiy, Mikhail Belkin, Damek Davis, Yi-An Ma

3 figures

Asymptotic normality and optimality in nonsmooth stochastic approximation

Jan 16, 2023
Damek Davis, Dmitriy Drusvyatskiy, Liwei Jiang

2 figures

Clustering a Mixture of Gaussians with Unknown Covariance

Oct 04, 2021
Damek Davis, Mateo Diaz, Kaizheng Wang

3 figures

Subgradient methods near active manifolds: saddle point avoidance, local convergence, and asymptotic normality

Aug 26, 2021
Damek Davis, Dmitriy Drusvyatskiy, Liwei Jiang

4 figures

Escaping strict saddle points of the Moreau envelope in nonsmooth optimization

Jun 17, 2021
Damek Davis, Mateo Díaz, Dmitriy Drusvyatskiy

3 figures

Active strict saddles in nonsmooth optimization

Dec 16, 2019
Damek Davis, Dmitriy Drusvyatskiy

3 figures

Robust stochastic optimization with the proximal point method

Aug 01, 2019
Damek Davis, Dmitriy Drusvyatskiy

3 figures

Stochastic algorithms with geometric step decay converge linearly on sharp functions

Jul 22, 2019
Damek Davis, Dmitriy Drusvyatskiy, Vasileios Charisopoulos

4 figures

Low-rank matrix recovery with composite optimization: good conditioning and rapid convergence

Apr 22, 2019
Vasileios Charisopoulos, Yudong Chen, Damek Davis, Mateo Díaz, Lijun Ding, Dmitriy Drusvyatskiy

4 figures