
Praneeth Netrapalli

Stochastic Gradient Descent Escapes Saddle Points Efficiently

Feb 13, 2019

A Short Note on Concentration Inequalities for Random Vectors with SubGaussian Norm

Feb 11, 2019

Minmax Optimization: Stable Limit Points of Gradient Descent Ascent are Locally Optimal

Feb 02, 2019

On the insufficiency of existing momentum schemes for Stochastic Optimization

Jul 31, 2018

Accelerating Stochastic Gradient Descent For Least Squares Regression

Jul 31, 2018

Parallelizing Stochastic Gradient Descent for Least Squares Regression: mini-batching, averaging, and model misspecification

Jul 31, 2018

A Markov Chain Theory Approach to Characterizing the Minimax Optimality of Stochastic Gradient Descent (for Least Squares)

Jul 21, 2018

Smoothed analysis for low-rank solutions to semidefinite programs in quadratic penalty form

Mar 01, 2018

Accelerated Gradient Descent Escapes Saddle Points Faster than Gradient Descent

Nov 28, 2017

Leverage Score Sampling for Faster Accelerated Regression and ERM

Nov 22, 2017