
Praneeth Netrapalli

Follow the Perturbed Leader: Optimism and Fast Parallel Algorithms for Smooth Minimax Games
Jun 13, 2020

P-SIF: Document Embeddings Using Partition Averaging
May 18, 2020

MOReL: Model-Based Offline Reinforcement Learning
May 12, 2020

Efficient Domain Generalization via Common-Specific Low-Rank Decomposition
Apr 07, 2020

Non-Gaussianity of Stochastic Gradient Noise
Oct 25, 2019

Efficient Algorithms for Smooth Minimax Optimization
Jul 02, 2019

Making the Last Iterate of SGD Information Theoretically Optimal
May 29, 2019

The Step Decay Schedule: A Near Optimal, Geometrically Decaying Learning Rate Procedure
Apr 29, 2019

Online Non-Convex Learning: Following the Perturbed Leader is Optimal
Mar 19, 2019

SGD without Replacement: Sharper Rates for General Smooth Convex Functions
Mar 04, 2019