
Peter Richtarik

Variance-Reduced Methods for Machine Learning

Oct 02, 2020

Adaptive Learning of the Optimal Mini-Batch Size of SGD

May 03, 2020

Dualize, Split, Randomize: Fast Nonsmooth Optimization Algorithms

Apr 03, 2020

From Local SGD to Local Fixed Point Methods for Federated Learning

Apr 03, 2020

Variance Reduced Coordinate Descent with Acceleration: New Method With a Surprising Application to Finite-Sum Problems

Feb 11, 2020

Natural Compression for Distributed Deep Learning

May 27, 2019

SGD: General Analysis and Improved Rates

Jan 27, 2019

Don't Jump Through Hoops and Remove Those Loops: SVRG and Katyusha are Better Without the Outer Loop

Jan 24, 2019

SEGA: Variance Reduction via Gradient Sketching

Oct 18, 2018

Weighted Low-Rank Approximation of Matrices and Background Modeling

Apr 15, 2018