
Satyen Kale

Google AI

Stacking as Accelerated Gradient Descent

Mar 08, 2024

Efficient Stagewise Pretraining via Progressive Subnetworks

Feb 08, 2024

Asynchronous Local-SGD Training for Language Modeling

Jan 17, 2024

Improved Differentially Private and Lazy Online Convex Optimization

Dec 20, 2023

On the Convergence of Federated Averaging with Cyclic Client Participation

Feb 06, 2023

From Gradient Flow on Population Loss to Learning with Stochastic Gradient Descent

Oct 13, 2022

Private Matrix Approximation and Geometry of Unitary Orbits

Jul 06, 2022

Beyond Uniform Lipschitz Condition in Differentially Private Optimization

Jun 21, 2022

On the Unreasonable Effectiveness of Federated Averaging with Heterogeneous Data

Jun 09, 2022

Self-Consistency of the Fokker-Planck Equation

Jun 02, 2022