Mehrdad Mahdavi

Michigan State University

Local Stochastic Gradient Descent Ascent: Convergence Analysis and Communication Efficiency

Feb 25, 2021

Distributionally Robust Federated Averaging

Feb 25, 2021

Communication-efficient k-Means for Edge-based Machine Learning

Feb 08, 2021

Online Structured Meta-learning

Oct 22, 2020

Federated Learning with Compression: Unified Analysis and Sharp Guarantees

Jul 02, 2020

Minimal Variance Sampling with Provable Guarantees for Fast Training of Graph Neural Networks

Jun 24, 2020

Adaptive Personalized Federated Learning

Mar 30, 2020

On the Convergence of Local Descent Methods in Federated Learning

Dec 06, 2019

Efficient Fair Principal Component Analysis

Nov 12, 2019

Local SGD with Periodic Averaging: Tighter Analysis and Adaptive Synchronization

Oct 30, 2019