
Quoc Tran-Dinh

Accelerated Randomized Block-Coordinate Algorithms for Co-coercive Equations and Applications

Jan 08, 2023

Gradient Descent-Type Methods: Background and Simple Unified Convergence Analysis

Dec 19, 2022

Halpern-Type Accelerated and Splitting Algorithms For Monotone Inclusions

Oct 15, 2021

Federated Learning with Randomized Douglas-Rachford Splitting Methods

Mar 05, 2021

Shuffling Gradient-Based Methods with Momentum

Nov 24, 2020

Convergence Analysis of Homotopy-SGD for non-convex optimization

Nov 20, 2020

Hogwild! over Distributed Local Data Sets with Linearly Increasing Mini-Batch Sizes

Oct 27, 2020

An Optimal Hybrid Variance-Reduced Algorithm for Stochastic Composite Nonconvex Optimization

Aug 20, 2020

Asynchronous Federated Learning with Reduced Number of Rounds and with Differential Privacy from Less Aggregated Gaussian Noise

Jul 17, 2020

Hybrid Variance-Reduced SGD Algorithms For Nonconvex-Concave Minimax Problems

Jun 27, 2020