
Lam M. Nguyen

Differential Private Hogwild! over Distributed Local Data Sets

Feb 17, 2021

Shuffling Gradient-Based Methods with Momentum

Nov 24, 2020

A Scalable MIP-based Method for Learning Optimal Multivariate Decision Trees

Nov 06, 2020

Hogwild! over Distributed Local Data Sets with Linearly Increasing Mini-Batch Sizes

Oct 27, 2020

An Optimal Hybrid Variance-Reduced Algorithm for Stochastic Composite Nonconvex Optimization

Aug 20, 2020

Asynchronous Federated Learning with Reduced Number of Rounds and with Differential Privacy from Less Aggregated Gaussian Noise

Jul 17, 2020

Hybrid Variance-Reduced SGD Algorithms For Nonconvex-Concave Minimax Problems

Jun 27, 2020

Finite-Time Analysis of Stochastic Gradient Descent under Markov Randomness

Apr 01, 2020

A Hybrid Stochastic Policy Gradient Algorithm for Reinforcement Learning

Mar 01, 2020

A Unified Convergence Analysis for Shuffling-Type Gradient Methods

Feb 19, 2020