Bao Wang

Stochastic Gradient Descent with Nonlinear Conjugate Gradient-Style Adaptive Momentum

Dec 03, 2020

An Integrated Approach to Produce Robust Models with High Efficiency

Aug 31, 2020

MomentumRNN: Integrating Momentum into Recurrent Neural Networks

Jun 12, 2020

Exploring Private Federated Learning with Laplacian Smoothing

May 01, 2020

Sparsity Meets Robustness: Channel Pruning for the Feynman-Kac Formalism Principled Robust Deep Neural Nets

Mar 02, 2020

Scheduled Restart Momentum for Accelerated Stochastic Gradient Descent

Feb 24, 2020

Laplacian Smoothing Stochastic Gradient Markov Chain Monte Carlo

Nov 02, 2019

Graph Interpolating Activation Improves Both Natural and Robust Accuracies in Data-Efficient Deep Learning

Jul 16, 2019

DP-LSSGD: A Stochastic Optimization Method to Lift the Utility in Privacy-Preserving ERM

Jun 28, 2019

A Study on Graph-Structured Recurrent Neural Networks and Sparsification with Application to Epidemic Forecasting

Feb 13, 2019