Rong Ge

Clemson University

Customizing ML Predictions for Online Algorithms (May 18, 2022)

Online Algorithms with Multiple Predictions (May 08, 2022)

Towards Understanding the Data Dependency of Mixup-style Training (Oct 14, 2021)

Outlier-Robust Sparse Estimation via Non-Convex Optimization (Sep 23, 2021)

Understanding Deflation Process in Over-parametrized Tensor Decomposition (Jun 11, 2021)

A Local Convergence Theory for Mildly Over-Parameterized Two-Layer Neural Network (Feb 04, 2021)

Beyond Lazy Training for Over-parameterized Tensor Decomposition (Oct 22, 2020)

Dissecting Hessian: Understanding Common Structure of Hessian in Neural Networks (Oct 08, 2020)

Efficient sampling from the Bingham distribution (Sep 30, 2020)

Guarantees for Tuning the Step Size using a Learning-to-Learn Approach (Jun 30, 2020)