Rong Ge

Towards Understanding the Data Dependency of Mixup-style Training

Oct 14, 2021
Muthu Chidambaram, Xiang Wang, Yuzheng Hu, Chenwei Wu, Rong Ge

Figures 1-4

Outlier-Robust Sparse Estimation via Non-Convex Optimization

Sep 23, 2021
Yu Cheng, Ilias Diakonikolas, Daniel M. Kane, Rong Ge, Shivam Gupta, Mahdi Soltanolkotabi

Figures 1-3

Understanding Deflation Process in Over-parametrized Tensor Decomposition

Jun 11, 2021
Rong Ge, Yunwei Ren, Xiang Wang, Mo Zhou

Figures 1-4

A Local Convergence Theory for Mildly Over-Parameterized Two-Layer Neural Network

Feb 04, 2021
Mo Zhou, Rong Ge, Chi Jin

Figures 1-4

Beyond Lazy Training for Over-parameterized Tensor Decomposition

Oct 22, 2020
Xiang Wang, Chenwei Wu, Jason D. Lee, Tengyu Ma, Rong Ge

Figure 1

Dissecting Hessian: Understanding Common Structure of Hessian in Neural Networks

Oct 08, 2020
Yikai Wu, Xingyu Zhu, Chenwei Wu, Annie Wang, Rong Ge

Figures 1-4

Efficient sampling from the Bingham distribution

Sep 30, 2020
Rong Ge, Holden Lee, Jianfeng Lu, Andrej Risteski


Guarantees for Tuning the Step Size using a Learning-to-Learn Approach

Jun 30, 2020
Xiang Wang, Shuai Yuan, Chenwei Wu, Rong Ge

Figures 1-4

Optimization Landscape of Tucker Decomposition

Jun 29, 2020
Abraham Frandsen, Rong Ge

Figure 1

Extracting Latent State Representations with Linear Dynamics from Rich Observations

Jun 29, 2020
Abraham Frandsen, Rong Ge

Figures 1-4