Xunpeng Huang

Faster Sampling via Stochastic Gradient Proximal Sampler

May 27, 2024

Reverse Transition Kernel: A Flexible Framework to Accelerate Diffusion Inference

May 26, 2024

An Improved Analysis of Langevin Algorithms with Prior Diffusion for Non-Log-Concave Sampling

Mar 10, 2024

Faster Sampling without Isoperimetry via Diffusion-based Monte Carlo

Jan 12, 2024

Monte Carlo Sampling without Isoperimetry: A Reverse Diffusion Approach

Jul 05, 2023

Mean-Field Analysis of Two-Layer Neural Networks: Global Optimality with Linear Convergence Rates

May 19, 2022

ACMo: Angle-Calibrated Moment Methods for Stochastic Optimization

Jun 12, 2020

Adaptive Gradient Methods Can Be Provably Faster than SGD after Finite Epochs

Jun 12, 2020

SPAN: A Stochastic Projected Approximate Newton Method

Mar 03, 2020

Enhancing Network Embedding with Auxiliary Information: An Explicit Matrix Factorization Perspective

Mar 05, 2018