Shunta Akiyama

Block Coordinate Descent for Neural Networks Provably Finds Global Minima

Oct 26, 2025

Diffusion Models are Minimax Optimal Distribution Estimators

Mar 03, 2023

Excess Risk of Two-Layer ReLU Neural Networks in Teacher-Student Settings and its Superiority to Kernel Methods

Jun 06, 2022

On Learnability via Gradient Method for Two-Layer ReLU Neural Networks in Teacher-Student Setting

Jun 29, 2021

Benefit of deep learning with non-convex noisy gradient descent: Provable excess risk bound and superiority to kernel methods

Dec 06, 2020