Matus Telgarsky

Early-stopped neural networks are consistent

Jun 10, 2021
Ziwei Ji, Justin D. Li, Matus Telgarsky

Generalization bounds via distillation
Apr 12, 2021
Daniel Hsu, Ziwei Ji, Matus Telgarsky, Lan Wang

Gradient descent follows the regularization path for general losses
Jun 19, 2020
Ziwei Ji, Miroslav Dudík, Robert E. Schapire, Matus Telgarsky

Directional convergence and alignment in deep learning
Jun 11, 2020
Ziwei Ji, Matus Telgarsky

Neural tangent kernels, transportation mappings, and universal approximation
Oct 15, 2019
Ziwei Ji, Matus Telgarsky, Ruicheng Xian

Polylogarithmic width suffices for gradient descent to achieve arbitrarily small test error with shallow ReLU networks
Sep 29, 2019
Ziwei Ji, Matus Telgarsky

Approximation power of random neural networks
Jun 18, 2019
Bolton Bailey, Ziwei Ji, Matus Telgarsky, Ruicheng Xian

A gradual, semi-discrete approach to generative network training via explicit Wasserstein minimization
Jun 11, 2019
Yucheng Chen, Matus Telgarsky, Chao Zhang, Bolton Bailey, Daniel Hsu, Jian Peng

A refined primal-dual analysis of the implicit bias
Jun 11, 2019
Ziwei Ji, Matus Telgarsky