Matus Telgarsky

UCSD

Actor-critic is implicitly biased towards high entropy optimal policies

Oct 21, 2021

Fast Margin Maximization via Dual Acceleration

Jul 01, 2021

Early-stopped neural networks are consistent

Jun 10, 2021

Generalization bounds via distillation

Apr 12, 2021

Gradient descent follows the regularization path for general losses

Jun 19, 2020

Directional convergence and alignment in deep learning

Jun 11, 2020

Neural tangent kernels, transportation mappings, and universal approximation

Oct 15, 2019

Polylogarithmic width suffices for gradient descent to achieve arbitrarily small test error with shallow ReLU networks

Sep 29, 2019

Approximation power of random neural networks

Jun 18, 2019

A gradual, semi-discrete approach to generative network training via explicit Wasserstein minimization

Jun 11, 2019