
Itay Safran

On the Rate of Convergence of GD in Non-linear Neural Networks: An Adversarial Robustness Perspective

Mar 02, 2026

The Median is Easier than it Looks: Approximation with a Constant-Depth, Linear-Width ReLU Network

Feb 06, 2026

To Grok Grokking: Provable Grokking in Ridge Regression

Jan 27, 2026

A Depth Hierarchy for Computing the Maximum in ReLU Networks via Extremal Graph Theory

Jan 04, 2026

Provable Privacy Attacks on Trained Shallow Neural Networks

Oct 10, 2024

Depth Separations in Neural Networks: Separating the Dimension from the Accuracy

Feb 11, 2024

How Many Neurons Does it Take to Approximate the Maximum?

Jul 18, 2023

On the Effective Number of Linear Regions in Shallow Univariate ReLU Networks: Convergence Guarantees and Implicit Bias

May 18, 2022

Optimization-Based Separations for Neural Networks

Dec 04, 2021

Random Shuffling Beats SGD Only After Many Epochs on Ill-Conditioned Problems

Jun 12, 2021