Ohad Shamir

Initialization-Dependent Sample Complexity of Linear Predictors and Neural Networks

May 25, 2023

From Tempered to Benign Overfitting in ReLU Neural Networks

May 24, 2023

Deterministic Nonsmooth Nonconvex Optimization

Feb 16, 2023

On the Complexity of Finding Small Subgradients in Nonsmooth Optimization

Sep 21, 2022

Reconstructing Training Data from Trained Neural Networks

Jun 15, 2022

The Sample Complexity of One-Hidden-Layer Neural Networks

Feb 13, 2022

The Implicit Bias of Benign Overfitting

Feb 13, 2022

Gradient Methods Provably Converge to Non-Robust Networks

Feb 09, 2022

Width is Less Important than Depth in ReLU Neural Networks

Feb 08, 2022

Implicit Regularization Towards Rank Minimization in ReLU Networks

Jan 30, 2022