Gilad Yehudai

From Tempered to Benign Overfitting in ReLU Neural Networks

May 24, 2023

Reconstructing Training Data from Multiclass Neural Networks

May 05, 2023

Adversarial Examples Exist in Two-Layer ReLU Networks for Low Dimensional Data Manifolds

Mar 01, 2023

Reconstructing Training Data from Trained Neural Networks

Jun 15, 2022

Gradient Methods Provably Converge to Non-Robust Networks

Feb 09, 2022

Width is Less Important than Depth in ReLU Neural Networks

Feb 08, 2022

On the Optimal Memorization Power of ReLU Neural Networks

Oct 07, 2021

Learning a Single Neuron with Bias Using Gradient Descent

Jun 02, 2021

The Connection Between Approximation, Depth Separation and Learnability in Neural Networks

Jan 31, 2021

On Size Generalization in Graph Neural Networks

Oct 17, 2020