Alex Damian

The Generative Leap: Sharp Sample Complexity for Efficiently Learning Gaussian Multi-Index Models

Jun 05, 2025

Learning Compositional Functions with Transformers from Easy-to-Hard Data

May 29, 2025

Understanding Optimization in Deep Learning with Central Flows

Oct 31, 2024

Computational-Statistical Gaps in Gaussian Single-Index Models

Mar 12, 2024

How Transformers Learn Causal Structure with Gradient Descent

Feb 22, 2024

Fine-Tuning Language Models with Just Forward Passes

May 27, 2023

Smoothing the Landscape Boosts the Signal for SGD: Optimal Sample Complexity for Learning Single Index Models

May 18, 2023

Provable Guarantees for Nonlinear Feature Learning in Three-Layer Neural Networks

May 11, 2023

Self-Stabilization: The Implicit Bias of Gradient Descent at the Edge of Stability

Sep 30, 2022

Neural Networks can Learn Representations with Gradient Descent

Jun 30, 2022