Eran Malach

Transcendence: Generative Models Can Outperform The Experts That Train Them

Jun 17, 2024

The Evolution of Statistical Induction Heads: In-Context Learning Markov Chains

Feb 16, 2024

Repeat After Me: Transformers are Better than State Space Models at Copying

Feb 01, 2024

Auto-Regressive Next-Token Predictors are Universal Learners

Sep 13, 2023

Pareto Frontiers in Neural Feature Learning: Data, Compute, Width, and Luck

Sep 07, 2023

Corgi^2: A Hybrid Offline-Online Approach To Storage-Aware Data Shuffling For SGD

Sep 04, 2023

SubTuning: Efficient Finetuning for Multi-Task Learning

Feb 14, 2023

Hidden Progress in Deep Learning: SGD Learns Parities Near the Computational Limit

Jul 18, 2022

Knowledge Distillation: Bad Models Can Be Good Role Models

Mar 28, 2022

On the Power of Differentiable Learning versus PAC and SQ Learning

Aug 09, 2021