Tal Linzen

Language Models Struggle to Use Representations Learned In-Context

Feb 04, 2026

Deconstructing sentence disambiguation by joint latent modeling of reading paradigms: LLM surprisal is not enough

Feb 04, 2026

RELIC: Evaluating Compositional Instruction Following via Language Recognition

Jun 05, 2025

Multilingual Prompting for Improving LLM Generation Diversity

May 21, 2025

Findings of the BabyLM Challenge: Sample-Efficient Pretraining on Developmentally Plausible Corpora

Apr 10, 2025

Between Circuits and Chomsky: Pre-pretraining on Formal Languages Imparts Linguistic Biases

Feb 26, 2025

Rapid Word Learning Through Meta In-Context Learning

Feb 20, 2025

Findings of the Second BabyLM Challenge: Sample-Efficient Pretraining on Developmentally Plausible Corpora

Dec 06, 2024

What Goes Into a LM Acceptability Judgment? Rethinking the Impact of Frequency and Length

Nov 04, 2024

How Does Code Pretraining Affect Language Model Task Performance?

Sep 06, 2024