James A. Michaelov

Structural Priming Demonstrates Abstract Grammatical Representations in Multilingual Language Models

Nov 15, 2023
James A. Michaelov, Catherine Arnett, Tyler A. Chang, Benjamin K. Bergen

Crosslingual Structural Priming and the Pre-Training Dynamics of Bilingual Language Models

Oct 11, 2023
Catherine Arnett, Tyler A. Chang, James A. Michaelov, Benjamin K. Bergen

Emergent inabilities? Inverse scaling over the course of pretraining

May 24, 2023
James A. Michaelov, Benjamin K. Bergen

Can Peanuts Fall in Love with Distributional Semantics?

Jan 20, 2023
James A. Michaelov, Seana Coulson, Benjamin K. Bergen

'Rarely' a problem? Language models exhibit inverse scaling in their predictions following 'few'-type quantifiers

Dec 16, 2022
James A. Michaelov, Benjamin K. Bergen

Collateral facilitation in humans and language models

Nov 09, 2022
James A. Michaelov, Benjamin K. Bergen

Do language models make human-like predictions about the coreferents of Italian anaphoric zero pronouns?

Aug 30, 2022
James A. Michaelov, Benjamin K. Bergen

So Cloze yet so Far: N400 Amplitude is Better Predicted by Distributional Information than Human Predictability Judgements

Sep 02, 2021
James A. Michaelov, Seana Coulson, Benjamin K. Bergen

Different kinds of cognitive plausibility: why are transformers better than RNNs at predicting N400 amplitude?

Jul 20, 2021
James A. Michaelov, Megan D. Bardolph, Seana Coulson, Benjamin K. Bergen

How well does surprisal explain N400 amplitude under different experimental conditions?

Oct 09, 2020
James A. Michaelov, Benjamin K. Bergen
