Yair Lakretz

UNICOG-U992

What Makes Two Language Models Think Alike? (Jun 24, 2024)

What makes two models think alike? (Jun 18, 2024)

Metric-Learning Encoding Models Identify Processing Profiles of Linguistic Features in BERT's Representations (Feb 18, 2024)

Language acquisition: do children and language models follow similar learning stages? (Jun 06, 2023)

Probing Brain Context-Sensitivity with Masked-Attention Generation (May 23, 2023)

Information-Restricted Neural Language Models Reveal Different Brain Regions' Sensitivity to Semantics, Syntax and Context (Feb 28, 2023)

Neural Language Models are not Born Equal to Fit Brain Data, but Training Helps (Jul 07, 2022)

Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models (Jun 10, 2022)

Causal Transformers Perform Below Chance on Recursive Nested Constructions, Unlike Humans (Oct 14, 2021)

Can RNNs learn Recursive Nested Subject-Verb Agreements? (Jan 06, 2021)