Tal Linzen

Coloring the Blank Slate: Pre-training Imparts a Hierarchical Inductive Bias to Sequence-to-sequence Models

Mar 17, 2022

Improving Compositional Generalization with Latent Structure and Data Augmentation

Dec 14, 2021

How much do language models copy from their training data? Evaluating linguistic novelty in text generation using RAVEN

Nov 18, 2021

Learning to Generalize Compositionally by Transferring Across Semantic Parsing Tasks

Nov 09, 2021

The Language Model Understood the Prompt was Ambiguous: Probing Syntactic Uncertainty Through Generation

Sep 16, 2021

Frequency Effects on Syntactic Rule Learning in Transformers

Sep 14, 2021

NOPE: A Corpus of Naturally-Occurring Presuppositions in English

Sep 14, 2021

The MultiBERTs: BERT Reproductions for Robustness Analysis

Jun 30, 2021

Causal Analysis of Syntactic Agreement Mechanisms in Neural Language Models

Jun 22, 2021

Counterfactual Interventions Reveal the Causal Effect of Relative Clause Representations on Agreement Prediction

May 19, 2021