Allyson Ettinger — Publications

On the Interplay Between Fine-tuning and Composition in Transformers

Jun 01, 2021

Do language models learn typicality judgments from text?

May 06, 2021

Assessing Phrasal Representation and Composition in Transformers

Oct 14, 2020

Exploring BERT's Sensitivity to Lexical Cues using Tests from Semantic Priming

Oct 06, 2020

Learning to Ignore: Long Document Coreference with Bounded Memory Neural Networks

Oct 06, 2020

Adding Recurrence to Pretrained Transformers for Improved Efficiency and Context Size

Aug 16, 2020

PeTra: A Sparsely Supervised Memory Model for People Tracking

May 06, 2020

Spying on your neighbors: Fine-grained probing of contextual embeddings for information about surrounding words

May 04, 2020

What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models

Jul 31, 2019

Assessing Composition in Sentence Vector Representations

Sep 11, 2018