Lisa Bylinina

Too Much Information: Keeping Training Simple for BabyLMs

Nov 03, 2023
Lukas Edman, Lisa Bylinina

Leverage Points in Modality Shifts: Comparing Language-only and Multimodal Word Representations

Jun 04, 2023
Aleksey Tikhonov, Lisa Bylinina, Denis Paperno

Old BERT, New Tricks: Artificial Language Learning for Pre-Trained Language Models

Sep 13, 2021
Lisa Bylinina, Alexey Tikhonov, Ekaterina Garmash

Transformers in the loop: Polarity in neural models of language

Sep 08, 2021
Lisa Bylinina, Alexey Tikhonov
