Anna Rumshisky

Emergent Abilities in Reduced-Scale Generative Language Models

Apr 02, 2024
Sherin Muckatira, Vijeta Deshpande, Vladislav Lialin, Anna Rumshisky

Deconstructing In-Context Learning: Understanding Prompts via Corruption

Apr 02, 2024
Namrata Shivagunde, Vladislav Lialin, Sherin Muckatira, Anna Rumshisky

Prompt Perturbation Consistency Learning for Robust Language Models

Feb 24, 2024
Yao Qiang, Subhrangshu Nandi, Ninareh Mehrabi, Greg Ver Steeg, Anoop Kumar, Anna Rumshisky, Aram Galstyan

Let's Reinforce Step by Step

Nov 10, 2023
Sarah Pan, Vladislav Lialin, Sherin Muckatira, Anna Rumshisky

Stack More Layers Differently: High-Rank Training Through Low-Rank Updates

Jul 13, 2023
Vladislav Lialin, Namrata Shivagunde, Sherin Muckatira, Anna Rumshisky

Recipes for Sequential Pre-training of Multilingual Encoder and Seq2Seq Models

Jun 14, 2023
Saleh Soltan, Andy Rosenbaum, Tobias Falke, Qin Lu, Anna Rumshisky, Wael Hamza

Honey, I Shrunk the Language: Language Model Behavior at Reduced Scale

May 30, 2023
Vijeta Deshpande, Dan Pechi, Shree Thatte, Vladislav Lialin, Anna Rumshisky

Scalable and Accurate Self-supervised Multimodal Representation Learning without Aligned Video and Text Data

Apr 04, 2023
Vladislav Lialin, Stephen Rawls, David Chan, Shalini Ghosh, Anna Rumshisky, Wael Hamza

Larger Probes Tell a Different Story: Extending Psycholinguistic Datasets Via In-Context Learning

Mar 29, 2023
Namrata Shivagunde, Vladislav Lialin, Anna Rumshisky
