Vladislav Lialin

Emergent Abilities in Reduced-Scale Generative Language Models
Apr 02, 2024
Sherin Muckatira, Vijeta Deshpande, Vladislav Lialin, Anna Rumshisky

Deconstructing In-Context Learning: Understanding Prompts via Corruption
Apr 02, 2024
Namrata Shivagunde, Vladislav Lialin, Sherin Muckatira, Anna Rumshisky

Recent Advances, Applications, and Open Challenges in Machine Learning for Health: Reflections from Research Roundtables at ML4H 2023 Symposium
Mar 03, 2024
Hyewon Jeong, Sarah Jabbour, Yuzhe Yang, Rahul Thapta, Hussein Mozannar, William Jongwon Han, Nikita Mehandru, Michael Wornow, Vladislav Lialin, Xin Liu, Alejandro Lozano, Jiacheng Zhu, Rafal Dariusz Kocielnik, Keith Harrigian, Haoran Zhang, Edward Lee, Milos Vukadinovic, Aparna Balagopalan, Vincent Jeanselme, Katherine Matton, Ilker Demirel, Jason Fries, Parisa Rashidi, Brett Beaulieu-Jones, Xuhai Orson Xu, Matthew McDermott, Tristan Naumann, Monica Agrawal, Marinka Zitnik, Berk Ustun, Edward Choi, Kristen Yeom, Gamze Gursoy, Marzyeh Ghassemi, Emma Pierson, George Chen, Sanjat Kanjilal, Michael Oberst, Linying Zhang, Harvineet Singh, Tom Hartvigsen, Helen Zhou, Chinasa T. Okolo

Let's Reinforce Step by Step
Nov 10, 2023
Sarah Pan, Vladislav Lialin, Sherin Muckatira, Anna Rumshisky

Stack More Layers Differently: High-Rank Training Through Low-Rank Updates
Jul 13, 2023
Vladislav Lialin, Namrata Shivagunde, Sherin Muckatira, Anna Rumshisky

Honey, I Shrunk the Language: Language Model Behavior at Reduced Scale
May 30, 2023
Vijeta Deshpande, Dan Pechi, Shree Thatte, Vladislav Lialin, Anna Rumshisky

Scalable and Accurate Self-supervised Multimodal Representation Learning without Aligned Video and Text Data
Apr 04, 2023
Vladislav Lialin, Stephen Rawls, David Chan, Shalini Ghosh, Anna Rumshisky, Wael Hamza

Larger Probes Tell a Different Story: Extending Psycholinguistic Datasets Via In-Context Learning
Mar 29, 2023
Namrata Shivagunde, Vladislav Lialin, Anna Rumshisky

Scaling Down to Scale Up: A Guide to Parameter-Efficient Fine-Tuning
Mar 28, 2023
Vladislav Lialin, Vijeta Deshpande, Anna Rumshisky
