
Alisa Liu


A Taxonomy of Ambiguity Types for NLP

Mar 21, 2024
Margaret Y. Li, Alisa Liu, Zhaofeng Wu, Noah A. Smith


Tuning Language Models by Proxy

Jan 16, 2024
Alisa Liu, Xiaochuang Han, Yizhong Wang, Yulia Tsvetkov, Yejin Choi, Noah A. Smith


That was the last straw, we need more: Are Translation Systems Sensitive to Disambiguating Context?

Oct 23, 2023
Jaechan Lee, Alisa Liu, Orevaoghene Ahia, Hila Gonen, Noah A. Smith


Inverse Scaling: When Bigger Isn't Better

Jun 15, 2023
Ian R. McKenzie, Alexander Lyzhov, Michael Pieler, Alicia Parrish, Aaron Mueller, Ameya Prabhu, Euan McLean, Aaron Kirtland, Alexis Ross, Alisa Liu, Andrew Gritsevskiy, Daniel Wurgaft, Derik Kauffman, Gabriel Recchia, Jiacheng Liu, Joe Cavanagh, Max Weiss, Sicong Huang, The Floating Droid, Tom Tseng, Tomasz Korbak, Xudong Shen, Yuhui Zhang, Zhengping Zhou, Najoung Kim, Samuel R. Bowman, Ethan Perez


How Language Model Hallucinations Can Snowball

May 22, 2023
Muru Zhang, Ofir Press, William Merrill, Alisa Liu, Noah A. Smith


We're Afraid Language Models Aren't Modeling Ambiguity

Apr 27, 2023
Alisa Liu, Zhaofeng Wu, Julian Michael, Alane Suhr, Peter West, Alexander Koller, Swabha Swayamdipta, Noah A. Smith, Yejin Choi


Self-Instruct: Aligning Language Models with Self-Generated Instructions

Dec 20, 2022
Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, Hannaneh Hajishirzi


Detoxifying Text with MaRCo: Controllable Revision with Experts and Anti-Experts

Dec 20, 2022
Skyler Hallinan, Alisa Liu, Yejin Choi, Maarten Sap


WANLI: Worker and AI Collaboration for Natural Language Inference Dataset Creation

Jan 16, 2022
Alisa Liu, Swabha Swayamdipta, Noah A. Smith, Yejin Choi


Generated Knowledge Prompting for Commonsense Reasoning

Oct 15, 2021
Jiacheng Liu, Alisa Liu, Ximing Lu, Sean Welleck, Peter West, Ronan Le Bras, Yejin Choi, Hannaneh Hajishirzi
