Alisa Liu

A Taxonomy of Ambiguity Types for NLP

Mar 21, 2024

Tuning Language Models by Proxy

Jan 16, 2024

That was the last straw, we need more: Are Translation Systems Sensitive to Disambiguating Context?

Oct 23, 2023

Inverse Scaling: When Bigger Isn't Better

Jun 15, 2023

How Language Model Hallucinations Can Snowball

May 22, 2023

We're Afraid Language Models Aren't Modeling Ambiguity

Apr 27, 2023

Self-Instruct: Aligning Language Model with Self Generated Instructions

Dec 20, 2022

Detoxifying Text with MaRCo: Controllable Revision with Experts and Anti-Experts

Dec 20, 2022

WANLI: Worker and AI Collaboration for Natural Language Inference Dataset Creation

Jan 16, 2022

Generated Knowledge Prompting for Commonsense Reasoning

Oct 15, 2021