
Noah A. Smith

Paul G. Allen School of Computer Science & Engineering, University of Washington, Allen Institute for Artificial Intelligence

Self-Instruct: Aligning Language Model with Self Generated Instructions

Dec 20, 2022

Demystifying Prompts in Language Models via Perplexity Estimation

Dec 08, 2022

Data-Efficient Finetuning Using Cross-Task Nearest Neighbors

Dec 01, 2022

Domain Mismatch Doesn't Always Prevent Cross-Lingual Transfer Learning

Nov 30, 2022

PromptCap: Prompt-Guided Task-Aware Image Captioning

Nov 15, 2022

How Much Does Attention Actually Attend? Questioning the Importance of Attention in Pretrained Transformers

Nov 07, 2022

Modeling Context With Linear Attention for Scalable Document-Level Translation

Oct 16, 2022

Transparency Helps Reveal When Language Models Learn Meaning

Oct 14, 2022

Measuring and Narrowing the Compositionality Gap in Language Models

Oct 07, 2022

Binding Language Models in Symbolic Languages

Oct 06, 2022