Noah A. Smith

One Embedder, Any Task: Instruction-Finetuned Text Embeddings

Dec 20, 2022
Hongjin Su, Weijia Shi, Jungo Kasai, Yizhong Wang, Yushi Hu, Mari Ostendorf, Wen-tau Yih, Noah A. Smith, Luke Zettlemoyer, Tao Yu

Demystifying Prompts in Language Models via Perplexity Estimation

Dec 08, 2022
Hila Gonen, Srini Iyer, Terra Blevins, Noah A. Smith, Luke Zettlemoyer

Data-Efficient Finetuning Using Cross-Task Nearest Neighbors

Dec 01, 2022
Hamish Ivison, Noah A. Smith, Hannaneh Hajishirzi, Pradeep Dasigi

Domain Mismatch Doesn't Always Prevent Cross-Lingual Transfer Learning

Nov 30, 2022
Daniel Edmiston, Phillip Keung, Noah A. Smith

PromptCap: Prompt-Guided Task-Aware Image Captioning

Nov 15, 2022
Yushi Hu, Hang Hua, Zhengyuan Yang, Weijia Shi, Noah A. Smith, Jiebo Luo

How Much Does Attention Actually Attend? Questioning the Importance of Attention in Pretrained Transformers

Nov 07, 2022
Michael Hassid, Hao Peng, Daniel Rotem, Jungo Kasai, Ivan Montero, Noah A. Smith, Roy Schwartz

Modeling Context With Linear Attention for Scalable Document-Level Translation

Oct 16, 2022
Zhaofeng Wu, Hao Peng, Nikolaos Pappas, Noah A. Smith

Transparency Helps Reveal When Language Models Learn Meaning

Oct 14, 2022
Zhaofeng Wu, William Merrill, Hao Peng, Iz Beltagy, Noah A. Smith

Measuring and Narrowing the Compositionality Gap in Language Models

Oct 07, 2022
Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah A. Smith, Mike Lewis

Binding Language Models in Symbolic Languages

Oct 06, 2022
Zhoujun Cheng, Tianbao Xie, Peng Shi, Chengzu Li, Rahul Nadkarni, Yushi Hu, Caiming Xiong, Dragomir Radev, Mari Ostendorf, Luke Zettlemoyer, Noah A. Smith, Tao Yu
