Luke Zettlemoyer

Few-shot Mining of Naturally Occurring Inputs and Outputs

May 09, 2022
Mandar Joshi, Terra Blevins, Mike Lewis, Daniel S. Weld, Luke Zettlemoyer

OPT: Open Pre-trained Transformer Language Models

May 05, 2022
Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, Luke Zettlemoyer

Natural Language to Code Translation with Execution

Apr 25, 2022
Freda Shi, Daniel Fried, Marjan Ghazvininejad, Luke Zettlemoyer, Sida I. Wang

Language Contamination Explains the Cross-lingual Capabilities of English Pretrained Models

Apr 17, 2022
Terra Blevins, Luke Zettlemoyer

InCoder: A Generative Model for Code Infilling and Synthesis

Apr 17, 2022
Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Wen-tau Yih, Luke Zettlemoyer, Mike Lewis

Improving Passage Retrieval with Zero-Shot Question Generation

Apr 15, 2022
Devendra Singh Sachan, Mike Lewis, Mandar Joshi, Armen Aghajanyan, Wen-tau Yih, Joelle Pineau, Luke Zettlemoyer

PERFECT: Prompt-free and Efficient Few-shot Learning with Language Models

Apr 03, 2022
Rabeeh Karimi Mahabadi, Luke Zettlemoyer, James Henderson, Marzieh Saeidi, Lambert Mathias, Veselin Stoyanov, Majid Yazdani

Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?

Feb 25, 2022
Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, Luke Zettlemoyer

Whose Language Counts as High Quality? Measuring Language Ideologies in Text Data Selection

Jan 26, 2022
Suchin Gururangan, Dallas Card, Sarah K. Dreier, Emily K. Gade, Leroy Z. Wang, Zeyu Wang, Luke Zettlemoyer, Noah A. Smith
