Luke Zettlemoyer

University of Washington

Prompting Language Models for Linguistic Structure

Nov 15, 2022

Contrastive Decoding: Open-ended Text Generation as Optimization

Oct 27, 2022

RoMQA: A Benchmark for Robust, Multi-evidence, Multi-answer Question Answering

Oct 25, 2022

M2D2: A Massively Multi-domain Language Modeling Dataset

Oct 13, 2022

CORE: A Retrieve-then-Edit Framework for Counterfactual Data Generation

Oct 10, 2022

Binding Language Models in Symbolic Languages

Oct 06, 2022

Improving Policy Learning via Language Dynamics Distillation

Sep 30, 2022

Mega: Moving Average Equipped Gated Attention

Sep 26, 2022

Selective Annotation Makes Language Models Better Few-Shot Learners

Sep 05, 2022

LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale

Aug 15, 2022