
Luke Zettlemoyer

University of Washington

Fantastic Copyrighted Beasts and How (Not) to Generate Them

Jun 20, 2024

DataComp-LM: In search of the next generation of training sets for language models

Jun 18, 2024

Visual Sketchpad: Sketching as a Visual Chain of Thought for Multimodal Language Models

Jun 13, 2024

Computational Tradeoffs in Image Synthesis: Diffusion, Masked-Token, and Next-Token Prediction

May 21, 2024

MoDE: CLIP Data Experts via Clustering

Apr 24, 2024

Megalodon: Efficient LLM Pretraining and Inference with Unlimited Context Length

Apr 12, 2024

MYTE: Morphology-Driven Byte Encoding for Better and Fairer Multilingual Language Modeling

Mar 15, 2024

Reliable, Adaptable, and Attributable Language Models with Retrieval

Mar 05, 2024

Comparing Hallucination Detection Metrics for Multilingual Generation

Feb 16, 2024

Do Membership Inference Attacks Work on Large Language Models?

Feb 12, 2024