Luke Zettlemoyer

University of Washington

Mixture-of-Transformers: A Sparse and Scalable Architecture for Multi-Modal Foundation Models

Nov 07, 2024

The Root Shapes the Fruit: On the Persistence of Gender-Exclusive Harms in Aligned Language Models

Nov 06, 2024

Altogether: Image Captioning via Re-aligning Alt-text

Oct 22, 2024

Latent Action Pretraining from Videos

Oct 15, 2024

Transfusion: Predict the Next Token and Diffuse Images with One Multi-Modal Model

Aug 20, 2024

Does Liking Yellow Imply Driving a School Bus? Semantic Leakage in Language Models

Aug 12, 2024

Better Alignment with Instruction Back-and-Forth Translation

Aug 08, 2024

MoMa: Efficient Early-Fusion Pre-training with Mixture of Modality-Aware Experts

Jul 31, 2024

CopyBench: Measuring Literal and Non-Literal Reproduction of Copyright-Protected Text in Language Model Generation

Jul 09, 2024

MUSE: Machine Unlearning Six-Way Evaluation for Language Models

Jul 08, 2024