Luke Zettlemoyer

University of Washington

Improving Factuality with Explicit Working Memory

Dec 24, 2024

When Worse is Better: Navigating the compression-generation tradeoff in visual tokenization

Dec 20, 2024

LlamaFusion: Adapting Pretrained Language Models for Multimodal Generation

Dec 19, 2024

Byte Latent Transformer: Patches Scale Better Than Tokens

Dec 13, 2024

Memory Layers at Scale

Dec 12, 2024

ALMA: Alignment with Minimal Annotation

Dec 05, 2024

Negative Token Merging: Image-based Adversarial Feature Guidance

Dec 02, 2024

OpenScholar: Synthesizing Scientific Literature with Retrieval-augmented LMs

Nov 21, 2024

Generative Adapter: Contextualizing Language Models in Parameters with A Single Forward Pass

Nov 08, 2024

Mixture-of-Transformers: A Sparse and Scalable Architecture for Multi-Modal Foundation Models

Nov 07, 2024