Wen-tau Yih

Learning to Reason for Factuality

Aug 07, 2025

MetaCLIP 2: A Worldwide Scaling Recipe

Jul 29, 2025

FlexOlmo: Open Language Models for Flexible Data Use

Jul 09, 2025

ConfQA: Answer Only If You Are Confident

Jun 08, 2025

ReasonIR: Training Retrievers for Reasoning Tasks

Apr 29, 2025

DRAMA: Diverse Augmentation from Large Language Models to Smaller Dense Retrievers

Feb 25, 2025

Data-Efficient Pretraining with Group-Level Data Influence Modeling

Feb 20, 2025

SelfCite: Self-Supervised Alignment for Context Attribution in Large Language Models

Feb 13, 2025

Improving Factuality with Explicit Working Memory

Dec 24, 2024

Memory Layers at Scale

Dec 12, 2024