Sewon Min

FlexOlmo: Open Language Models for Flexible Data Use

Jul 09, 2025

Frustratingly Simple Retrieval Improves Challenging, Reasoning-Intensive Benchmarks

Jul 02, 2025

Spurious Rewards: Rethinking Training Signals in RLVR

Jun 12, 2025

LEANN: A Low-Storage Vector Index

Jun 09, 2025

ReasonIR: Training Retrievers for Reasoning Tasks

Apr 29, 2025

Reasoning Models Can Be Effective Without Thinking

Apr 14, 2025

OLMoTrace: Tracing Language Model Outputs Back to Trillions of Training Tokens

Apr 09, 2025

OLMoE: Open Mixture-of-Experts Language Models

Sep 03, 2024

CopyBench: Measuring Literal and Non-Literal Reproduction of Copyright-Protected Text in Language Model Generation

Jul 09, 2024

Do Membership Inference Attacks Work on Large Language Models?

Feb 12, 2024