
Minwu Kim

Reinforcement Learning vs. Distillation: Understanding Accuracy and Capability in LLM Reasoning

May 20, 2025

Warm Up Before You Train: Unlocking General Reasoning in Resource-Constrained Settings

May 19, 2025

Mathematical Reasoning in Large Language Models: Assessing Logical and Arithmetic Errors across Wide Numerical Ranges

Feb 12, 2025