
Kyunghyun Cho

Why Knowledge Distillation Works in Generative Models: A Minimal Working Explanation

May 19, 2025

Machine Learning: a Lecture Note

May 06, 2025

RAGEN: Understanding Self-Evolution in LLM Agents via Multi-Turn Reinforcement Learning

Apr 24, 2025

Black Box Causal Inference: Effect Estimation via Meta Prediction

Mar 07, 2025

An Overview of Large Language Models for Statisticians

Feb 25, 2025

Meta-Statistical Learning: Supervised Learning of Statistical Inference

Feb 19, 2025

NaturalReasoning: Reasoning in the Wild with 2.8M Challenging Questions

Feb 18, 2025

Cost-Efficient Continual Learning with Sufficient Exemplar Memory

Feb 11, 2025

Supervised Contrastive Block Disentanglement

Feb 11, 2025

The Geometry of Prompting: Unveiling Distinct Mechanisms of Task Adaptation in Language Models

Feb 11, 2025