Yoon Kim

Learning to Interpret Weight Differences in Language Models

Oct 06, 2025

On the Same Wavelength? Evaluating Pragmatic Reasoning in Language Models across Broad Concepts

Sep 08, 2025

Beyond Binary Rewards: Training LMs to Reason About Their Uncertainty

Jul 22, 2025

Self-Adapting Language Models

Jun 12, 2025

Log-Linear Attention

Jun 05, 2025

FlashFormer: Whole-Model Kernels for Efficient Low-Batch Inference

May 28, 2025

PaTH Attention: Position Encoding via Accumulating Householder Transformations

May 22, 2025

Multimodal LLM Augmented Reasoning for Interpretable Visual Perception Analysis

Apr 16, 2025

reWordBench: Benchmarking and Improving the Robustness of Reward Models with Transformed Inputs

Mar 14, 2025

On the Duality between Gradient Transformations and Adapters

Feb 19, 2025