Jaehyung Kim

Learning from the Undesirable: Robust Adaptation of Language Models without Forgetting

Nov 17, 2025

TiTok: Transfer Token-level Knowledge via Contrastive Excess to Transplant LoRA

Oct 06, 2025

Fast and Fluent Diffusion Language Models via Convolutional Decoding and Rejective Fine-tuning

Sep 18, 2025

Towards an Introspective Dynamic Model of Globally Distributed Computing Infrastructures

Jun 24, 2025

Personalized LLM Decoding via Contrasting Personal Preference

Jun 13, 2025

Collaborative LLM Inference via Planning for Efficient Reasoning

Jun 13, 2025

Revisit What You See: Disclose Language Prior in Vision Tokens for Efficient Guided Decoding of LVLMs

Jun 11, 2025

LLMs Think, But Not In Your Flow: Reasoning-Level Personalization for Black-Box Large Language Models

May 28, 2025

Improving Chemical Understanding of LLMs via SMILES Parsing

May 22, 2025

Extracting and Emulsifying Cultural Explanation to Improve Multilingual Capability of LLMs

Mar 07, 2025