Taesup Kim

ATAS: Any-to-Any Self-Distillation for Enhanced Open-Vocabulary Dense Prediction

Jun 10, 2025

From Threat to Tool: Leveraging Refusal-Aware Injection Attacks for Safety Alignment

Jun 07, 2025

Parameter-Efficient Fine-Tuning with Column Space Projection

May 26, 2025

Generalized and Personalized Federated Learning with Foundation Models via Orthogonal Transformations

May 26, 2025

Regularized Personalization of Text-to-Image Diffusion Models without Distributional Drift

May 26, 2025

Energy-based Preference Optimization for Test-time Adaptation

May 26, 2025

"Well, Keep Thinking": Enhancing LLM Reasoning with Adaptive Injection Decoding

Mar 13, 2025

Object-Centric World Model for Language-Guided Manipulation

Mar 08, 2025

Retaining and Enhancing Pre-trained Knowledge in Vision-Language Models with Prompt Ensembling

Dec 10, 2024

When Vision Models Meet Parameter Efficient Look-Aside Adapters Without Large-Scale Audio Pretraining

Dec 08, 2024