Jinwoo Kim

Dept. of Computer Science and Engineering, Sogang University, Seoul, Republic of Korea

One-step Language Modeling via Continuous Denoising
Feb 18, 2026

Continuous Diffusion Models Can Obey Formal Syntax
Feb 12, 2026

Inverting Data Transformations via Diffusion Sampling
Feb 09, 2026

FiLoRA: Focus-and-Ignore LoRA for Controllable Feature Reliance
Feb 02, 2026

Near-Real-Time InSAR Phase Estimation for Large-Scale Surface Displacement Monitoring
Nov 15, 2025

Flock: A Knowledge Graph Foundation Model via Learning on Random Walks
Oct 01, 2025

ORIDa: Object-centric Real-world Image Composition Dataset
Jun 10, 2025

Shared Disk KV Cache Management for Efficient Multi-Instance Inference in RAG-Powered LLMs
Apr 16, 2025

Cost-Efficient LLM Serving in the Cloud: VM Selection with KV Cache Offloading
Apr 16, 2025

EO-VLM: VLM-Guided Energy Overload Attacks on Vision Models
Apr 11, 2025