Myung-Hoon Cha

ETRI, Daejeon, Republic of Korea

Cost-Efficient LLM Serving in the Cloud: VM Selection with KV Cache Offloading

Apr 16, 2025

Shared Disk KV Cache Management for Efficient Multi-Instance Inference in RAG-Powered LLMs

Apr 16, 2025