
David H. Yang

SentenceKV: Efficient LLM Inference via Sentence-Level Semantic KV Caching

Apr 01, 2025
[Figures 1–4 omitted]