
Zhonghui Zhang

DART: Diffusion-Inspired Speculative Decoding for Fast LLM Inference

Jan 27, 2026

FlashForge: Ultra-Efficient Prefix-Aware Attention for LLM Decoding

May 23, 2025