On-Device Qwen2.5: Efficient LLM Inference with Model Compression and Hardware Acceleration

Apr 24, 2025