
Pengfei Su

University of California, Merced, USA

Adversarial Contrastive Learning for LLM Quantization Attacks

Jan 06, 2026

AttnCache: Accelerating Self-Attention Inference for LLM Prefill via Attention Cache

Oct 29, 2025

NeuronMM: High-Performance Matrix Multiplication for LLM Inference on AWS Trainium

Oct 29, 2025