MHLA: Restoring Expressivity of Linear Attention via Token-Level Multi-Head

Jan 12, 2026

View paper on arXiv
