Castling-ViT: Compressing Self-Attention via Switching Towards Linear-Angular Attention During Vision Transformer Inference

Nov 18, 2022


