DeiT


Local Scale Equivariance with Latent Deep Equilibrium Canonicalizer

Aug 19, 2025

Calibration Attention: Instance-wise Temperature Scaling for Vision Transformers

Aug 12, 2025

Variance-Based Pruning for Accelerating and Compressing Trained Networks

Jul 17, 2025

Frequency-Dynamic Attention Modulation for Dense Prediction

Jul 16, 2025

Block-based Symmetric Pruning and Fusion for Efficient Vision Transformers

Jul 16, 2025

DART: Differentiable Dynamic Adaptive Region Tokenizer for Vision Transformer and Mamba

Jun 12, 2025

Token Transforming: A Unified and Training-Free Token Compression Framework for Vision Transformer Acceleration

Jun 06, 2025

Is Attention Required for Transformer Inference? Explore Function-preserving Attention Replacement

May 29, 2025

Stronger ViTs With Octic Equivariance

May 21, 2025

Lossless Token Merging Even Without Fine-Tuning in Vision Transformers

May 21, 2025