
Malcolm Chadwick

EDiT: Efficient Diffusion Transformers with Linear Compressed Attention

Mar 20, 2025

Upcycling Text-to-Image Diffusion Models for Multi-Task Capabilities

Mar 14, 2025

Cross-Attention is all you need: Real-Time Streaming Transformers for Personalised Speech Enhancement

Nov 08, 2022