Song Han

University of Connecticut

LServe: Efficient Long-sequence LLM Serving with Unified Sparse Attention

Feb 20, 2025

Twilight: Adaptive Attention Sparsity with Hierarchical Top-$p$ Pruning

Feb 06, 2025

Sparse VideoGen: Accelerating Video Diffusion Transformers with Spatial-Temporal Sparsity

Feb 03, 2025

SANA 1.5: Efficient Scaling of Training-Time and Inference-Time Compute in Linear Diffusion Transformer

Jan 30, 2025

GR-WiFi: A GNU Radio based WiFi Platform with Single-User and Multi-User MIMO Capability

Jan 10, 2025

NVILA: Efficient Frontier Visual Language Models

Dec 05, 2024

SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit Diffusion Models

Nov 07, 2024

COAT: Compressing Optimizer states and Activation for Memory-Efficient FP8 Training

Oct 25, 2024

SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformers

Oct 15, 2024

DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads

Oct 14, 2024