Boju Chen

MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression

Jun 21, 2024