SpikingMamba: Towards Energy-Efficient Large Language Models via Knowledge Distillation from Mamba

Oct 06, 2025
Figures 1–4: SpikingMamba: Towards Energy-Efficient Large Language Models via Knowledge Distillation from Mamba

View paper on arXiv