
Pooria Taheri

Huff-LLM: End-to-End Lossless Compression for Efficient LLM Inference

Feb 02, 2025

The Hardware Impact of Quantization and Pruning for Weights in Spiking Neural Networks

Feb 08, 2023