
Vashisth Tiwari

Energy Considerations of Large Language Model Inference and Efficiency Optimizations

Apr 24, 2025

MagicDec: Breaking the Latency-Throughput Tradeoff for Long Context Generation with Speculative Decoding

Aug 21, 2024