Murali Emani

AI Benchmark Democratization and Carpentry

Dec 12, 2025

ImageNet-Think-250K: A Large-Scale Synthetic Dataset for Multimodal Reasoning for Vision Language Models

Oct 02, 2025

PagedEviction: Structured Block-wise KV Cache Pruning for Efficient Large Language Model Inference

Sep 04, 2025

MoE-Inference-Bench: Performance Evaluation of Mixture of Expert Large Language and Vision Models

Aug 24, 2025

LangVision-LoRA-NAS: Neural Architecture Search for Variable LoRA Rank in Vision Language Models

Aug 17, 2025

BaKlaVa -- Budgeted Allocation of KV cache for Long-context Inference

Feb 18, 2025

LLM-Inference-Bench: Inference Benchmarking of Large Language Models on AI Accelerators

Oct 31, 2024

DeepSpeed4Science Initiative: Enabling Large-Scale Scientific Discovery through Sophisticated AI System Technologies

Oct 11, 2023

A Comprehensive Performance Study of Large Language Models on Novel AI Accelerators

Oct 06, 2023

Data Race Detection Using Large Language Models

Aug 15, 2023