Murali Annavaram

Efficient LLM Inference with I/O-Aware Partial KV Cache Recomputation

Nov 26, 2024

Characterizing Context Influence and Hallucination in Summarization

Oct 03, 2024

Adaptively Private Next-Token Prediction of Large Language Models

Oct 02, 2024

CADC: Encoding User-Item Interactions for Compressing Recommendation Model Training Data

Jul 11, 2024

Differentially Private Next-Token Prediction of Large Language Models

Apr 01, 2024

Ethos: Rectifying Language Models in Orthogonal Parameter Space

Apr 01, 2024

Edge Private Graph Neural Networks with Singular Value Perturbation

Mar 16, 2024

Differentially Private Knowledge Distillation via Synthetic Text Generation

Mar 01, 2024

Data Leakage via Access Patterns of Sparse Features in Deep Learning-based Recommendation Systems

Dec 12, 2022

MPC-Pipe: an Efficient Pipeline Scheme for Secure Multi-party Machine Learning Inference

Sep 27, 2022