
Ayan Sengupta

From Images to Words: Efficient Cross-Modal Knowledge Distillation to Language Models from Black-box Teachers

Mar 11, 2026

Understanding the Physics of Key-Value Cache Compression for LLMs through Attention Dynamics

Mar 02, 2026

Value-Guided KV Compression for LLMs via Approximated CUR Decomposition

Sep 18, 2025

First Finish Search: Efficient Test-Time Scaling in Large Language Models

May 23, 2025

On the Generalization vs Fidelity Paradox in Knowledge Distillation

May 21, 2025

Position: Enough of Scaling LLMs! Let's Focus on Downscaling

May 05, 2025

Compression Laws for Large Language Models

Apr 06, 2025

How to Upscale Neural Networks with Scaling Law? A Survey and Practical Guidelines

Feb 17, 2025

You Only Prune Once: Designing Calibration-Free Model Compression With Policy Learning

Jan 25, 2025

Robust and Efficient Fine-tuning of LLMs with Bayesian Reparameterization of Low-Rank Adaptation

Nov 07, 2024