Yucheng Li

SecurityLingua: Efficient Defense of LLM Jailbreak Attacks via Security-Aware Prompt Compression

Jun 15, 2025

R-KV: Redundancy-aware KV Cache Compression for Training-Free Reasoning Models Acceleration

May 30, 2025

Compressive Fourier-Domain Intensity Coupling (C-FOCUS) enables near-millimeter deep imaging in the intact mouse brain in vivo

May 27, 2025

MMInference: Accelerating Pre-filling for Long-Context VLMs via Modality-Aware Permutation Sparse Attention

Apr 22, 2025

LongEval: A Comprehensive Analysis of Long-Text Generation Through a Plan-based Paradigm

Feb 26, 2025

SCBench: A KV Cache-Centric Analysis of Long-Context Methods

Dec 13, 2024

On the Rigour of Scientific Writing: Criteria, Analysis, and Insights

Oct 07, 2024

Data Contamination Report from the 2024 CONDA Shared Task

Jul 31, 2024

Fluorescence Diffraction Tomography using Explicit Neural Fields

Jul 23, 2024

MInference 1.0: Accelerating Pre-filling for Long-Context LLMs via Dynamic Sparse Attention

Jul 02, 2024