
Xuyang Liu

IPCV: Information-Preserving Compression for MLLM Visual Encoders

Dec 21, 2025

Mixing Importance with Diversity: Joint Optimization for KV Cache Compression in Large Vision-Language Models

Oct 23, 2025

AI for Service: Proactive Assistance with AI Glasses

Oct 16, 2025

Shifting AI Efficiency From Model-Centric to Data-Centric Compression

May 25, 2025

Video Compression Commander: Plug-and-Play Inference Acceleration for Video Large Language Models

May 20, 2025

Seeing Sarcasm Through Different Eyes: Analyzing Multimodal Sarcasm Perception in Large Vision-Language Models

Mar 15, 2025

Compression with Global Guidance: Towards Training-free High-Resolution MLLMs Acceleration

Jan 09, 2025

Rethinking Token Reduction in MLLMs: Towards a Unified Paradigm for Training-Free Acceleration

Nov 26, 2024

Gnothi Seauton: Empowering Faithful Self-Interpretability in Black-Box Models

Oct 29, 2024

Accelerating Diffusion Transformers with Token-wise Feature Caching

Oct 14, 2024