Xufang Luo

A Comprehensive Information-Decomposition Analysis of Large Vision-Language Models

Mar 31, 2026

Why Does Self-Distillation (Sometimes) Degrade the Reasoning Capability of LLMs?

Mar 25, 2026

SortedRL: Accelerating RL Training for LLMs through Online Length-Aware Scheduling

Mar 24, 2026

Understanding Reasoning in LLMs through Strategic Information Allocation under Uncertainty

Mar 16, 2026

Exploratory Memory-Augmented LLM Agent via Hybrid On- and Off-Policy Optimization

Feb 26, 2026

pMoE: Prompting Diverse Experts Together Wins More in Visual Adaptation

Feb 26, 2026

$ΔL$ Normalization: Rethink Loss Aggregation in RLVR

Sep 09, 2025

Do Not Let Low-Probability Tokens Over-Dominate in RL for LLMs

May 19, 2025

Zoomer: Adaptive Image Focus Optimization for Black-box MLLM

Apr 30, 2025

MMInference: Accelerating Pre-filling for Long-Context VLMs via Modality-Aware Permutation Sparse Attention

Apr 22, 2025