Sujian Li

PaperBanana: Automating Academic Illustration for AI Scientists

Jan 30, 2026

JudgeRLVR: Judge First, Generate Second for Efficient Reasoning

Jan 13, 2026

DocLens: A Tool-Augmented Multi-Agent Framework for Long Visual Document Understanding

Nov 14, 2025

FinRAGBench-V: A Benchmark for Multimodal RAG with Visual Citation in the Financial Domain

May 23, 2025

KNN-SSD: Enabling Dynamic Self-Speculative Decoding via Nearest Neighbor Layer Set Optimization

May 22, 2025

MPO: Boosting LLM Agents with Meta Plan Optimization

Mar 04, 2025

Chain-of-Thought Matters: Improving Long-Context Language Models with Reasoning Path Supervision

Feb 28, 2025

LongAttn: Selecting Long-context Training Data via Token-level Attention

Feb 24, 2025

More Tokens, Lower Precision: Towards the Optimal Token-Precision Trade-off in KV Cache Compression

Dec 17, 2024

VLRewardBench: A Challenging Benchmark for Vision-Language Generative Reward Models

Nov 26, 2024