
Shiyu Chang

Ares: Adaptive Reasoning Effort Selection for Efficient LLM Agents

Mar 09, 2026

RetouchIQ: MLLM Agents for Instruction-Based Image Retouching with Generalist Reward

Feb 19, 2026

Learning from Online Videos at Inference Time for Computer-Use Agents

Nov 06, 2025

Rethinking the Text-Vision Reasoning Imbalance in MLLMs through the Lens of Training Recipes

Oct 26, 2025

A Hierarchical Probabilistic Framework for Incremental Knowledge Tracing in Classroom Settings

Jun 11, 2025

Collision- and Reachability-Aware Multi-Robot Control with Grounded LLM Planners

May 26, 2025

Defending LLM Watermarking Against Spoofing Attacks with Contrastive Representation Learning

Apr 10, 2025

ThinkPrune: Pruning Long Chain-of-Thought of LLMs via Reinforcement Learning

Apr 02, 2025

KVLink: Accelerating Large Language Models via Efficient KV Cache Reuse

Feb 21, 2025

Instruction-Following Pruning for Large Language Models

Jan 07, 2025