
Shan Yu

Brain-Inspired Graph Multi-Agent Systems for LLM Reasoning

Mar 16, 2026

StegaFFD: Privacy-Preserving Face Forgery Detection via Fine-Grained Steganographic Domain Lifting

Mar 03, 2026

Autoregressive Visual Decoding from EEG Signals

Feb 26, 2026

Orthogonal Weight Modification Enhances Learning Scalability and Convergence Efficiency without Gradient Backpropagation

Feb 25, 2026

GeneralVLA: Generalizable Vision-Language-Action Models with Knowledge-Guided Trajectory Planning

Feb 04, 2026

A neural network for modeling human concept formation, understanding and communication

Jan 05, 2026

Visual Large Language Models Exhibit Human-Level Cognitive Flexibility in the Wisconsin Card Sorting Test

May 28, 2025

Flexible Tool Selection through Low-dimensional Attribute Alignment of Vision and Language

May 28, 2025

Prism: Unleashing GPU Sharing for Cost-Efficient Multi-LLM Serving

May 06, 2025

ConServe: Harvesting GPUs for Low-Latency and High-Throughput Large Language Model Serving

Oct 02, 2024