
Lei Zhou

Green-Red Watermarking for Recommender Systems

Apr 26, 2026

OneVL: One-Step Latent Reasoning and Planning with Vision-Language Explanation

Apr 20, 2026

Learning 3D Reconstruction with Priors in Test Time

Apr 04, 2026

Let Your Image Move with Your Motion! -- Implicit Multi-Object Multi-Motion Transfer

Mar 01, 2026

Improving LLM Reasoning with Homophily-aware Structural and Semantic Text-Attributed Graph Compression

Jan 13, 2026

The RoboSense Challenge: Sense Anything, Navigate Anywhere, Adapt Across Platforms

Jan 08, 2026

Pinching Antenna-aided NOMA Systems with Internal Eavesdropping

Dec 25, 2025

Human or LLM as Standardized Patients? A Comparative Study for Medical Education

Nov 12, 2025

Performance Analysis of Wireless-Powered Pinching Antenna Systems

Nov 05, 2025

Scalable Vision-Language-Action Model Pretraining for Robotic Manipulation with Real-Life Human Activity Videos

Oct 24, 2025