Ziwei Wang

E2HiL: Entropy-Guided Sample Selection for Efficient Real-World Human-in-the-Loop Reinforcement Learning
Jan 27, 2026

RAICL: Retrieval-Augmented In-Context Learning for Vision-Language-Model Based EEG Seizure Detection
Jan 25, 2026

Backpropagation-Free Test-Time Adaptation for Lightweight EEG-Based Brain-Computer Interfaces
Jan 12, 2026

NORA-1.5: A Vision-Language-Action Model Trained using World Model- and Action-based Preference Rewards
Nov 18, 2025

History-Aware Reasoning for GUI Agents
Nov 12, 2025

ProBench: Benchmarking GUI Agents with Accurate Process Information
Nov 12, 2025

MAP-VLA: Memory-Augmented Prompting for Vision-Language-Action Model in Robotic Manipulation
Nov 12, 2025

10 Open Challenges Steering the Future of Vision-Language-Action Models
Nov 08, 2025

Towards Scalable Web Accessibility Audit with MLLMs as Copilots
Nov 05, 2025

SheetBrain: A Neuro-Symbolic Agent for Accurate Reasoning over Complex and Large Spreadsheets
Oct 22, 2025