
Wenbo Yu

TraceSIR: A Multi-Agent Framework for Structured Analysis and Reporting of Agentic Execution Traces

Feb 28, 2026

RAVEL: Reasoning Agents for Validating and Evaluating LLM Text Synthesis

Feb 28, 2026

GeCo-SRT: Geometry-aware Continual Adaptation for Robotic Cross-Task Sim-to-Real Transfer

Feb 25, 2026

GLM-5: from Vibe Coding to Agentic Engineering

Feb 17, 2026

Beyond Literal Mapping: Benchmarking and Improving Non-Literal Translation Evaluation

Jan 12, 2026

Every Step Evolves: Scaling Reinforcement Learning for Trillion-Scale Thinking Model

Oct 21, 2025

GLM-4.5: Agentic, Reasoning, and Coding (ARC) Foundation Models

Aug 08, 2025

Editable-DeepSC: Reliable Cross-Modal Semantic Communications for Facial Editing

Nov 24, 2024

MIBench: A Comprehensive Benchmark for Model Inversion Attack and Defense

Oct 07, 2024

One Perturbation is Enough: On Generating Universal Adversarial Perturbations against Vision-Language Pre-training Models

Jun 08, 2024