
Yuxuan Zhang

GLM-5: from Vibe Coding to Agentic Engineering

Feb 17, 2026

Discovering Semantic Latent Structures in Psychological Scales: A Response-Free Pathway to Efficient Simplification

Feb 13, 2026

Know Your Intent: An Autonomous Multi-Perspective LLM Agent Framework for DeFi User Transaction Intent Mining

Nov 19, 2025

A Rigorous Benchmark with Multidimensional Evaluation for Deep Research Agents: From Answers to Reports

Oct 02, 2025

LENS: Learning to Segment Anything with Unified Reinforced Reasoning

Aug 19, 2025

GLM-4.5: Agentic, Reasoning, and Coding (ARC) Foundation Models

Aug 08, 2025

Protenix-Mini: Efficient Structure Predictor via Compact Architecture, Few-Step Diffusion and Switchable pLM

Jul 16, 2025

General Modular Harness for LLM Agents in Multi-Turn Gaming Environments

Jul 15, 2025

Stable-Hair v2: Real-World Hair Transfer via Multiple-View Diffusion Model

Jul 10, 2025

GLM-4.1V-Thinking: Towards Versatile Multimodal Reasoning with Scalable Reinforcement Learning

Jul 02, 2025