
Yingchun Wang

OpenRT: An Open-Source Red Teaming Framework for Multimodal LLMs

Jan 04, 2026

UniMark: Artificial Intelligence Generated Content Identification Toolkit

Dec 13, 2025

Evolve the Method, Not the Prompts: Evolutionary Synthesis of Jailbreak Attacks on LLMs

Nov 16, 2025

Beyond Correctness: Confidence-Aware Reward Modeling for Enhancing Large Language Model Reasoning

Nov 09, 2025

A Rigorous Benchmark with Multidimensional Evaluation for Deep Research Agents: From Answers to Reports

Oct 02, 2025

SafeWork-R1: Coevolving Safety and Intelligence under the AI-45$^{\circ}$ Law

Jul 24, 2025

JailBound: Jailbreaking Internal Safety Boundaries of Vision-Language Models

May 26, 2025

SafeVid: Toward Safety Aligned Video Large Multimodal Models

May 17, 2025

IDMR: Towards Instance-Driven Precise Visual Correspondence in Multimodal Retrieval

Apr 01, 2025

A Mousetrap: Fooling Large Reasoning Models for Jailbreak with Chain of Iterative Chaos

Feb 19, 2025