Hongning Wang

GLM-4.5: Agentic, Reasoning, and Coding (ARC) Foundation Models (Aug 08, 2025)

JPS: Jailbreak Multimodal Large Language Models with Collaborative Visual Perturbation and Textual Steering (Aug 07, 2025)

Beyond Nash Equilibrium: Bounded Rationality of LLMs and Humans in Strategic Decision-Making (Jun 11, 2025)

How Should We Enhance the Safety of Large Reasoning Models: An Empirical Study (May 21, 2025)

Be Careful When Fine-tuning On Open-Source LLMs: Your Fine-tuning Data Could Be Secretly Stolen! (May 21, 2025)

ShieldVLM: Safeguarding the Multimodal Implicit Toxicity via Deliberative Reasoning with LVLMs (May 20, 2025)

Crisp: Cognitive Restructuring of Negative Thoughts through Multi-turn Supportive Dialogues (Apr 24, 2025)

VPO: Aligning Text-to-Video Generation Models with Prompt Optimization (Mar 26, 2025)

Intelligence Test (Feb 26, 2025)

LongSafety: Evaluating Long-Context Safety of Large Language Models (Feb 24, 2025)