Haohan Wang

Crafting Adversarial Inputs for Large Vision-Language Models Using Black-Box Optimization

Jan 08, 2026

Multi-Turn Jailbreaking of Aligned LLMs via Lexical Anchor Tree Search

Jan 06, 2026

Synthetic Data-Driven Prompt Tuning for Financial QA over Tables and Documents

Nov 14, 2025

GenoMAS: A Multi-Agent Framework for Scientific Discovery via Code-Driven Gene Expression Analysis

Jul 28, 2025

GuardVal: Dynamic Large Language Model Jailbreak Evaluation for Comprehensive Safety Testing

Jul 10, 2025

Tracing LLM Reasoning Processes with Strategic Games: A Framework for Planning, Revision, and Resource-Constrained Decision Making

Jun 13, 2025

InfoFlood: Jailbreaking Large Language Models with Information Overload

Jun 13, 2025

PREMISE: Scalable and Strategic Prompt Optimization for Efficient Mathematical Reasoning in Large Models

Jun 12, 2025

Reasoning Can Hurt the Inductive Abilities of Large Language Models

May 30, 2025

From Hallucinations to Jailbreaks: Rethinking the Vulnerability of Large Foundation Models

May 30, 2025