
Haohan Wang

From Hallucinations to Jailbreaks: Rethinking the Vulnerability of Large Foundation Models

May 30, 2025

Reasoning Can Hurt the Inductive Abilities of Large Language Models

May 30, 2025

SIPDO: Closed-Loop Prompt Optimization via Synthetic Data Feedback

May 26, 2025

Exploring the Vulnerability of the Content Moderation Guardrail in Large Language Models via Intent Manipulation

May 24, 2025

Beamforming-Codebook-Aware Channel Knowledge Map Construction for Multi-Antenna Systems

May 22, 2025

Socratic Chart: Cooperating Multiple Agents for Robust SVG Chart Understanding

Apr 14, 2025

Advances and Challenges in Foundation Agents: From Brain-Inspired Intelligence to Evolutionary, Collaborative, and Safe Systems

Mar 31, 2025

CrossWordBench: Evaluating the Reasoning Capabilities of LLMs and LVLMs with Controllable Puzzle Generation

Mar 30, 2025

IMPROVE: Iterative Model Pipeline Refinement and Optimization Leveraging LLM Agents

Feb 25, 2025

Examining Alignment of Large Language Models through Representative Heuristics: The Case of Political Stereotypes

Jan 27, 2025