
Haibo Jin

Agent Primitives: Reusable Latent Building Blocks for Multi-Agent Systems

Feb 03, 2026

Controlling Output Rankings in Generative Engines for LLM-based Search

Feb 03, 2026

Now You Hear Me: Audio Narrative Attacks Against Large Audio-Language Models

Jan 30, 2026

Crafting Adversarial Inputs for Large Vision-Language Models Using Black-Box Optimization

Jan 08, 2026

GuardVal: Dynamic Large Language Model Jailbreak Evaluation for Comprehensive Safety Testing

Jul 10, 2025

InfoFlood: Jailbreaking Large Language Models with Information Overload

Jun 13, 2025

From Hallucinations to Jailbreaks: Rethinking the Vulnerability of Large Foundation Models

May 30, 2025

Reasoning Can Hurt the Inductive Abilities of Large Language Models

May 30, 2025

Exploring the Vulnerability of the Content Moderation Guardrail in Large Language Models via Intent Manipulation

May 24, 2025

Advances and Challenges in Foundation Agents: From Brain-Inspired Intelligence to Evolutionary, Collaborative, and Safe Systems

Mar 31, 2025