Haibo Jin

Reasoning Can Hurt the Inductive Abilities of Large Language Models

May 30, 2025

From Hallucinations to Jailbreaks: Rethinking the Vulnerability of Large Foundation Models

May 30, 2025

Exploring the Vulnerability of the Content Moderation Guardrail in Large Language Models via Intent Manipulation

May 24, 2025

Advances and Challenges in Foundation Agents: From Brain-Inspired Intelligence to Evolutionary, Collaborative, and Safe Systems

Mar 31, 2025

Revolve: Optimizing AI Systems by Tracking Response Evolution in Textual Optimization

Dec 04, 2024

Large Language Model with Region-guided Referring and Grounding for CT Report Generation

Nov 23, 2024

JailbreakZoo: Survey, Landscapes, and Horizons in Jailbreaking Large Language and Vision-Language Models

Jun 26, 2024

Jailbreaking Large Language Models Against Moderation Guardrails via Cipher Characters

May 30, 2024

Design as Desired: Utilizing Visual Question Answering for Multimodal Pre-training

Apr 08, 2024

Rethinking Self-training for Semi-supervised Landmark Detection: A Selection-free Approach

Apr 06, 2024