
Zedian Shao

Leave My Images Alone: Preventing Multi-Modal Large Language Models from Analyzing Images via Visual Prompt Injection

Apr 10, 2026

A Critical Evaluation of Defenses against Prompt Injection Attacks

May 23, 2025

EnvInjection: Environmental Prompt Injection Attack to Multi-modal Web Agents

May 16, 2025

Making LLMs Vulnerable to Prompt Injection via Poisoning Alignment

Oct 18, 2024

Automatically Generating Visual Hallucination Test Cases for Multimodal Large Language Models

Oct 15, 2024

Refusing Safe Prompts for Multi-modal Large Language Models

Jul 12, 2024