System Prompt Poisoning: Persistent Attacks on Large Language Models Beyond User Injection

May 10, 2025
[Figures 1–4 from the paper]

View paper on arXiv