Xiaogeng Liu

JailBreakV-28K: A Benchmark for Assessing the Robustness of MultiModal Large Language Models against Jailbreak Attacks

Apr 18, 2024
Weidi Luo, Siyuan Ma, Xiaogeng Liu, Xiaoyu Guo, Chaowei Xiao

Don't Listen To Me: Understanding and Exploring Jailbreak Prompts of Large Language Models

Mar 26, 2024
Zhiyuan Yu, Xiaogeng Liu, Shunning Liang, Zach Cameron, Chaowei Xiao, Ning Zhang

AdaShield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shield Prompting

Mar 14, 2024
Yu Wang, Xiaogeng Liu, Yu Li, Muhao Chen, Chaowei Xiao

Automatic and Universal Prompt Injection Attacks against Large Language Models

Mar 07, 2024
Xiaogeng Liu, Zhiyuan Yu, Yizhe Zhang, Ning Zhang, Chaowei Xiao

DeceptPrompt: Exploiting LLM-driven Code Generation via Adversarial Natural Language Instructions

Dec 12, 2023
Fangzhou Wu, Xiaogeng Liu, Chaowei Xiao

AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language Models

Oct 03, 2023
Xiaogeng Liu, Nan Xu, Muhao Chen, Chaowei Xiao

Why Does Little Robustness Help? Understanding Adversarial Transferability From Surrogate Training

Jul 19, 2023
Yechao Zhang, Shengshan Hu, Leo Yu Zhang, Junyu Shi, Minghui Li, Xiaogeng Liu, Wei Wan, Hai Jin

Detecting Backdoors During the Inference Stage Based on Corruption Robustness Consistency

Mar 27, 2023
Xiaogeng Liu, Minghui Li, Haoyu Wang, Shengshan Hu, Dengpan Ye, Hai Jin, Libing Wu, Chaowei Xiao

Protecting Facial Privacy: Generating Adversarial Identity Masks via Style-robust Makeup Transfer

Mar 28, 2022
Shengshan Hu, Xiaogeng Liu, Yechao Zhang, Minghui Li, Leo Yu Zhang, Hai Jin, Libing Wu
