Yun Shen

Rapid Adoption, Hidden Risks: The Dual Impact of Large Language Model Customization

Feb 15, 2024
Rui Zhang, Hongwei Li, Rui Wen, Wenbo Jiang, Yuan Zhang, Michael Backes, Yun Shen, Yang Zhang

FAKEPCD: Fake Point Cloud Detection via Source Attribution

Dec 18, 2023
Yiting Qu, Zhikun Zhang, Yun Shen, Michael Backes, Yang Zhang

Composite Backdoor Attacks Against Large Language Models

Oct 11, 2023
Hai Huang, Zhengyu Zhao, Michael Backes, Yun Shen, Yang Zhang

Prompt Backdoors in Visual Prompt Learning

Oct 11, 2023
Hai Huang, Zhengyu Zhao, Michael Backes, Yun Shen, Yang Zhang

You Only Prompt Once: On the Capabilities of Prompt Learning on Large Language Models to Tackle Toxic Content

Aug 10, 2023
Xinlei He, Savvas Zannettou, Yun Shen, Yang Zhang

"Do Anything Now": Characterizing and Evaluating In-The-Wild Jailbreak Prompts on Large Language Models

Aug 07, 2023
Xinyue Shen, Zeyuan Chen, Michael Backes, Yun Shen, Yang Zhang

Generated Graph Detection

Jun 13, 2023
Yihan Ma, Zhikun Zhang, Ning Yu, Xinlei He, Michael Backes, Yun Shen, Yang Zhang

A Plot is Worth a Thousand Words: Model Information Stealing Attacks via Scientific Plots

Feb 23, 2023
Boyang Zhang, Xinlei He, Yun Shen, Tianhao Wang, Yang Zhang

Backdoor Attacks Against Dataset Distillation

Jan 03, 2023
Yugeng Liu, Zheng Li, Michael Backes, Yun Shen, Yang Zhang

Amplifying Membership Exposure via Data Poisoning

Nov 01, 2022
Yufei Chen, Chao Shen, Yun Shen, Cong Wang, Yang Zhang
