Shengwei An

LOTUS: Evasive and Resilient Backdoor Attacks through Sub-Partitioning
Mar 25, 2024
Siyuan Cheng, Guanhong Tao, Yingqi Liu, Guangyu Shen, Shengwei An, Shiwei Feng, Xiangzhe Xu, Kaiyuan Zhang, Shiqing Ma, Xiangyu Zhang

Rapid Optimization for Jailbreaking LLMs via Subconscious Exploitation and Echopraxia
Feb 08, 2024
Guangyu Shen, Siyuan Cheng, Kaiyuan Zhang, Guanhong Tao, Shengwei An, Lu Yan, Zhuo Zhang, Shiqing Ma, Xiangyu Zhang

Elijah: Eliminating Backdoors Injected in Diffusion Models via Distribution Shift
Nov 27, 2023
Shengwei An, Sheng-Yen Chou, Kaiyuan Zhang, Qiuling Xu, Guanhong Tao, Guangyu Shen, Siyuan Cheng, Shiqing Ma, Pin-Yu Chen, Tsung-Yi Ho, Xiangyu Zhang

BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense
Jan 16, 2023
Siyuan Cheng, Guanhong Tao, Yingqi Liu, Shengwei An, Xiangzhe Xu, Shiwei Feng, Guangyu Shen, Kaiyuan Zhang, Qiuling Xu, Shiqing Ma, Xiangyu Zhang

Backdoor Vulnerabilities in Normally Trained Deep Learning Models
Nov 29, 2022
Guanhong Tao, Zhenting Wang, Siyuan Cheng, Shiqing Ma, Shengwei An, Yingqi Liu, Guangyu Shen, Zhuo Zhang, Yunshu Mao, Xiangyu Zhang

FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning
Oct 23, 2022
Kaiyuan Zhang, Guanhong Tao, Qiuling Xu, Siyuan Cheng, Shengwei An, Yingqi Liu, Shiwei Feng, Guangyu Shen, Pin-Yu Chen, Shiqing Ma, Xiangyu Zhang

Confidence Matters: Inspecting Backdoors in Deep Neural Networks via Distribution Transfer
Aug 13, 2022
Tong Wang, Yuan Yao, Feng Xu, Miao Xu, Shengwei An, Ting Wang

DECK: Model Hardening for Defending Pervasive Backdoors
Jun 18, 2022
Guanhong Tao, Yingqi Liu, Siyuan Cheng, Shengwei An, Zhuo Zhang, Qiuling Xu, Guangyu Shen, Xiangyu Zhang

Constrained Optimization with Dynamic Bound-scaling for Effective NLP Backdoor Defense
Feb 11, 2022
Guangyu Shen, Yingqi Liu, Guanhong Tao, Qiuling Xu, Zhuo Zhang, Shengwei An, Shiqing Ma, Xiangyu Zhang

Backdoor Attack through Frequency Domain
Nov 30, 2021
Tong Wang, Yuan Yao, Feng Xu, Shengwei An, Hanghang Tong, Ting Wang
