Qiuling Xu

Elijah: Eliminating Backdoors Injected in Diffusion Models via Distribution Shift
Nov 27, 2023
Shengwei An, Sheng-Yen Chou, Kaiyuan Zhang, Qiuling Xu, Guanhong Tao, Guangyu Shen, Siyuan Cheng, Shiqing Ma, Pin-Yu Chen, Tsung-Yi Ho, Xiangyu Zhang

POSIT: Promotion of Semantic Item Tail via Adversarial Learning
Aug 07, 2023
Qiuling Xu, Pannaga Shivaswamy, Xiangyu Zhang

BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense
Jan 16, 2023
Siyuan Cheng, Guanhong Tao, Yingqi Liu, Shengwei An, Xiangzhe Xu, Shiwei Feng, Guangyu Shen, Kaiyuan Zhang, Qiuling Xu, Shiqing Ma, Xiangyu Zhang

Figure 1 for BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense
Figure 2 for BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense
Figure 3 for BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense
Figure 4 for BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense
Viaarxiv icon

FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning
Oct 23, 2022
Kaiyuan Zhang, Guanhong Tao, Qiuling Xu, Siyuan Cheng, Shengwei An, Yingqi Liu, Shiwei Feng, Guangyu Shen, Pin-Yu Chen, Shiqing Ma, Xiangyu Zhang

DECK: Model Hardening for Defending Pervasive Backdoors
Jun 18, 2022
Guanhong Tao, Yingqi Liu, Siyuan Cheng, Shengwei An, Zhuo Zhang, Qiuling Xu, Guangyu Shen, Xiangyu Zhang

Constrained Optimization with Dynamic Bound-scaling for Effective NLP Backdoor Defense
Feb 11, 2022
Guangyu Shen, Yingqi Liu, Guanhong Tao, Qiuling Xu, Zhuo Zhang, Shengwei An, Shiqing Ma, Xiangyu Zhang

Backdoor Scanning for Deep Neural Networks through K-Arm Optimization
Feb 09, 2021
Guangyu Shen, Yingqi Liu, Guanhong Tao, Shengwei An, Qiuling Xu, Siyuan Cheng, Shiqing Ma, Xiangyu Zhang

Fundamental Limits of Adversarial Learning
Jul 01, 2020
Kevin Bello, Qiuling Xu, Jean Honorio

D-square-B: Deep Distribution Bound for Natural-looking Adversarial Attack
Jun 12, 2020
Qiuling Xu, Guanhong Tao, Xiangyu Zhang

Towards Feature Space Adversarial Attack
Apr 26, 2020
Qiuling Xu, Guanhong Tao, Siyuan Cheng, Lin Tan, Xiangyu Zhang
