Kaiyuan Zhang

LOTUS: Evasive and Resilient Backdoor Attacks through Sub-Partitioning

Mar 25, 2024
Siyuan Cheng, Guanhong Tao, Yingqi Liu, Guangyu Shen, Shengwei An, Shiwei Feng, Xiangzhe Xu, Kaiyuan Zhang, Shiqing Ma, Xiangyu Zhang

Rapid Optimization for Jailbreaking LLMs via Subconscious Exploitation and Echopraxia

Feb 08, 2024
Guangyu Shen, Siyuan Cheng, Kaiyuan Zhang, Guanhong Tao, Shengwei An, Lu Yan, Zhuo Zhang, Shiqing Ma, Xiangyu Zhang

Elijah: Eliminating Backdoors Injected in Diffusion Models via Distribution Shift

Nov 27, 2023
Shengwei An, Sheng-Yen Chou, Kaiyuan Zhang, Qiuling Xu, Guanhong Tao, Guangyu Shen, Siyuan Cheng, Shiqing Ma, Pin-Yu Chen, Tsung-Yi Ho, Xiangyu Zhang

GNN4EEG: A Benchmark and Toolkit for Electroencephalography Classification with Graph Neural Network

Sep 27, 2023
Kaiyuan Zhang, Ziyi Ye, Qingyao Ai, Xiaohui Xie, Yiqun Liu

ParaFuzz: An Interpretability-Driven Technique for Detecting Poisoned Samples in NLP

Aug 04, 2023
Lu Yan, Zhuo Zhang, Guanhong Tao, Kaiyuan Zhang, Xuan Chen, Guangyu Shen, Xiangyu Zhang

Detecting Backdoors in Pre-trained Encoders

Mar 23, 2023
Shiwei Feng, Guanhong Tao, Siyuan Cheng, Guangyu Shen, Xiangzhe Xu, Yingqi Liu, Kaiyuan Zhang, Shiqing Ma, Xiangyu Zhang

BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense

Jan 16, 2023
Siyuan Cheng, Guanhong Tao, Yingqi Liu, Shengwei An, Xiangzhe Xu, Shiwei Feng, Guangyu Shen, Kaiyuan Zhang, Qiuling Xu, Shiqing Ma, Xiangyu Zhang

FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning

Oct 23, 2022
Kaiyuan Zhang, Guanhong Tao, Qiuling Xu, Siyuan Cheng, Shengwei An, Yingqi Liu, Shiwei Feng, Guangyu Shen, Pin-Yu Chen, Shiqing Ma, Xiangyu Zhang