Siyuan Liang

Does Few-shot Learning Suffer from Backdoor Attacks?

Dec 31, 2023
Xinwei Liu, Xiaojun Jia, Jindong Gu, Yuan Xun, Siyuan Liang, Xiaochun Cao


Pre-trained Trojan Attacks for Visual Recognition

Dec 23, 2023
Aishan Liu, Xinwei Zhang, Yisong Xiao, Yuguang Zhou, Siyuan Liang, Jiakai Wang, Xianglong Liu, Xiaochun Cao, Dacheng Tao


SA-Attack: Improving Adversarial Transferability of Vision-Language Pre-training Models via Self-Augmentation

Dec 08, 2023
Bangyan He, Xiaojun Jia, Siyuan Liang, Tianrui Lou, Yang Liu, Xiaochun Cao


BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning

Nov 20, 2023
Siyuan Liang, Mingli Zhu, Aishan Liu, Baoyuan Wu, Xiaochun Cao, Ee-Chien Chang


Improving Adversarial Transferability by Stable Diffusion

Nov 18, 2023
Jiayang Liu, Siyu Zhu, Siyuan Liang, Jie Zhang, Han Fang, Weiming Zhang, Ee-Chien Chang


Face Encryption via Frequency-Restricted Identity-Agnostic Attacks

Aug 25, 2023
Xin Dong, Rui Wang, Siyuan Liang, Aishan Liu, Lihua Jing

(4 figures)

Isolation and Induction: Training Robust Deep Neural Networks against Model Stealing Attacks

Aug 03, 2023
Jun Guo, Aishan Liu, Xingyu Zheng, Siyuan Liang, Yisong Xiao, Yichao Wu, Xianglong Liu

(4 figures)