Zhan Qin

LLM-Guided Multi-View Hypergraph Learning for Human-Centric Explainable Recommendation

Jan 16, 2024
Zhixuan Chu, Yan Wang, Qing Cui, Longfei Li, Wenqing Chen, Sheng Li, Zhan Qin, Kui Ren

Certified Minimax Unlearning with Generalization Rates and Deletion Capacity

Dec 16, 2023
Jiaqi Liu, Jian Lou, Zhan Qin, Kui Ren

Towards Sample-specific Backdoor Attack with Clean Labels via Attribute Trigger

Dec 03, 2023
Yiming Li, Mingyan Zhu, Junfeng Guo, Tao Wei, Shu-Tao Xia, Zhan Qin

Pitfalls in Language Models for Code Intelligence: A Taxonomy and Survey

Oct 27, 2023
Xinyu She, Yue Liu, Yanjie Zhao, Yiling He, Li Li, Chakkrit Tantithamthavorn, Zhan Qin, Haoyu Wang

PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models

Oct 19, 2023
Hongwei Yao, Jian Lou, Zhan Qin

SurrogatePrompt: Bypassing the Safety Filter of Text-To-Image Models via Substitution

Sep 25, 2023
Zhongjie Ba, Jieming Zhong, Jiachen Lei, Peng Cheng, Qinglong Wang, Zhan Qin, Zhibo Wang, Kui Ren

RemovalNet: DNN Fingerprint Removal Attacks

Aug 31, 2023
Hongwei Yao, Zheng Li, Kunzhe Huang, Jian Lou, Zhan Qin, Kui Ren

FINER: Enhancing State-of-the-art Classifiers with Feature Attribution to Facilitate Security Analysis

Aug 10, 2023
Yiling He, Jian Lou, Zhan Qin, Kui Ren

FDINet: Protecting against DNN Model Extraction via Feature Distortion Index

Jun 22, 2023
Hongwei Yao, Zheng Li, Haiqin Weng, Feng Xue, Kui Ren, Zhan Qin
