Haibin Zheng

CertPri: Certifiable Prioritization for Deep Neural Networks via Movement Cost in Feature Space
Jul 18, 2023
Haibin Zheng, Jinyin Chen, Haibo Jin

AdvCheck: Characterizing Adversarial Examples via Local Gradient Checking
Mar 25, 2023
Ruoxi Chen, Haibo Jin, Jinyin Chen, Haibin Zheng

Edge Deep Learning Model Protection via Neuron Authorization
Mar 23, 2023
Jinyin Chen, Haibin Zheng, Tao Liu, Rongchang Li, Yao Cheng, Xuhong Zhang, Shouling Ji

FedRight: An Effective Model Copyright Protection for Federated Learning
Mar 18, 2023
Jinyin Chen, Mingjun Li, Haibin Zheng

Motif-Backdoor: Rethinking the Backdoor Attack on Graph Neural Networks via Motifs
Oct 25, 2022
Haibin Zheng, Haiyang Xiong, Jinyin Chen, Haonan Ma, Guohan Huang

Link-Backdoor: Backdoor Attack on Link Prediction via Node Injection
Aug 14, 2022
Haibin Zheng, Haiyang Xiong, Haonan Ma, Guohan Huang, Jinyin Chen

Is Multi-Modal Necessarily Better? Robustness Evaluation of Multi-modal Fake News Detection
Jun 17, 2022
Jinyin Chen, Chengyu Jia, Haibin Zheng, Ruoxi Chen, Chenbo Fu

Rethinking the Defense Against Free-rider Attack From the Perspective of Model Weight Evolving Frequency
Jun 11, 2022
Jinyin Chen, Mingjun Li, Tao Liu, Haibin Zheng, Yao Cheng, Changting Lin

GAIL-PT: A Generic Intelligent Penetration Testing Framework with Generative Adversarial Imitation Learning
Apr 05, 2022
Jinyin Chen, Shulong Hu, Haibin Zheng, Changyou Xing, Guomin Zhang

DeepSensor: Deep Learning Testing Framework Based on Neuron Sensitivity
Feb 12, 2022
Haibo Jin, Ruoxi Chen, Haibin Zheng, Jinyin Chen, Zhenguang Liu, Qi Xuan, Yue Yu, Yao Cheng