
Haibin Zheng

Is Multi-Modal Necessarily Better? Robustness Evaluation of Multi-modal Fake News Detection

Jun 17, 2022

Rethinking the Defense Against Free-rider Attack From the Perspective of Model Weight Evolving Frequency

Jun 11, 2022

GAIL-PT: A Generic Intelligent Penetration Testing Framework with Generative Adversarial Imitation Learning

Apr 05, 2022

DeepSensor: Deep Learning Testing Framework Based on Neuron Sensitivity

Feb 12, 2022

NeuronFair: Interpretable White-Box Fairness Testing through Biased Neuron Identification

Dec 25, 2021

NIP: Neuron-level Inverse Perturbation Against Adversarial Attacks

Dec 24, 2021

Dyn-Backdoor: Backdoor Attack on Dynamic Link Prediction

Oct 08, 2021

Salient Feature Extractor for Adversarial Defense on Deep Neural Networks

May 14, 2021

DeepPoison: Feature Transfer Based Stealthy Poisoning Attack

Jan 08, 2021

ROBY: Evaluating the Robustness of a Deep Model by its Decision Boundaries

Dec 18, 2020