
Jinyin Chen

Senior Member, IEEE

Robust Knowledge Distillation Based on Feature Variance Against Backdoored Teacher Model

Jun 01, 2024

GUARD: Role-playing to Generate Natural-language Jailbreakings to Test Guideline Adherence of Large Language Models

Feb 05, 2024

CertPri: Certifiable Prioritization for Deep Neural Networks via Movement Cost in Feature Space

Jul 18, 2023

AdvCheck: Characterizing Adversarial Examples via Local Gradient Checking

Mar 25, 2023

Edge Deep Learning Model Protection via Neuron Authorization

Mar 23, 2023

FedRight: An Effective Model Copyright Protection for Federated Learning

Mar 18, 2023

Motif-Backdoor: Rethinking the Backdoor Attack on Graph Neural Networks via Motifs

Oct 25, 2022

Link-Backdoor: Backdoor Attack on Link Prediction via Node Injection

Aug 14, 2022

Is Multi-Modal Necessarily Better? Robustness Evaluation of Multi-modal Fake News Detection

Jun 17, 2022

Rethinking the Defense Against Free-rider Attack From the Perspective of Model Weight Evolving Frequency

Jun 11, 2022