Shaokui Wei

Unveiling and Mitigating Backdoor Vulnerabilities based on Unlearning Weight Changes and Backdoor Activeness

May 30, 2024

Mitigating Backdoor Attack by Injecting Proactive Defensive Backdoor

May 25, 2024

BackdoorBench: A Comprehensive Benchmark and Analysis of Backdoor Learning

Jan 26, 2024

Defenses in Adversarial Machine Learning: A Survey

Dec 13, 2023

VDC: Versatile Data Cleanser for Detecting Dirty Samples via Visual-Linguistic Inconsistency

Sep 28, 2023

Shared Adversarial Unlearning: Backdoor Mitigation by Unlearning Shared Adversarial Examples

Jul 20, 2023

Boosting Backdoor Attack with A Learnable Poisoning Sample Selection Strategy

Jul 14, 2023

Neural Polarizer: A Lightweight and Effective Backdoor Defense via Purifying Poisoned Features

Jun 29, 2023

Enhancing Fine-Tuning Based Backdoor Defense with Sharpness-Aware Minimization

Apr 24, 2023

Mean Parity Fair Regression in RKHS

Feb 21, 2023