Wanlei Zhou

Rethinking Bias in Generative Data Augmentation for Medical AI: a Frequency Recalibration Method

Nov 15, 2025

Graph Unlearning: Efficient Node Removal in Graph Neural Networks

Sep 05, 2025

Bias Amplification in RAG: Poisoning Knowledge Retrieval to Steer LLMs

Jun 13, 2025

Chain-of-Lure: A Synthetic Narrative-Driven Approach to Compromise Large Language Models

May 23, 2025

Safe and Reliable Diffusion Models via Subspace Projection

Mar 21, 2025

Do Fairness Interventions Come at the Cost of Privacy: Evaluations for Binary Classifiers

Mar 11, 2025

Data Duplication: A Novel Multi-Purpose Attack Paradigm in Machine Unlearning

Jan 28, 2025

Data-Free Model-Related Attacks: Unleashing the Potential of Generative AI

Jan 28, 2025

AFed: Algorithmic Fair Federated Learning

Jan 06, 2025

Vertical Federated Unlearning via Backdoor Certification

Dec 16, 2024