Data Poisoning


Data poisoning is the deliberate manipulation of training data to degrade or subvert the performance of machine learning models.
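As a minimal illustration of the idea (not the method of any paper listed below), the sketch here trains a simple nearest-centroid classifier on a toy 1-D dataset, then injects mislabeled outliers into the training set; the injected points drag one class centroid across the other and collapse test accuracy. The dataset, classifier, and injection strategy are all hypothetical choices made for this example.

```python
# Toy data poisoning attack: mislabeled outliers injected into training data.

def train_centroids(data):
    """'Training': compute the mean feature value for each class label."""
    sums, counts = {}, {}
    for x, y in data:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(centroids, x):
    """Assign x to the class whose centroid is nearest."""
    return min(centroids, key=lambda y: abs(x - centroids[y]))

def accuracy(centroids, data):
    return sum(predict(centroids, x) == y for x, y in data) / len(data)

# Clean training data: class 0 occupies [0, 10), class 1 occupies [10, 20).
clean_train = [(float(i), 0) for i in range(10)] + \
              [(float(i), 1) for i in range(10, 20)]
test_set = list(clean_train)

# Poisoning: inject far-away points carrying the *wrong* label (class 0),
# pulling the class-0 centroid past the class-1 centroid.
poisoned_train = clean_train + [(30.0, 0)] * 10

clean_acc = accuracy(train_centroids(clean_train), test_set)
poisoned_acc = accuracy(train_centroids(poisoned_train), test_set)
print(f"clean accuracy:    {clean_acc:.2f}")    # 1.00
print(f"poisoned accuracy: {poisoned_acc:.2f}") # 0.30
```

Even this crude attack drops accuracy from 100% to 30% on clean test data; the papers below study far subtler variants (backdoors, federated-learning poisoning, recommendation poisoning) and defenses against them.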

Diversity-aware Dual-promotion Poisoning Attack on Sequential Recommendation

Apr 09, 2025

OPUS-VFL: Incentivizing Optimal Privacy-Utility Tradeoffs in Vertical Federated Learning

Apr 22, 2025

Secure Transfer Learning: Training Clean Models Against Backdoor in (Both) Pre-trained Encoders and Downstream Datasets

Apr 16, 2025

Data Poisoning in Deep Learning: A Survey

Mar 27, 2025

Sky of Unlearning (SoUL): Rewiring Federated Machine Unlearning via Selective Pruning

Apr 02, 2025

Exploiting Meta-Learning-based Poisoning Attacks for Graph Link Prediction

Apr 08, 2025

Clean Image May be Dangerous: Data Poisoning Attacks Against Deep Hashing

Mar 27, 2025

Two Heads Are Better than One: Model-Weight and Latent-Space Analysis for Federated Learning on Non-iid Data against Poisoning Attacks

Mar 30, 2025

Propaganda via AI? A Study on Semantic Backdoors in Large Language Models

Apr 15, 2025

WeiDetect: Weibull Distribution-Based Defense against Poisoning Attacks in Federated Learning for Network Intrusion Detection Systems

Apr 06, 2025