Data Poisoning


Data poisoning is the manipulation of training data to degrade the performance or subvert the behavior of machine learning models.
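As a minimal illustration (a hypothetical toy example, not taken from any of the papers below), one of the simplest poisoning strategies is label flipping: an attacker corrupts the labels of some fraction of the training set so that a model trained on the poisoned data misclassifies clean test points. The dataset, 1-nearest-neighbour model, and 40% flip rate here are all made up for demonstration.

```python
# Toy sketch of label-flipping data poisoning (assumed setup, not from a paper):
# flip a fraction of training labels and measure the accuracy drop on clean data.

def predict_1nn(train, x):
    """1-nearest-neighbour prediction on 1-D features."""
    return min(train, key=lambda p: abs(p[0] - x))[1]

def accuracy(train, test):
    """Fraction of test points the 1-NN model classifies correctly."""
    return sum(predict_1nn(train, x) == y for x, y in test) / len(test)

def flip_labels(train, fraction):
    """Poisoning attack: flip the labels of the first `fraction` of points."""
    k = int(len(train) * fraction)
    return [(x, 1 - y) if i < k else (x, y) for i, (x, y) in enumerate(train)]

# Clean data: class 0 clustered near x = 0..4.5, class 1 near x = 8..12.5.
train = [(i * 0.5, 0) for i in range(10)] + [(8 + i * 0.5, 1) for i in range(10)]
test = [(1.0, 0), (2.0, 0), (9.0, 1), (11.0, 1)]

print(accuracy(train, test))                    # clean model: 1.0
print(accuracy(flip_labels(train, 0.4), test))  # poisoned model: 0.5
```

Flipping 40% of the labels halves the toy model's test accuracy; real attacks (and the defenses surveyed below) are far more subtle, targeting specific behaviors such as backdoors rather than overall accuracy.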

PIDP-Attack: Combining Prompt Injection with Database Poisoning Attacks on Retrieval-Augmented Generation Systems

Mar 26, 2026

DP^2-VL: Private Photo Dataset Protection by Data Poisoning for Vision-Language Models

Mar 25, 2026

AI Security in the Foundation Model Era: A Comprehensive Survey from a Unified Perspective

Mar 25, 2026

Towards Secure Retrieval-Augmented Generation: A Comprehensive Review of Threats, Defenses and Benchmarks

Mar 23, 2026

PoiCGAN: A Targeted Poisoning Based on Feature-Label Joint Perturbation in Federated Learning

Mar 24, 2026

Detection of adversarial intent in Human-AI teams using LLMs

Mar 21, 2026

Graph-Aware Text-Only Backdoor Poisoning for Text-Attributed Graphs

Mar 20, 2026

Detecting Data Poisoning in Code Generation LLMs via Black-Box, Vulnerability-Oriented Scanning

Mar 17, 2026

ARMOR: Adaptive Resilience Against Model Poisoning Attacks in Continual Federated Learning for Mobile Indoor Localization

Mar 20, 2026

STEP: Detecting Audio Backdoor Attacks via Stability-based Trigger Exposure Profiling

Mar 18, 2026