Data Poisoning


Data poisoning is the manipulation of a machine learning model's training data in order to degrade the model's performance or implant attacker-controlled behavior.
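As a minimal illustration, one of the simplest poisoning attacks is label flipping: relabeling a fraction of the training examples so the trained model's decision boundary shifts. The sketch below uses toy 1-D data and a nearest-centroid-style threshold classifier; all names and parameters are hypothetical, not drawn from any of the papers listed here.

```python
import random

def flip_labels(y, fraction, rng, target_class=0):
    """Label-flipping poisoning: relabel a fraction of one class's
    examples as the other class. Targeting a single class shifts the
    learned boundary instead of cancelling out symmetrically."""
    y = list(y)
    idx = [i for i, label in enumerate(y) if label == target_class]
    for i in rng.sample(idx, int(fraction * len(idx))):
        y[i] = 1 - y[i]
    return y

rng = random.Random(0)
# Toy 1-D, 2-class data: class 0 clusters near -2, class 1 near +2.
X = [rng.gauss(-2, 1) for _ in range(200)] + [rng.gauss(2, 1) for _ in range(200)]
y = [0] * 200 + [1] * 200

def train_threshold(X, y):
    """Minimal 'model': classify by the midpoint of the two class means."""
    m0 = sum(x for x, lab in zip(X, y) if lab == 0) / y.count(0)
    m1 = sum(x for x, lab in zip(X, y) if lab == 1) / y.count(1)
    return (m0 + m1) / 2

def accuracy(t, X, y):
    return sum((x > t) == lab for x, lab in zip(X, y)) / len(y)

acc_clean = accuracy(train_threshold(X, y), X, y)
acc_poisoned = accuracy(train_threshold(X, flip_labels(y, 0.8, rng)), X, y)
print(f"clean={acc_clean:.3f} poisoned={acc_poisoned:.3f}")
```

Evaluating both models on the original (clean) labels shows the poisoned training set pulls the threshold toward the targeted class and lowers accuracy; real attacks in the literature are far subtler (backdoor triggers, clean-label poisons), but the mechanism of corrupting training data to change learned behavior is the same.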

Machine Unlearning of Traffic State Estimation and Prediction

Jul 23, 2025

Fake or Real: The Impostor Hunt in Texts for Space Operations

Jul 17, 2025

Self-Adaptive and Robust Federated Spectrum Sensing without Benign Majority for Cellular Networks

Jul 16, 2025

A Bayesian Incentive Mechanism for Poison-Resilient Federated Learning

Jul 16, 2025

VisualTrap: A Stealthy Backdoor Attack on GUI Agents via Visual Grounding Manipulation

Jul 09, 2025

LLM Hypnosis: Exploiting User Feedback for Unauthorized Knowledge Injection to All Users

Jul 03, 2025

Tuning without Peeking: Provable Privacy and Generalization Bounds for LLM Post-Training

Jul 02, 2025

Winter Soldier: Backdooring Language Models at Pre-Training with Indirect Data Poisoning

Jun 17, 2025

Devil's Hand: Data Poisoning Attacks to Locally Private Graph Learning Protocols

Jun 11, 2025

Collapsing Sequence-Level Data-Policy Coverage via Poisoning Attack in Offline Reinforcement Learning

Jun 12, 2025