Data Poisoning


Data poisoning is the deliberate manipulation of a machine learning model's training data to degrade its performance or to implant hidden, attacker-controlled behavior such as backdoors.
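The definition above can be made concrete with a minimal, self-contained sketch of the classic label-flipping attack. The toy dataset, function names, and flip rate here are hypothetical illustrations for exposition; they are not drawn from any of the papers listed below.

```python
import random

def make_blobs(n=400, seed=0):
    """Two Gaussian clusters: class 0 near (0, 0), class 1 near (4, 4)."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        y = rng.randrange(2)
        c = 4.0 * y
        data.append(((rng.gauss(c, 1.0), rng.gauss(c, 1.0)), y))
    return data

def flip_labels(data, rate, seed=1):
    """Label-flipping poisoning: flip each training label with probability `rate`."""
    rng = random.Random(seed)
    return [(x, 1 - y) if rng.random() < rate else (x, y) for x, y in data]

def nearest_centroid(train):
    """Fit a nearest-centroid classifier; returns a predict(x) function."""
    sums = {0: [0.0, 0.0, 0], 1: [0.0, 0.0, 0]}
    for (x1, x2), y in train:
        s = sums[y]
        s[0] += x1
        s[1] += x2
        s[2] += 1
    cents = {y: (s[0] / s[2], s[1] / s[2]) for y, s in sums.items()}

    def predict(x):
        return min(cents, key=lambda y: (x[0] - cents[y][0]) ** 2
                                        + (x[1] - cents[y][1]) ** 2)
    return predict

def accuracy(predict, test):
    return sum(predict(x) == y for x, y in test) / len(test)

train, test = make_blobs(seed=0), make_blobs(seed=42)
clean_acc = accuracy(nearest_centroid(train), test)
# Flip 90% of training labels: the learned centroids effectively swap classes,
# so the poisoned model is wrong on most held-out points.
poisoned_acc = accuracy(nearest_centroid(flip_labels(train, rate=0.9)), test)
print(f"clean accuracy: {clean_acc:.2f}, poisoned accuracy: {poisoned_acc:.2f}")
```

The attack succeeds here only because the attacker controls a large fraction of the training labels; much of the literature below studies stealthier variants (backdoor triggers, gradient-guided sample selection) and defenses that tolerate a bounded fraction of poisoned or Byzantine contributions.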

LoRA as Oracle

Jan 16, 2026

Topology-Independent Robustness of the Weighted Mean under Label Poisoning Attacks in Heterogeneous Decentralized Learning

Jan 06, 2026

The Promptware Kill Chain: How Prompt Injections Gradually Evolved Into a Multi-Step Malware

Jan 14, 2026

CS-GBA: A Critical Sample-based Gradient-guided Backdoor Attack for Offline Reinforcement Learning

Jan 15, 2026

Merging Triggers, Breaking Backdoors: Defensive Poisoning for Instruction-Tuned Language Models

Jan 07, 2026

Multi-Agent Framework for Threat Mitigation and Resilience in AI-Based Systems

Dec 29, 2025

Robustness Certificates for Neural Networks against Adversarial Attacks

Dec 24, 2025

State Backdoor: Towards Stealthy Real-world Poisoning Attack on Vision-Language-Action Model in State Space

Jan 07, 2026

GShield: Mitigating Poisoning Attacks in Federated Learning

Dec 22, 2025

Byzantine-Robust Federated Learning Framework with Post-Quantum Secure Aggregation for Real-Time Threat Intelligence Sharing in Critical IoT Infrastructure

Jan 03, 2026