Data Poisoning

Data poisoning is the manipulation of a machine learning model's training data to degrade its performance or to implant hidden, attacker-controlled behavior such as backdoors.
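
The papers listed below study this attack surface and defenses against it. As a concrete illustration, here is a minimal sketch of one of the simplest poisoning attacks, label flipping, in which an attacker silently corrupts the labels of a fraction of the training set. The dataset, flip rates, and model choice are hypothetical, chosen for demonstration and not drawn from any of the papers below.

# Minimal label-flipping sketch. Dataset, flip rates, and model are
# illustrative assumptions; any scikit-learn-style classifier would do.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Clean, synthetic binary classification data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

def flip_labels(labels, rate):
    """Return a copy of `labels` with a random fraction `rate` flipped."""
    poisoned = labels.copy()
    idx = rng.choice(len(labels), size=int(rate * len(labels)), replace=False)
    poisoned[idx] = 1 - poisoned[idx]  # binary labels: 0 <-> 1
    return poisoned

# Train on increasingly poisoned labels, evaluate on clean test data.
for rate in (0.0, 0.1, 0.3):
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, flip_labels(y_train, rate))
    print(f"flip rate {rate:.0%}: clean test accuracy {model.score(X_test, y_test):.3f}")

Even this crude, untargeted corruption typically shows test accuracy degrading as the flip rate grows. The backdoor attacks studied in several of the papers below are the stealthier, targeted counterpart: clean-input accuracy stays intact while a trigger pattern is bound to attacker-chosen behavior.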

Inference-Time Backdoors via Hidden Instructions in LLM Chat Templates

Feb 05, 2026

Agent2Agent Threats in Safety-Critical LLM Assistants: A Human-Centric Taxonomy

Feb 05, 2026

Phantom Transfer: Data-level Defences are Insufficient Against Data Poisoning

Feb 03, 2026

Comparative Insights on Adversarial Machine Learning from Industry and Academia: A User-Study Approach

Feb 04, 2026

The Trigger in the Haystack: Extracting and Reconstructing LLM Backdoor Triggers

Feb 03, 2026

When Attention Betrays: Erasing Backdoor Attacks in Robotic Policies by Reconstructing Visual Tokens

Feb 03, 2026

Human Society-Inspired Approaches to Agentic AI Security: The 4C Framework

Feb 02, 2026

Trustworthy Blockchain-based Federated Learning for Electronic Health Records: Securing Participant Identity with Decentralized Identifiers and Verifiable Credentials

Feb 02, 2026

Safety-Efficacy Trade Off: Robustness against Data-Poisoning

Jan 31, 2026

TinyGuard: A Lightweight Byzantine Defense for Resource-Constrained Federated Learning via Statistical Update Fingerprints

Feb 02, 2026