Data Poisoning


Data poisoning is the manipulation of a model's training data to compromise its behavior: an attacker may inject corrupted or mislabeled samples to degrade overall accuracy, or plant backdoor triggers that leave benign performance intact while enabling targeted misbehavior.
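
The simplest instance is a label-flipping availability attack. The sketch below is a hypothetical illustration, not drawn from any paper listed here: it flips a fraction of training labels on synthetic data and compares the test accuracy of a clean versus a poisoned logistic regression model. The names flip_labels and poison_fraction are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary classification task.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

def flip_labels(y, poison_fraction, rng):
    """Return a copy of y with a random fraction of labels flipped (0 <-> 1)."""
    y_poisoned = y.copy()
    n_poison = int(poison_fraction * len(y))
    idx = rng.choice(len(y), size=n_poison, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]
    return y_poisoned

# Train on clean labels vs. labels with 30% flipped.
clean_acc = LogisticRegression(max_iter=1000).fit(
    X_train, y_train).score(X_test, y_test)
poisoned_acc = LogisticRegression(max_iter=1000).fit(
    X_train, flip_labels(y_train, 0.3, rng)).score(X_test, y_test)

print(f"clean accuracy:    {clean_acc:.3f}")
print(f"poisoned accuracy: {poisoned_acc:.3f}")  # typically noticeably lower
```

Backdoor attacks, the focus of several papers below, are subtler: instead of random flips, the attacker stamps a fixed trigger pattern onto a few samples and relabels only those, so the model behaves normally except when the trigger appears.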

A Geometric Approach to Problems in Optimization and Data Science
Apr 22, 2025

The Ultimate Cookbook for Invisible Poison: Crafting Subtle Clean-Label Text Backdoors with Style Attributes
Apr 24, 2025

A Client-level Assessment of Collaborative Backdoor Poisoning in Non-IID Federated Learning
Apr 21, 2025

OPUS-VFL: Incentivizing Optimal Privacy-Utility Tradeoffs in Vertical Federated Learning
Apr 22, 2025

Investigating cybersecurity incidents using large language models in latest-generation wireless networks
Apr 14, 2025

ControlNET: A Firewall for RAG-based LLM System
Apr 13, 2025

Exploring Backdoor Attack and Defense for LLM-empowered Recommendations
Apr 15, 2025

Detecting Instruction Fine-tuning Attack on Language Models with Influence Function
Apr 12, 2025

Secure Transfer Learning: Training Clean Models Against Backdoor in (Both) Pre-trained Encoders and Downstream Datasets
Apr 16, 2025

Diversity-aware Dual-promotion Poisoning Attack on Sequential Recommendation
Apr 09, 2025