
Wanlei Zhou

Osmosis Distillation: Model Hijacking with the Fewest Samples

Mar 05, 2026

From Spark to Fire: Modeling and Mitigating Error Cascades in LLM-Based Multi-Agent Collaboration

Mar 04, 2026

Turning Black Box into White Box: Dataset Distillation Leaks

Mar 01, 2026

Hide&Seek: Remove Image Watermarks with Negligible Cost via Pixel-wise Reconstruction

Mar 01, 2026

Guided Collaboration in Heterogeneous LLM-Based Multi-Agent Systems via Entropy-Based Understanding Assessment and Experience Retrieval

Feb 14, 2026

Forgetting Similar Samples: Can Machine Unlearning Do it Better?

Jan 11, 2026

Rethinking Bias in Generative Data Augmentation for Medical AI: a Frequency Recalibration Method

Nov 15, 2025

Graph Unlearning: Efficient Node Removal in Graph Neural Networks

Sep 05, 2025

Bias Amplification in RAG: Poisoning Knowledge Retrieval to Steer LLMs

Jun 13, 2025

Chain-of-Lure: A Synthetic Narrative-Driven Approach to Compromise Large Language Models

May 23, 2025