Leo Yu Zhang

Supply-Chain Poisoning Attacks Against LLM Coding Agent Skill Ecosystems
Apr 03, 2026

Credential Leakage in LLM Agent Skills: A Large-Scale Empirical Study
Apr 03, 2026

ARES: Scalable and Practical Gradient Inversion Attack in Federated Learning through Activation Recovery
Mar 18, 2026

Malicious Agent Skills in the Wild: A Large-Scale Security Empirical Study
Feb 06, 2026

UnlearnShield: Shielding Forgotten Privacy against Unlearning Inversion
Jan 28, 2026

Erosion Attack for Adversarial Training to Enhance Semantic Segmentation Robustness
Jan 21, 2026

Beyond Denial-of-Service: The Puppeteer's Attack for Fine-Grained Control in Ranking-Based Federated Learning
Jan 21, 2026

Less Is More -- Until It Breaks: Security Pitfalls of Vision Token Compression in Large Vision-Language Models
Jan 17, 2026

Gradient Structure Estimation under Label-Only Oracles via Spectral Sensitivity
Jan 17, 2026

Dual-View Inference Attack: Machine Unlearning Amplifies Privacy Exposure
Dec 18, 2025