Gelei Deng

Supply-Chain Poisoning Attacks Against LLM Coding Agent Skill Ecosystems

Apr 03, 2026

Credential Leakage in LLM Agent Skills: A Large-Scale Empirical Study

Apr 03, 2026

AutoEG: Exploiting Known Third-Party Vulnerabilities in Black-Box Web Applications

Apr 01, 2026

Mind Your HEARTBEAT! Claw Background Execution Inherently Enables Silent Memory Pollution

Mar 25, 2026

"Are You Sure?": An Empirical Study of Human Perception Vulnerability in LLM-Driven Agentic Systems

Feb 24, 2026

Do LLMs and VLMs Share Neurons for Inference? Evidence and Mechanisms of Cross-Modal Transfer

Feb 22, 2026

Malicious Agent Skills in the Wild: A Large-Scale Security Empirical Study

Feb 06, 2026

Risky-Bench: Probing Agentic Safety Risks under Real-World Deployment

Feb 03, 2026

Self-Guard: Defending Large Reasoning Models via Enhanced Self-Reflection

Jan 31, 2026

DECEIVE-AFC: Adversarial Claim Attacks against Search-Enabled LLM-based Fact-Checking Systems

Jan 31, 2026