Zhen Xiang

How Memory Management Impacts LLM Agents: An Empirical Study of Experience-Following Behavior

May 21, 2025

Doxing via the Lens: Revealing Privacy Leakage in Image Geolocation for Agentic Multi-Modal Large Reasoning Model

Apr 29, 2025

Large Language Model Empowered Privacy-Protected Framework for PHI Annotation in Clinical Notes

Apr 22, 2025

MMDT: Decoding the Trustworthiness and Safety of Multimodal Foundation Models

Mar 19, 2025

A Practical Memory Injection Attack against LLM Agents

Mar 05, 2025

Multi-Faceted Studies on Data Poisoning can Advance LLM Development

Feb 20, 2025

Unveiling Privacy Risks in LLM Agent Memory

Feb 17, 2025

SafeChain: Safety of Language Models with Long Chain-of-Thought Reasoning Capabilities

Feb 17, 2025

SafeAgentBench: A Benchmark for Safe Task Planning of Embodied LLM Agents

Dec 17, 2024

Data Free Backdoor Attacks

Dec 09, 2024