Zongjie Li

Taxonomy, Evaluation and Exploitation of IPI-Centric LLM Agent Defense Frameworks

Nov 19, 2025

Disabling Self-Correction in Retrieval-Augmented Generation via Stealthy Retriever Poisoning

Aug 27, 2025

SoK: Evaluating Jailbreak Guardrails for Large Language Models

Jun 12, 2025

Reasoning as a Resource: Optimizing Fast and Slow Thinking in Code Generation Models

Jun 11, 2025

IP Leakage Attacks Targeting LLM-Based Multi-Agent Systems

May 18, 2025

NAMET: Robust Massive Model Editing via Noise-Aware Memory Optimization

May 17, 2025

GuidedBench: Equipping Jailbreak Evaluation with Guidelines

Feb 24, 2025

API-guided Dataset Synthesis to Finetune Large Code Models

Aug 15, 2024

SelfDefend: LLMs Can Defend Themselves against Jailbreaking in a Practical Manner

Jun 08, 2024

Testing and Understanding Erroneous Planning in LLM Agents through Synthesized User Inputs

Apr 27, 2024