Yuekai Huang

Joint-GCG: Unified Gradient-Based Poisoning Attacks on Retrieval-Augmented Generation Systems

Jun 06, 2025

One Shot Dominance: Knowledge Poisoning Attack on Retrieval-Augmented Generation Systems

May 15, 2025

Mimicking the Familiar: Dynamic Command Generation for Information Theft Attacks in LLM Tool-Learning System

Feb 17, 2025

What External Knowledge is Preferred by LLMs? Characterizing and Exploring Chain of Evidence in Imperfect Context

Dec 17, 2024

From Allies to Adversaries: Manipulating LLM Tool-Calling through Adversarial Injection

Dec 13, 2024