Zi Liang

From Domains to Instances: Dual-Granularity Data Synthesis for LLM Unlearning

Jan 07, 2026

Class-feature Watermark: A Resilient Black-box Watermark Against Model Extraction Attacks

Nov 16, 2025

Reminiscence Attack on Residuals: Exploiting Approximate Machine Unlearning for Privacy

Jul 28, 2025

United Minds or Isolated Agents? Exploring Coordination of LLMs under Cognitive Load Theory

Jun 07, 2025

Does Low Rank Adaptation Lead to Lower Robustness against Training-Time Attacks?

May 19, 2025

How Vital is the Jurisprudential Relevance: Law Article Intervened Legal Case Retrieval and Matching

Feb 25, 2025

New Paradigm of Adversarial Training: Breaking Inherent Trade-Off between Accuracy and Robustness via Dummy Classes

Oct 16, 2024

Alignment-Aware Model Extraction Attacks on Large Language Models

Sep 04, 2024

Why Are My Prompts Leaked? Unraveling Prompt Extraction Threats in Customized Large Language Models

Aug 05, 2024

MERGE: Fast Private Text Generation

May 25, 2023