Changjiang Li

VModA: An Effective Framework for Adaptive NSFW Image Moderation

May 29, 2025

On the Security Risks of ML-based Malware Detection Systems: A Survey

May 16, 2025

RAPID: Retrieval Augmented Training of Differentially Private Diffusion Models

Feb 18, 2025

GraphRAG under Fire

Jan 23, 2025

CopyrightMeter: Revisiting Copyright Protection in Text-to-image Models

Nov 20, 2024

RobustKV: Defending Large Language Models against Jailbreak Attacks via KV Eviction

Oct 25, 2024

On the Difficulty of Defending Contrastive Learning against Backdoor Attacks

Dec 14, 2023

Model Extraction Attacks Revisited

Dec 08, 2023

Improving the Robustness of Transformer-based Large Language Models with Dynamic Attention

Nov 30, 2023

IMPRESS: Evaluating the Resilience of Imperceptible Perturbations Against Unauthorized Data Usage in Diffusion-Based Generative AI

Oct 30, 2023