Jiawei Kong

Grounding Language with Vision: A Conditional Mutual Information Calibrated Decoding Strategy for Reducing Hallucinations in LVLMs

May 26, 2025

Wolf Hidden in Sheep's Conversations: Toward Harmless Data-Based Backdoor Attacks for Jailbreaking Large Language Models

May 23, 2025

Your Language Model Can Secretly Write Like Humans: Contrastive Paraphrase Attacks on LLM-Generated Text Detectors

May 21, 2025

Neural Antidote: Class-Wise Prompt Tuning for Purifying Backdoors in Pre-trained Vision-Language Models

Feb 26, 2025

CLIP-Guided Networks for Transferable Targeted Attacks

Jul 14, 2024

One Perturbation is Enough: On Generating Universal Adversarial Perturbations against Vision-Language Pre-training Models

Jun 08, 2024

Privacy Leakage on DNNs: A Survey of Model Inversion Attacks and Defenses

Feb 06, 2024