Jinghui Chen

VQAttack: Transferable Adversarial Attacks on Visual Question Answering via Pre-trained Models

Feb 16, 2024
Ziyi Yin, Muchao Ye, Tianrong Zhang, Jiaqi Wang, Han Liu, Jinghui Chen, Ting Wang, Fenglong Ma

Federated Learning with Projected Trajectory Regularization

Dec 22, 2023
Tiejin Chen, Yuanpu Cao, Yujia Wang, Cho-Jui Hsieh, Jinghui Chen

On the Difficulty of Defending Contrastive Learning against Backdoor Attacks

Dec 14, 2023
Changjiang Li, Ren Pang, Bochuan Cao, Zhaohan Xi, Jinghui Chen, Shouling Ji, Ting Wang

Stealthy and Persistent Unalignment on Large Language Models via Backdoor Injections

Nov 15, 2023
Yuanpu Cao, Bochuan Cao, Jinghui Chen

IMPRESS: Evaluating the Resilience of Imperceptible Perturbations Against Unauthorized Data Usage in Diffusion-Based Generative AI

Oct 30, 2023
Bochuan Cao, Changjiang Li, Ting Wang, Jinyuan Jia, Bo Li, Jinghui Chen

VLAttack: Multimodal Adversarial Attacks on Vision-Language Tasks via Pre-trained Models

Oct 07, 2023
Ziyi Yin, Muchao Ye, Tianrong Zhang, Tianyu Du, Jinguo Zhu, Han Liu, Jinghui Chen, Ting Wang, Fenglong Ma

On the Safety of Open-Sourced Large Language Models: Does Alignment Really Prevent Them From Being Misused?

Oct 02, 2023
Hangfan Zhang, Zhimeng Guo, Huaisheng Zhu, Bochuan Cao, Lu Lin, Jinyuan Jia, Jinghui Chen, Dinghao Wu

Defending Pre-trained Language Models as Few-shot Learners against Backdoor Attacks

Sep 23, 2023
Zhaohan Xi, Tianyu Du, Changjiang Li, Ren Pang, Shouling Ji, Jinghui Chen, Fenglong Ma, Ting Wang

Defending Against Alignment-Breaking Attacks via Robustly Aligned LLM

Sep 18, 2023
Bochuan Cao, Yuanpu Cao, Lu Lin, Jinghui Chen

On the Vulnerability of Backdoor Defenses for Federated Learning

Jan 19, 2023
Pei Fang, Jinghui Chen
