Jindong Gu

Latent Guard: a Safety Framework for Text-to-image Generation

Apr 11, 2024
Runtao Liu, Ashkan Khakzar, Jindong Gu, Qifeng Chen, Philip Torr, Fabio Pizzati

Responsible Generative AI: What to Generate and What Not

Apr 08, 2024
Jindong Gu

Red Teaming GPT-4V: Are GPT-4V Safe Against Uni/Multi-Modal Jailbreak Attacks?

Apr 04, 2024
Shuo Chen, Zhen Han, Bailan He, Zifeng Ding, Wenqian Yu, Philip Torr, Volker Tresp, Jindong Gu

Model-agnostic Origin Attribution of Generated Images with Few-shot Examples

Apr 03, 2024
Fengyuan Liu, Haochen Luo, Yiming Li, Philip Torr, Jindong Gu

As Firm As Their Foundations: Can open-sourced foundation models be used to create adversarial examples for downstream tasks?

Mar 19, 2024
Anjun Hu, Jindong Gu, Francesco Pinto, Konstantinos Kamnitsas, Philip Torr

An Image Is Worth 1000 Lies: Adversarial Transferability across Prompts on Vision-Language Models

Mar 14, 2024
Haochen Luo, Jindong Gu, Fengyuan Liu, Philip Torr

Hide in Thicket: Generating Imperceptible and Rational Adversarial Perturbations on 3D Point Clouds

Mar 08, 2024
Tianrui Lou, Xiaojun Jia, Jindong Gu, Li Liu, Siyuan Liang, Bangyan He, Xiaochun Cao

Stop Reasoning! When Multimodal LLMs with Chain-of-Thought Reasoning Meets Adversarial Images

Feb 22, 2024
Zefeng Wang, Zhen Han, Shuo Chen, Fan Xue, Zifeng Ding, Xun Xiao, Volker Tresp, Philip Torr, Jindong Gu

Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images

Jan 20, 2024
Kuofeng Gao, Yang Bai, Jindong Gu, Shu-Tao Xia, Philip Torr, Zhifeng Li, Wei Liu

Does Few-shot Learning Suffer from Backdoor Attacks?

Dec 31, 2023
Xinwei Liu, Xiaojun Jia, Jindong Gu, Yuan Xun, Siyuan Liang, Xiaochun Cao
