Yinpeng Dong

Evil Geniuses: Delving into the Safety of LLM-based Agents

Nov 20, 2023
Yu Tian, Xiao Yang, Jingyuan Zhang, Yinpeng Dong, Hang Su

The rapid advancements in large language models (LLMs) have led to a resurgence in LLM-based agents, which demonstrate impressive human-like behaviors and cooperative capabilities in various interactions and strategy formulations. However, evaluating the safety of LLM-based agents remains a complex challenge. This paper carefully crafts a series of manual jailbreak prompts and a virtual chat-powered evil plan development team, dubbed Evil Geniuses, to thoroughly probe the safety aspects of these agents. Our investigation reveals three notable phenomena: 1) LLM-based agents exhibit reduced robustness against malicious attacks; 2) the attacked agents can provide more nuanced responses; 3) the improper responses they produce are more difficult to detect. These insights prompt us to question the effectiveness of LLM-based attacks on agents, highlighting vulnerabilities at various levels and across different role specializations within the system and agents. Extensive evaluation and discussion reveal that LLM-based agents face significant challenges in safety and yield insights for future research. Our code is available at https://github.com/T1aNS1R/Evil-Geniuses.

* 13 pages 
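To make the probing setup concrete, below is a minimal sketch of how a role-specialized "evil plan development team" could be wired together. The roles, prompts, and the query_llm stub are illustrative assumptions, not the released Evil Geniuses code.

# Minimal sketch (illustrative assumptions only) of a chat-powered red-teaming
# "team" that probes an LLM-based agent's safety via role specialization.
# `query_llm` is a hypothetical stand-in for any chat-completion API.

def query_llm(system_prompt: str, user_prompt: str) -> str:
    # Placeholder: a real harness would call the target agent here; this stub
    # just echoes the request so the script runs end to end.
    return f"[response of agent primed with: {system_prompt[:40]}...] {user_prompt}"

# Role specializations for the virtual team (illustrative prompts only).
roles = {
    "Harmful Prompt Writer": "You rewrite a benign task description so that it "
                             "pressures the agent toward unsafe behavior.",
    "Suitability Reviewer":  "You check whether the rewritten prompt still looks "
                             "innocuous enough to pass surface-level filters.",
    "Toxicity Tester":       "You send the reviewed prompt to the target agent and "
                             "record whether its response violates the safety policy.",
}

task = "Plan a marketing campaign."   # benign seed task used for probing
artifact = task
for role, system_prompt in roles.items():
    artifact = query_llm(system_prompt, artifact)
    print(f"{role}: {artifact}")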

How Robust is Google's Bard to Adversarial Image Attacks?

Sep 21, 2023
Yinpeng Dong, Huanran Chen, Jiawei Chen, Zhengwei Fang, Xiao Yang, Yichi Zhang, Yu Tian, Hang Su, Jun Zhu

Multimodal Large Language Models (MLLMs) that integrate text and other modalities (especially vision) have achieved unprecedented performance in various multimodal tasks. However, due to the unsolved adversarial robustness problem of vision models, MLLMs can face more severe safety and security risks once vision inputs are introduced. In this work, we study the adversarial robustness of Google's Bard, a chatbot competitive with ChatGPT that recently released its multimodal capability, to better understand the vulnerabilities of commercial MLLMs. By attacking white-box surrogate vision encoders or MLLMs, the generated adversarial examples can mislead Bard into outputting wrong image descriptions with a 22% success rate based solely on their transferability. We show that the adversarial examples can also attack other MLLMs, e.g., with a 26% attack success rate against Bing Chat and an 86% attack success rate against ERNIE Bot. Moreover, we identify two defense mechanisms of Bard, namely face detection and toxicity detection of images. We design corresponding attacks to evade these defenses, demonstrating that the current defenses of Bard are also vulnerable. We hope this work can deepen our understanding of the robustness of MLLMs and facilitate future research on defenses. Our code is available at https://github.com/thu-ml/Attack-Bard.

* Technical report 
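As an illustration of the transfer-attack setup described above, here is a minimal sketch that runs PGD in the feature space of a white-box surrogate encoder. Torchvision's ResNet-50 (with random weights so the snippet runs offline) stands in for the surrogate vision encoders actually attacked, and the loss, step size, and budget are assumptions rather than the paper's settings.

# Minimal sketch (assumptions, not the paper's code): perturb an image to push
# its features away from the original on a white-box surrogate encoder, in the
# hope that the perturbation transfers to a black-box MLLM.
import torch
import torchvision.models as models

# Random weights keep the sketch offline-friendly; a pretrained vision encoder
# (e.g., a CLIP image tower) would be used as the surrogate in practice.
surrogate = models.resnet50(weights=None).eval()
feature_extractor = torch.nn.Sequential(*list(surrogate.children())[:-1])  # drop fc

def feature_attack(x, eps=16/255, alpha=1/255, steps=50):
    """L_inf PGD that maximizes feature distance to the clean image."""
    with torch.no_grad():
        clean_feat = feature_extractor(x)
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        feat = feature_extractor(x + delta)
        loss = torch.norm(feat - clean_feat)            # push features apart
        loss.backward()
        delta.data = (delta + alpha * delta.grad.sign()).clamp(-eps, eps)
        delta.data = (x + delta.data).clamp(0, 1) - x   # stay in valid image range
        delta.grad.zero_()
    return (x + delta).detach()

adv = feature_attack(torch.rand(1, 3, 224, 224))        # toy input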

Robustness and Generalizability of Deepfake Detection: A Study with Diffusion Models

Sep 05, 2023
Haixu Song, Shiyu Huang, Yinpeng Dong, Wei-Wei Tu

The rise of deepfake images, especially of well-known personalities, poses a serious threat to the dissemination of authentic information. To tackle this, we present a thorough investigation into how deepfakes are produced and how they can be identified. The cornerstone of our research is a rich collection of artificial celebrity faces, titled DeepFakeFace (DFF). We crafted the DFF dataset using advanced diffusion models and have shared it with the community through online platforms. This data serves as a robust foundation to train and test algorithms designed to spot deepfakes. We carried out a thorough review of the DFF dataset and propose two evaluation methods to gauge the strength and adaptability of deepfake recognition tools. The first method tests whether an algorithm trained on one type of fake image can recognize those produced by other methods. The second evaluates the algorithm's performance on imperfect images, such as those that are blurry, of low quality, or compressed. Given the varied results across deepfake methods and image degradations, our findings stress the need for better deepfake detectors. Our DFF dataset and tests aim to boost the development of more effective tools against deepfakes.

* 8 pages, 5 figures 
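The second evaluation protocol can be sketched as follows; the particular degradations (blur, heavy JPEG compression, downscaling) and the detector stub are illustrative assumptions, not the released DFF benchmark code.

# Minimal sketch (assumptions): degrade test images and measure how a
# detector's accuracy drops. `detector` is a hypothetical stand-in for a
# trained deepfake classifier.
import io
from PIL import Image, ImageFilter

def degrade(img: Image.Image, mode: str) -> Image.Image:
    if mode == "blur":
        return img.filter(ImageFilter.GaussianBlur(radius=3))
    if mode == "jpeg":                       # strong compression artifacts
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=10)
        buf.seek(0)
        return Image.open(buf).convert("RGB")
    if mode == "lowres":                     # down- and up-sampling
        w, h = img.size
        return img.resize((w // 4, h // 4)).resize((w, h))
    return img

def detector(img: Image.Image) -> bool:
    # Hypothetical stub: always predicts "fake".
    return True

images, labels = [Image.new("RGB", (256, 256))], [True]   # toy data
for mode in ["none", "blur", "jpeg", "lowres"]:
    acc = sum(detector(degrade(im, mode)) == y for im, y in zip(images, labels)) / len(images)
    print(f"{mode:7s} accuracy: {acc:.2f}")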

Root Pose Decomposition Towards Generic Non-rigid 3D Reconstruction with Monocular Videos

Aug 19, 2023
Yikai Wang, Yinpeng Dong, Fuchun Sun, Xiao Yang

This work focuses on the 3D reconstruction of non-rigid objects from monocular RGB video sequences. Concretely, we aim to build high-fidelity models for generic object categories and casually captured scenes. To this end, we do not assume known root poses of objects, and do not utilize category-specific templates or dense pose priors. The key idea of our method, Root Pose Decomposition (RPD), is to maintain a per-frame root pose transformation while building a dense field of local transformations to rectify the root pose. The optimization of the local transformations is performed by point registration to the canonical space. We also adapt RPD to multi-object scenarios with object occlusions and individual differences. As a result, RPD enables non-rigid 3D reconstruction for complicated scenarios containing objects with large deformations, complex motion patterns, occlusions, and scale diversity across individuals. Such a pipeline potentially scales to diverse sets of objects in the wild. We experimentally show that RPD surpasses state-of-the-art methods on the challenging DAVIS, OVIS, and AMA datasets.

* ICCV 2023. Project Page: https://rpd-share.github.io 
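The decomposition itself can be sketched as a per-frame root pose composed with a dense field of local corrections; the toy network below is an assumption meant only to show how the two pieces combine, not the paper's implementation.

# Minimal sketch (assumptions): each frame gets a global root pose, and a dense
# field of local transformations rectifies that root pose per 3D point before
# registration to the canonical space.
import torch
import torch.nn as nn

class RootPoseDecomposition(nn.Module):
    def __init__(self, num_frames: int):
        super().__init__()
        self.root_rot = nn.Parameter(torch.zeros(num_frames, 3))    # axis-angle per frame
        self.root_trans = nn.Parameter(torch.zeros(num_frames, 3))
        self.local_field = nn.Sequential(                           # per-point correction
            nn.Linear(3 + 1, 64), nn.ReLU(), nn.Linear(64, 3))      # input: (x, y, z, t)

    def forward(self, pts, frame_idx):
        # Rotation via matrix exponential of the skew-symmetric matrix of w.
        w = self.root_rot[frame_idx]
        zero = torch.zeros(())
        K = torch.stack([
            torch.stack([zero, -w[2], w[1]]),
            torch.stack([w[2], zero, -w[0]]),
            torch.stack([-w[1], w[0], zero]),
        ])
        R = torch.matrix_exp(K)
        rooted = pts @ R.T + self.root_trans[frame_idx]              # root pose
        t = torch.full((pts.shape[0], 1), float(frame_idx))
        return rooted + self.local_field(torch.cat([pts, t], dim=-1))  # local rectification

model = RootPoseDecomposition(num_frames=8)
canonical_pts = torch.rand(1024, 3)
deformed = model(canonical_pts, frame_idx=2)   # points posed into frame 2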

Improving Viewpoint Robustness for Visual Recognition via Adversarial Training

Jul 21, 2023
Shouwei Ruan, Yinpeng Dong, Hang Su, Jianteng Peng, Ning Chen, Xingxing Wei

Viewpoint invariance remains challenging for visual recognition in the 3D world, as altering the viewing direction can significantly impact predictions for the same object. While substantial efforts have been dedicated to making neural networks invariant to 2D image translations and rotations, viewpoint invariance is rarely investigated. Motivated by the success of adversarial training in enhancing model robustness, we propose Viewpoint-Invariant Adversarial Training (VIAT) to improve the viewpoint robustness of image classifiers. Regarding viewpoint transformation as an attack, we formulate VIAT as a minimax optimization problem, where the inner maximization characterizes diverse adversarial viewpoints by learning a Gaussian mixture distribution based on the proposed attack method GMVFool. The outer minimization obtains a viewpoint-invariant classifier by minimizing the expected loss over the worst-case viewpoint distributions, which can be shared across different objects within the same category. Based on GMVFool, we contribute a large-scale dataset called ImageNet-V+ to benchmark viewpoint robustness. Experimental results show that VIAT significantly improves the viewpoint robustness of various image classifiers based on the diversity of adversarial viewpoints generated by GMVFool. Furthermore, we propose ViewRS, a certified viewpoint robustness method that provides a certified radius and accuracy to demonstrate the effectiveness of VIAT from a theoretical perspective.

* 14 pages, 12 figures. arXiv admin note: substantial text overlap with arXiv:2307.10235 
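The inner maximization can be sketched as fitting a Gaussian mixture over viewpoint parameters with a score-based (REINFORCE-style) update; the toy loss below stands in for the classifier-plus-renderer pipeline, and all hyperparameters are assumptions rather than GMVFool's.

# Minimal sketch (assumptions): keep a Gaussian mixture over viewpoint
# parameters (e.g., azimuth, elevation, distance) and nudge it toward
# viewpoints that maximize the classifier's loss.
import torch

K = 3                                            # mixture components
logits = torch.zeros(K, requires_grad=True)      # mixture weights
means = torch.randn(K, 3, requires_grad=True)    # per-component viewpoint means
log_std = torch.zeros(K, 3, requires_grad=True)

def toy_loss(viewpoint):                         # stand-in for loss(classifier(render(v)))
    return (viewpoint ** 2).sum(-1)

opt = torch.optim.Adam([logits, means, log_std], lr=0.05)
for _ in range(100):
    comp = torch.distributions.Categorical(logits=logits).sample((64,))
    dist = torch.distributions.Normal(means[comp], log_std[comp].exp())
    v = dist.sample()                                     # sampled viewpoints
    log_prob = dist.log_prob(v).sum(-1) + torch.log_softmax(logits, -1)[comp]
    reward = toy_loss(v).detach()                         # classification loss as reward
    loss = -(log_prob * reward).mean()                    # REINFORCE: maximize expected loss
    opt.zero_grad(); loss.backward(); opt.step()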

Towards Viewpoint-Invariant Visual Recognition via Adversarial Training

Jul 16, 2023
Shouwei Ruan, Yinpeng Dong, Hang Su, Jianteng Peng, Ning Chen, Xingxing Wei

Visual recognition models are not invariant to viewpoint changes in the 3D world, as different viewing directions can dramatically affect predictions of the same object. Although many efforts have been devoted to making neural networks invariant to 2D image translations and rotations, viewpoint invariance is rarely investigated. As most models process images in the perspective view, it is challenging to impose invariance to 3D viewpoint changes based only on 2D inputs. Motivated by the success of adversarial training in promoting model robustness, we propose Viewpoint-Invariant Adversarial Training (VIAT) to improve the viewpoint robustness of common image classifiers. By regarding viewpoint transformation as an attack, VIAT is formulated as a minimax optimization problem, where the inner maximization characterizes diverse adversarial viewpoints by learning a Gaussian mixture distribution based on a new attack, GMVFool, while the outer minimization trains a viewpoint-invariant classifier by minimizing the expected loss over the worst-case adversarial viewpoint distributions. To further improve generalization performance, a distribution sharing strategy is introduced, leveraging the transferability of adversarial viewpoints across objects. Experiments validate the effectiveness of VIAT in improving the viewpoint robustness of various image classifiers based on the diversity of adversarial viewpoints generated by GMVFool.

* Accepted by ICCV 2023 
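Complementing the inner-maximization sketch above, the outer minimization can be sketched as ordinary training on images rendered from viewpoints sampled from a category-level shared distribution; the render stub, toy classifier, and fixed distribution below are assumptions, not the released VIAT code.

# Minimal sketch (assumptions): sample adversarial viewpoints from a shared
# per-category distribution, render, and train the classifier on them.
import torch
import torch.nn as nn

def render(obj_id: int, viewpoint: torch.Tensor) -> torch.Tensor:
    # Hypothetical renderer stub: returns a dummy image so the loop runs end to end.
    return torch.rand(3, 64, 64)

classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 10))
opt = torch.optim.SGD(classifier.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

# One shared adversarial viewpoint distribution per category (distribution sharing).
shared_mean, shared_std = torch.zeros(3), torch.ones(3)

for step in range(10):
    obj_id, label = step % 5, torch.tensor([step % 10])
    viewpoint = shared_mean + shared_std * torch.randn(3)   # worst-case-ish sample
    image = render(obj_id, viewpoint).unsqueeze(0)
    loss = criterion(classifier(image), label)              # expected loss over viewpoints
    opt.zero_grad(); loss.backward(); opt.step()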

Distributional Modeling for Location-Aware Adversarial Patches

Jun 28, 2023
Xingxing Wei, Shouwei Ruan, Yinpeng Dong, Hang Su

Adversarial patches are one of the important forms of adversarial attack in the physical world. To improve the naturalness and aggressiveness of existing adversarial patches, location-aware patches have been proposed, where the patch's location on the target object is integrated into the optimization process to perform attacks. Although effective, efficiently finding the optimal location for placing a patch is challenging, especially under black-box attack settings. In this paper, we propose the Distribution-Optimized Adversarial Patch (DOPatch), a novel method that optimizes a multimodal distribution of adversarial locations instead of individual locations. DOPatch has several benefits: Firstly, we find that the distributions of adversarial locations are quite similar across different models, so we can achieve efficient query-based attacks on unseen models using a distributional prior optimized on a surrogate model. Secondly, DOPatch can generate diverse adversarial samples by characterizing the distribution of adversarial locations. We can thus improve a model's robustness to location-aware patches via a carefully designed Distributional-Modeling Adversarial Training (DOP-DMAT). We evaluate DOPatch on various face recognition and image recognition tasks and demonstrate its superiority and efficiency over existing methods. We also conduct extensive ablation studies and analyses to validate the effectiveness of our method and provide insights into the distribution of adversarial locations.
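The location-distribution idea can be sketched with a single Gaussian over the patch corner and a score-based update from black-box queries; the target_score stub and the unimodal distribution are simplifying assumptions (DOPatch models a multimodal distribution), not the paper's code.

# Minimal sketch (assumptions): keep a distribution over patch locations,
# sample from it, paste the patch, and update the distribution from black-box
# query scores. The target model is a hypothetical stub.
import torch

def target_score(image: torch.Tensor) -> torch.Tensor:
    # Stand-in for a black-box query: confidence on the true class.
    return image.mean()

image, patch = torch.rand(3, 224, 224), torch.rand(3, 32, 32)
loc_mean = torch.tensor([96.0, 96.0], requires_grad=True)   # (x, y) of patch corner
loc_logstd = torch.zeros(2, requires_grad=True)
opt = torch.optim.Adam([loc_mean, loc_logstd], lr=1.0)

for _ in range(50):
    dist = torch.distributions.Normal(loc_mean, loc_logstd.exp())
    loc = dist.sample().clamp(0, 224 - 32)
    x, y = int(loc[0]), int(loc[1])
    attacked = image.clone()
    attacked[:, y:y + 32, x:x + 32] = patch                  # paste patch at sampled location
    reward = -target_score(attacked).detach()                # lower true-class confidence = better
    loss = -(dist.log_prob(loc).sum() * reward)              # REINFORCE-style update
    opt.zero_grad(); loss.backward(); opt.step()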


Evaluating the Robustness of Text-to-image Diffusion Models against Real-world Attacks

Jun 16, 2023
Hongcheng Gao, Hao Zhang, Yinpeng Dong, Zhijie Deng

Text-to-image (T2I) diffusion models (DMs) have shown promise in generating high-quality images from textual descriptions. The real-world applications of these models require particular attention to their safety and fidelity, but this has not been sufficiently explored. One fundamental question is whether existing T2I DMs are robust against variations over input texts. To answer this question, this work provides the first robustness evaluation of T2I DMs against real-world attacks. Unlike prior studies that focus on malicious attacks involving apocryphal alterations to the input texts, we consider an attack space spanned by realistic errors (e.g., typos, glyph substitutions, phonetic variants) that humans can make, ensuring semantic consistency. Given the inherent randomness of the generation process, we develop novel distribution-based attack objectives to mislead T2I DMs. We perform attacks in a black-box manner without any knowledge of the model. Extensive experiments demonstrate the effectiveness of our method in attacking popular T2I DMs and simultaneously reveal their non-trivial robustness issues. Moreover, we provide an in-depth analysis of our method to show that it does not solely attack the text encoder in T2I DMs.
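A minimal sketch of the attack space follows: human-plausible typo and glyph perturbations of a prompt are generated and scored by how much they shift the model's output distribution. The distribution_shift stub and the specific perturbation operators are assumptions, not the paper's attack objectives.

# Minimal sketch (assumptions): generate realistic prompt perturbations and
# keep the variant that most changes the generated-image distribution.
import random

GLYPHS = {"a": "а", "o": "о", "e": "е"}          # Latin -> look-alike Cyrillic

def typo(text: str) -> str:                      # swap two adjacent characters
    i = random.randrange(len(text) - 1)
    return text[:i] + text[i + 1] + text[i] + text[i + 2:]

def glyph(text: str) -> str:                     # substitute look-alike glyphs
    return "".join(GLYPHS.get(c, c) for c in text)

def distribution_shift(original_prompt: str, perturbed_prompt: str) -> float:
    # Hypothetical stub for a distance between the distributions of images
    # generated from the two prompts.
    return random.random()

prompt = "a photo of an astronaut riding a horse"
candidates = [typo(prompt) for _ in range(5)] + [glyph(prompt)]
best = max(candidates, key=lambda p: distribution_shift(prompt, p))
print("most damaging perturbation:", best)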


DIFFender: Diffusion-Based Adversarial Defense against Patch Attacks in the Physical World

Jun 15, 2023
Caixin Kang, Yinpeng Dong, Zhengyi Wang, Shouwei Ruan, Hang Su, Xingxing Wei

Adversarial attacks in the physical world, particularly patch attacks, pose significant threats to the robustness and reliability of deep learning models. Developing reliable defenses against patch attacks is crucial for real-world applications, yet current research in this area is severely lacking. In this paper, we propose DIFFender, a novel defense method that leverages a pre-trained diffusion model to both localize and defend against potential adversarial patch attacks. DIFFender is designed as a pipeline consisting of two main stages: patch localization and restoration. In the localization stage, we exploit the intriguing properties of a diffusion model to effectively identify the locations of adversarial patches. In the restoration stage, we employ a text-guided diffusion model to eliminate adversarial regions in the image while preserving the integrity of the visual content. Additionally, we design a few-shot prompt-tuning algorithm that jointly optimizes the two stages and facilitates simple and efficient tuning, enabling the learned representations to transfer easily to downstream tasks. We conduct extensive experiments on image classification and face recognition to demonstrate that DIFFender exhibits superior robustness under strong adaptive attacks and generalizes well across various scenarios, diverse classifiers, and multiple attack methods.
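The two-stage pipeline can be sketched as localization followed by restoration; below, a crude local-variance heuristic and mean filling stand in for the diffusion-based localization and the text-guided diffusion inpainting, purely to show the data flow and not DIFFender's actual method.

# Minimal sketch (assumptions throughout): stage 1 localizes a suspicious
# region, stage 2 restores it.
import numpy as np

def localize_patch(image: np.ndarray, win: int = 16) -> np.ndarray:
    """Return a boolean mask over win x win blocks with unusually high variance."""
    h, w, _ = image.shape
    mask = np.zeros((h, w), dtype=bool)
    scores = {}
    for y in range(0, h - win + 1, win):
        for x in range(0, w - win + 1, win):
            scores[(y, x)] = image[y:y + win, x:x + win].var()
    thresh = np.mean(list(scores.values())) + 2 * np.std(list(scores.values()))
    for (y, x), s in scores.items():
        if s > thresh:
            mask[y:y + win, x:x + win] = True
    return mask

def restore(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Fill masked pixels with the mean of unmasked pixels (inpainting stand-in)."""
    out = image.copy()
    out[mask] = image[~mask].mean(axis=0)
    return out

img = np.random.rand(128, 128, 3)
img[32:64, 32:64] = np.random.rand(32, 32, 3) > 0.5        # a noisy "patch"
defended = restore(img, localize_patch(img))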


Robust Classification via a Single Diffusion Model

May 24, 2023
Huanran Chen, Yinpeng Dong, Zhengyi Wang, Xiao Yang, Chengqi Duan, Hang Su, Jun Zhu

Recently, diffusion models have been successfully applied to improving the adversarial robustness of image classifiers by purifying adversarial noise or generating realistic data for adversarial training. However, diffusion-based purification can be evaded by stronger adaptive attacks, while adversarial training does not perform well under unseen threats, exposing the inherent limitations of these methods. To better harness the expressive power of diffusion models, in this paper we propose the Robust Diffusion Classifier (RDC), a generative classifier constructed from a pre-trained diffusion model to be adversarially robust. Our method first maximizes the data likelihood of a given input and then predicts the class probabilities of the optimized input using the conditional likelihood of the diffusion model through Bayes' theorem. Since our method does not require training on particular adversarial attacks, we demonstrate that it generalizes better to defending against multiple unseen threats. In particular, RDC achieves $73.24\%$ robust accuracy against $\ell_\infty$ norm-bounded perturbations with $\epsilon_\infty=8/255$ on CIFAR-10, surpassing the previous state-of-the-art adversarial training models by $+2.34\%$. These findings highlight the potential of generative classifiers built on diffusion models for adversarial robustness, compared with the commonly studied discriminative classifiers.
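The classification rule can be sketched directly from Bayes' theorem: with a uniform class prior, pick the class whose conditional denoising loss, a proxy for -log p(x|y), is smallest. The toy denoiser and simplistic noising schedule below are assumptions, not the RDC implementation, and the initial likelihood-maximization step is omitted.

# Minimal sketch (assumptions): a generative classifier built from a
# conditional diffusion model; the per-class denoising loss approximates
# -log p(x | y), and Bayes with a uniform prior reduces to an argmin.
import torch
import torch.nn as nn

NUM_CLASSES, DIM = 10, 32

class ToyConditionalDenoiser(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_CLASSES, DIM)
        self.net = nn.Sequential(nn.Linear(2 * DIM + 1, 128), nn.ReLU(), nn.Linear(128, DIM))

    def forward(self, x_t, t, y):
        cond = self.embed(y)
        return self.net(torch.cat([x_t, cond, t[:, None]], dim=-1))  # predicts the noise

@torch.no_grad()
def diffusion_classify(model, x, n_timesteps=20):
    losses = []
    for y in range(NUM_CLASSES):
        y_batch = torch.full((x.shape[0],), y, dtype=torch.long)
        loss = 0.0
        for _ in range(n_timesteps):                         # Monte Carlo over t and noise
            t = torch.rand(x.shape[0])
            noise = torch.randn_like(x)
            x_t = (1 - t[:, None]) * x + t[:, None] * noise  # simplistic noising schedule
            loss += ((model(x_t, t, y_batch) - noise) ** 2).mean()
        losses.append(loss / n_timesteps)                    # ~ -log p(x | y) up to constants
    return int(torch.stack(losses).argmin())                 # Bayes with a uniform prior

model = ToyConditionalDenoiser().eval()
pred = diffusion_classify(model, torch.randn(1, DIM))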
