Qi Yi

Self-driven Grounding: Large Language Model Agents with Automatical Language-aligned Skill Learning

Sep 04, 2023
Shaohui Peng, Xing Hu, Qi Yi, Rui Zhang, Jiaming Guo, Di Huang, Zikang Tian, Ruizhi Chen, Zidong Du, Qi Guo, Yunji Chen, Ling Li

Large language models (LLMs) show powerful automatic reasoning and planning capabilities backed by a wealth of semantic knowledge about the human world. However, the grounding problem still hinders the application of LLMs in real-world environments. Existing studies try to fine-tune the LLM or rely on pre-defined behavior APIs to bridge the LLM and the environment, which not only costs huge human effort to customize for every single task but also weakens the generality of LLMs. To autonomously ground the LLM in the environment, we propose the Self-Driven Grounding (SDG) framework, which automatically and progressively grounds the LLM via self-driven skill learning. SDG first employs the LLM to propose hypotheses about sub-goals for achieving tasks and then verifies the feasibility of these hypotheses by interacting with the underlying environment. Once verified, SDG learns generalized skills under the guidance of the successfully grounded sub-goals. These skills can then be used to accomplish more complex tasks that fail to pass the verification phase. Evaluated on the well-known instruction-following benchmark BabyAI, SDG achieves performance on the most challenging tasks comparable to imitation learning methods that require millions of demonstrations, proving the effectiveness of the learned skills and showing the feasibility and efficiency of our framework.
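
To make the loop concrete, here is a minimal Python sketch of the propose-verify-learn cycle described above; the environment, the `propose_subgoals` helper, and every other name are hypothetical stand-ins, not the paper's implementation.

```python
import random

def propose_subgoals(task: str) -> list[str]:
    """Hypothetical LLM call: return candidate sub-goal descriptions for a task."""
    return [f"{task}: step {i}" for i in range(3)]

def try_subgoal(env, subgoal: str, budget: int = 10) -> bool:
    """Verify a sub-goal hypothesis by interacting with the environment."""
    return any(env.step(subgoal) for _ in range(budget))

class ToyEnv:
    """Stand-in environment: step() reports whether the sub-goal was reached."""
    def step(self, subgoal: str) -> bool:
        return random.random() < 0.3

def self_driven_grounding(tasks, env):
    skills = {}  # verified sub-goal -> learned skill (placeholder)
    for task in tasks:
        for subgoal in propose_subgoals(task):
            if try_subgoal(env, subgoal):
                # The real framework would train a generalized skill policy
                # under the guidance of the verified sub-goal.
                skills[subgoal] = f"skill_for({subgoal})"
    return skills

print(self_driven_grounding(["open the door"], ToyEnv()))
```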

Introducing Foundation Models as Surrogate Models: Advancing Towards More Practical Adversarial Attacks

Jul 14, 2023
Jiaming Zhang, Jitao Sang, Qi Yi, Changsheng Xu

Recently, the no-box adversarial attack, in which the attacker lacks access to the model's architecture, weights, and training data, has become the most practical and challenging attack setup. However, the potential and flexibility of surrogate model selection in the no-box setting remain underexplored. Inspired by the burgeoning interest in utilizing foundation models to address downstream tasks, this paper adopts the idea of 1) recasting the adversarial attack as a downstream task, specifically image noise generation, and 2) introducing foundation models as surrogate models. Harnessing the concept of non-robust features, we elaborate two guiding principles for surrogate model selection that explain why a foundation model is an optimal choice for this role. Paradoxically, however, we observe that these foundation models underperform. Analyzing this unexpected behavior in the feature space, we attribute the lackluster performance of foundation models (e.g., CLIP) to their significant representational capacity and, conversely, their lack of discriminative power. To mitigate this issue, we propose a margin-based loss strategy for fine-tuning foundation models on the target images. The experimental results verify that our approach, which employs the basic Fast Gradient Sign Method (FGSM) attack algorithm, outperforms other, more sophisticated algorithms. We conclude by advocating that the research community consider surrogate models a crucial determinant of the effectiveness of adversarial attacks in no-box settings. The implications of our work are relevant to improving the efficacy of such adversarial attacks and the overall robustness of AI systems.
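
As a rough illustration (not the paper's code), the sketch below runs a single FGSM step against a generic differentiable image encoder standing in for a foundation-model surrogate; `fgsm_with_surrogate` and the toy encoder are assumptions, and the margin-based fine-tuning stage is omitted.

```python
import torch
import torch.nn.functional as F

def fgsm_with_surrogate(encoder, images, clean_feats, eps=8 / 255):
    """One FGSM step that pushes image features away from their clean features.
    `encoder` is any differentiable image encoder used as the surrogate."""
    images = images.clone().detach().requires_grad_(True)
    feats = F.normalize(encoder(images), dim=-1)
    # Maximizing this loss decreases similarity to the clean-image features.
    loss = -F.cosine_similarity(feats, F.normalize(clean_feats, dim=-1)).mean()
    loss.backward()
    return (images + eps * images.grad.sign()).clamp(0, 1).detach()

# Toy usage with a stand-in encoder; a real attack would plug in a CLIP-like image tower.
enc = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 64))
x = torch.rand(4, 3, 32, 32)
x_adv = fgsm_with_surrogate(enc, x, enc(x).detach())
```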

Online Prototype Alignment for Few-shot Policy Transfer

Jun 12, 2023
Qi Yi, Rui Zhang, Shaohui Peng, Jiaming Guo, Yunkai Gao, Kaizhao Yuan, Ruizhi Chen, Siming Lan, Xing Hu, Zidong Du, Xishan Zhang, Qi Guo, Yunji Chen

Domain adaptation in reinforcement learning (RL) mainly deals with changes in observation when transferring a policy to a new environment. Many traditional domain adaptation approaches in RL learn a mapping function between the source and target domains in explicit or implicit ways. However, they typically require abundant data from the target domain, and they often rely on visual clues to learn the mapping function, which may fail when the source domain looks quite different from the target domain. To address these problems, we propose a novel framework, Online Prototype Alignment (OPA), which learns the mapping function based on the functional similarity of elements and achieves few-shot policy transfer within only several episodes. The key insight of OPA is to introduce an exploration mechanism that can interact with the unseen elements of the target domain in an efficient and purposeful manner, and then connect them with the seen elements of the source domain according to their functionalities (instead of visual clues). Experimental results show that when the target domain looks visually different from the source domain, OPA achieves better transfer performance with far fewer samples from the target domain, outperforming prior methods.
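
A minimal sketch of the alignment idea, assuming functional embeddings have already been collected for the source prototypes and the newly encountered target elements; `align_to_prototypes` is a hypothetical helper, not the paper's code.

```python
import numpy as np

def align_to_prototypes(target_feats: np.ndarray, source_protos: np.ndarray) -> np.ndarray:
    """Map each target-domain element to the most functionally similar
    source-domain prototype (returns indices into `source_protos`).
    Both inputs are (n, d) arrays of functional embeddings gathered during a
    few exploration episodes; visual appearance is never used."""
    t = target_feats / np.linalg.norm(target_feats, axis=1, keepdims=True)
    s = source_protos / np.linalg.norm(source_protos, axis=1, keepdims=True)
    return (t @ s.T).argmax(axis=1)  # cosine-similarity nearest prototype

# Toy usage: 3 unseen elements mapped onto 2 known prototypes.
print(align_to_prototypes(np.random.rand(3, 8), np.random.rand(2, 8)))
```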

* Accepted by ICML 2023 

Conceptual Reinforcement Learning for Language-Conditioned Tasks

Mar 09, 2023
Shaohui Peng, Xing Hu, Rui Zhang, Jiaming Guo, Qi Yi, Ruizhi Chen, Zidong Du, Ling Li, Qi Guo, Yunji Chen

Despite the broad application of deep reinforcement learning (RL), transferring and adapting a policy to unseen but similar environments remains a significant challenge. Recently, language-conditioned policies have been proposed to facilitate policy transfer by learning a joint representation of observation and text that captures compact, invariant information across environments. Existing language-conditioned RL methods often learn the joint representation as a simple latent layer over the given instances (episode-specific observation and text), which inevitably includes noisy or irrelevant information and causes spurious, instance-dependent correlations, hurting generalization performance and training efficiency. To address this issue, we propose a conceptual reinforcement learning (CRL) framework to learn concept-like joint representations for language-conditioned policies. The key insight is that concepts are compact and invariant representations in human cognition, formed by extracting similarities from numerous instances in the real world. In CRL, we propose a multi-level attention encoder and two mutual-information constraints for learning compact and invariant concepts. Verified in two challenging environments, RTFM and Messenger, CRL significantly improves training efficiency (by up to 70%) and generalization ability (by up to 30%) under new environment dynamics.
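
As a stand-in for the mutual-information constraints, the sketch below uses a generic InfoNCE-style contrastive loss between concept vectors and their paired instance embeddings; the paper's exact objectives and encoder are not reproduced here.

```python
import torch
import torch.nn.functional as F

def infonce(concepts: torch.Tensor, positives: torch.Tensor, tau: float = 0.1):
    """Generic InfoNCE bound: concept i should match positive i and mismatch
    every other sample in the batch (a stand-in for the MI constraints)."""
    c = F.normalize(concepts, dim=-1)
    p = F.normalize(positives, dim=-1)
    logits = c @ p.t() / tau              # (B, B) similarity matrix
    labels = torch.arange(c.size(0))      # diagonal pairs are the positives
    return F.cross_entropy(logits, labels)

# Toy usage: concept vectors vs. paired (observation, text) embeddings.
loss = infonce(torch.randn(16, 32), torch.randn(16, 32))
```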

* Accepted by AAAI 2023 

Unlearnable Clusters: Towards Label-agnostic Unlearnable Examples

Dec 31, 2022
Jiaming Zhang, Xingjun Ma, Qi Yi, Jitao Sang, Yugang Jiang, Yaowei Wang, Changsheng Xu

There is growing interest in developing unlearnable examples (UEs) against visual privacy leaks on the Internet. UEs are training samples with invisible but unlearnable noise added, which has been found to prevent unauthorized training of machine learning models. UEs are typically generated via a bilevel optimization framework with a surrogate model to remove (minimize) errors from the original samples and then applied to protect the data against unknown target models. However, existing UE generation methods all rely on an ideal assumption called label-consistency, where the hackers and protectors are assumed to hold the same label for a given sample. In this work, we propose and promote a more practical label-agnostic setting, where the hackers may exploit the protected data quite differently from the protectors; for example, an m-class unlearnable dataset held by the protector may be exploited by the hacker as an n-class dataset. Existing UE generation methods are rendered ineffective in this challenging setting. To tackle this challenge, we present a novel technique called Unlearnable Clusters (UCs) to generate label-agnostic unlearnable examples with cluster-wise perturbations. Furthermore, we propose to leverage Vision-and-Language Pre-trained Models (VLPMs) such as CLIP as the surrogate model to improve the transferability of the crafted UCs to diverse domains. We empirically verify the effectiveness of our proposed approach under a variety of settings with different datasets, target models, and even commercial platforms such as Microsoft Azure and Baidu PaddlePaddle.
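
A minimal sketch of cluster-wise perturbation, assuming surrogate-model features are available; here the per-cluster noise is simply sampled at random, whereas the paper optimizes it against the surrogate, so treat every name as hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_wise_perturbations(features, images, k=10, eps=8 / 255, seed=0):
    """Cluster surrogate features and give every cluster its own perturbation,
    so the noise is label-agnostic rather than tied to class labels."""
    rng = np.random.default_rng(seed)
    labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(features)
    deltas = rng.uniform(-eps, eps, size=(k,) + images.shape[1:])
    return np.clip(images + deltas[labels], 0.0, 1.0), labels

# Toy usage: 100 images of shape (3, 32, 32) with 64-d surrogate features.
ues, cluster_ids = cluster_wise_perturbations(
    np.random.rand(100, 64), np.random.rand(100, 3, 32, 32), k=5)
```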

Causality-driven Hierarchical Structure Discovery for Reinforcement Learning

Oct 13, 2022
Shaohui Peng, Xing Hu, Rui Zhang, Ke Tang, Jiaming Guo, Qi Yi, Ruizhi Chen, Xishan Zhang, Zidong Du, Ling Li, Qi Guo, Yunji Chen

Hierarchical reinforcement learning (HRL) effectively improves agents' exploration efficiency on tasks with sparse rewards, given the guidance of high-quality hierarchical structures (e.g., subgoals or options). However, automatically discovering high-quality hierarchical structures is still a great challenge. Previous HRL methods can hardly discover hierarchical structures in complex environments due to the low efficiency of the randomness-driven exploration paradigm. To address this issue, we propose CDHRL, a causality-driven hierarchical reinforcement learning framework that leverages causality-driven discovery instead of randomness-driven exploration to effectively build high-quality hierarchical structures in complicated environments. The key insight is that the causalities among environment variables are a natural fit for modeling reachable subgoals and their dependencies and can effectively guide the construction of high-quality hierarchical structures. Results in two complex environments, 2D-Minecraft and Eden, show that CDHRL significantly boosts exploration efficiency with the causality-driven paradigm.
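
A tiny sketch of how a discovered causal graph can induce a subgoal ordering: variables are arranged so each appears after its causal parents. The graph, the helper name, and the toy Minecraft-like example are illustrative assumptions, not the paper's algorithm.

```python
from graphlib import TopologicalSorter

def hierarchical_subgoals(causal_parents: dict[str, set[str]]) -> list[str]:
    """Order environment variables so each appears after its causal parents;
    earlier entries act as lower-level subgoals for later ones."""
    return list(TopologicalSorter(causal_parents).static_order())

# Toy causal graph in a Minecraft-like setting: wood -> plank/stick -> table.
print(hierarchical_subgoals({
    "wood": set(),
    "plank": {"wood"},
    "stick": {"wood"},
    "table": {"plank", "stick"},
}))
```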

* Accepted by NeurIPS 2022 

Object-Category Aware Reinforcement Learning

Oct 13, 2022
Qi Yi, Rui Zhang, Shaohui Peng, Jiaming Guo, Xing Hu, Zidong Du, Xishan Zhang, Qi Guo, Yunji Chen

Object-oriented reinforcement learning (OORL) is a promising way to improve sample efficiency and generalization ability over standard RL. Recent works that try to solve OORL tasks without additional feature engineering mainly focus on learning object representations and then solving tasks by reasoning over these representations. However, none of these works explicitly models the inherent similarity between different object instances of the same category. Objects of the same category should share similar functionalities; the category is therefore the most critical property of an object. Following this insight, we propose a novel framework named Object-Category Aware Reinforcement Learning (OCARL), which utilizes the category information of objects to facilitate both perception and reasoning. OCARL consists of three parts: (1) Category-Aware Unsupervised Object Discovery (UOD), which discovers objects as well as their corresponding categories; (2) Object-Category Aware Perception, which encodes the category information while remaining robust to the incompleteness of (1); and (3) Object-Centric Modular Reasoning, which adopts multiple independent, object-category-specific networks when reasoning over objects. Our experiments show that OCARL improves both sample efficiency and generalization in the OORL domain.
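
A minimal sketch of the object-category-specific modular reasoning idea in part (3): each object's features are routed through a network dedicated to its category and the outputs are pooled. The class name and its hyperparameters are hypothetical, not the paper's architecture.

```python
import torch
import torch.nn as nn

class CategoryModularReasoner(nn.Module):
    """Route each object's features through a network dedicated to its
    (discovered) category, then pool the per-object outputs."""

    def __init__(self, num_categories: int, feat_dim: int, hidden: int = 64):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
             for _ in range(num_categories)]
        )

    def forward(self, obj_feats: torch.Tensor, categories: torch.Tensor) -> torch.Tensor:
        # obj_feats: (num_objects, feat_dim); categories: (num_objects,) int labels.
        outs = torch.stack([self.experts[int(c)](f) for f, c in zip(obj_feats, categories)])
        return outs.mean(dim=0)  # pool into a single state embedding

# Toy usage: 5 discovered objects from 3 categories.
model = CategoryModularReasoner(num_categories=3, feat_dim=16)
state = model(torch.randn(5, 16), torch.randint(0, 3, (5,)))
```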

* Accepted by NeurIPS 2022 

JPEG Compression-Resistant Low-Mid Adversarial Perturbation against Unauthorized Face Recognition System

Jun 19, 2022
Jiaming Zhang, Qi Yi, Jitao Sang

It has been observed that unauthorized use of face recognition systems raises privacy problems. Adversarial perturbations provide one possible solution to this issue. A critical issue in exploiting adversarial perturbations against unauthorized face recognition systems is that images uploaded to the web are processed by JPEG compression, which weakens the effectiveness of the adversarial perturbation. Existing JPEG compression-resistant methods fail to achieve a balance among compression resistance, transferability, and attack effectiveness. To this end, we propose a more natural solution called low-frequency adversarial perturbation (LFAP). Instead of restricting the adversarial perturbations, we regularize the source model to rely on more low-frequency features through adversarial training. Moreover, to better influence the model across different frequency components, we propose the refined low-mid frequency adversarial perturbation (LMFAP), which treats the mid-frequency components as a productive complement. We design a variety of settings in this study to simulate real-world application scenarios, including cross backbones, supervisory heads, training datasets, and testing datasets. Quantitative and qualitative experimental results validate the effectiveness of the proposed solutions.
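
To illustrate the frequency view the method builds on (the paper itself regularizes the source model via adversarial training rather than filtering perturbations directly), here is a small sketch that extracts the low-frequency band of an image with an FFT mask; the helper and its `keep_ratio` parameter are assumptions.

```python
import numpy as np

def low_frequency_component(image: np.ndarray, keep_ratio: float = 0.25) -> np.ndarray:
    """Keep only the central low-frequency band of a (H, W) grayscale image.
    Perturbations built from such components survive JPEG compression better,
    since JPEG mainly discards high frequencies."""
    h, w = image.shape
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    mask = np.zeros_like(spectrum)
    ch, cw = h // 2, w // 2
    rh, rw = int(h * keep_ratio / 2), int(w * keep_ratio / 2)
    mask[ch - rh:ch + rh, cw - rw:cw + rw] = 1.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))

# Toy usage on a random 64x64 image.
low = low_frequency_component(np.random.rand(64, 64))
```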

Towards Adversarial Attack on Vision-Language Pre-training Models

Jun 19, 2022
Jiaming Zhang, Qi Yi, Jitao Sang

While vision-language pre-training (VLP) models have shown revolutionary improvements on various vision-language (V+L) tasks, their adversarial robustness remains largely unexplored. This paper studies adversarial attacks on popular VLP models and V+L tasks. First, we analyze the performance of adversarial attacks under different settings. By examining the influence of different perturbed objects and attack targets, we draw key observations that serve as guidance for both designing strong multimodal adversarial attacks and constructing robust VLP models. Second, we propose a novel multimodal attack method on VLP models called Collaborative Multimodal Adversarial Attack (Co-Attack), which collectively carries out attacks on the image modality and the text modality. Experimental results demonstrate that the proposed method achieves improved attack performance on different V+L downstream tasks and VLP models. The analysis and the novel attack method hopefully provide new insight into the adversarial robustness of VLP models, contributing to their safe and reliable deployment in more real-world scenarios.
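
A very loose sketch of coordinating the two modalities, assuming a CLIP-like model with separate image and text encoders: a text variant is first chosen adversarially, then an FGSM step pushes the image embedding away from it. This is an illustrative stand-in, not the Co-Attack objective from the paper.

```python
import torch
import torch.nn.functional as F

def co_attack_step(img_enc, txt_enc, image, text_variants, eps=2 / 255):
    """Pick the text variant least aligned with the image (a crude stand-in
    for a text attack), then take an FGSM step that pushes the image
    embedding away from it, so the two perturbations do not cancel out."""
    with torch.no_grad():
        img_feat = F.normalize(img_enc(image), dim=-1)
        txt_feats = F.normalize(txt_enc(text_variants), dim=-1)
        worst_txt = txt_feats[(img_feat @ txt_feats.t()).argmin(dim=-1)]
    image = image.clone().detach().requires_grad_(True)
    adv_feat = F.normalize(img_enc(image), dim=-1)
    loss = -F.cosine_similarity(adv_feat, worst_txt).mean()
    loss.backward()
    return (image + eps * image.grad.sign()).clamp(0, 1).detach()

# Toy usage with stand-in encoders; a real attack would use a VLP model's towers.
img_enc = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 32))
txt_enc = torch.nn.Linear(16, 32)
adv = co_attack_step(img_enc, txt_enc, torch.rand(1, 3, 32, 32), torch.randn(4, 16))
```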
