Wei Xie

Nonrigid Object Contact Estimation With Regional Unwrapping Transformer

Aug 30, 2023
Wei Xie, Zimeng Zhao, Shiying Li, Binghui Zuo, Yangang Wang

Acquiring contact patterns between hands and nonrigid objects is a common concern in the vision and robotics communities. However, existing learning-based methods focus mainly on contact with rigid objects from monocular images. When adapting them to nonrigid contact, a major problem is that the existing contact representation is restricted by the geometry of the object: contact neighborhoods are stored in an unordered manner, and contact features are difficult to align with image cues. To address this, at the core of our approach lies a novel hand-object contact representation called RUPs (Region Unwrapping Profiles), which unwraps the roughly estimated hand-object surfaces into multiple high-resolution 2D regional profiles. The region grouping strategy is consistent with the hand kinematic bone division, because the bones are the primitive initiators of a composite contact pattern. Based on this representation, our Regional Unwrapping Transformer (RUFormer) learns correlation priors across regions from monocular inputs and predicts the corresponding contact and deformation transformations. Our experiments demonstrate that the proposed framework can robustly estimate deformation degrees and deformation transformations, making it suitable for both nonrigid and rigid contact.
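
To make the representation concrete, the sketch below illustrates one way regional profiles could feed a transformer: each unwrapped profile becomes a token, a transformer encoder models cross-region correlations, and per-region heads predict contact and a deformation transform. The region count, profile resolution, and head designs are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

NUM_REGIONS = 16   # e.g. one region per hand kinematic bone (assumed)
PROFILE_RES = 32   # resolution of each unwrapped 2D regional profile (assumed)

class RegionTransformer(nn.Module):
    def __init__(self, dim=256, heads=8, layers=4):
        super().__init__()
        # embed each flattened regional profile into one token
        self.embed = nn.Linear(PROFILE_RES * PROFILE_RES, dim)
        self.region_pos = nn.Parameter(torch.zeros(1, NUM_REGIONS, dim))
        enc_layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, layers)
        # per-region heads: per-pixel contact probability and a deformation transform
        self.contact_head = nn.Linear(dim, PROFILE_RES * PROFILE_RES)
        self.deform_head = nn.Linear(dim, 9)   # e.g. 6D rotation + 3D translation (assumed)

    def forward(self, profiles):
        # profiles: (B, NUM_REGIONS, PROFILE_RES, PROFILE_RES) unwrapped surface maps
        B = profiles.shape[0]
        tokens = self.embed(profiles.flatten(2)) + self.region_pos
        tokens = self.encoder(tokens)                        # cross-region correlation priors
        contact = torch.sigmoid(self.contact_head(tokens))   # contact in each profile
        deform = self.deform_head(tokens)                     # per-region transform parameters
        return contact.view(B, NUM_REGIONS, PROFILE_RES, PROFILE_RES), deform

contact, deform = RegionTransformer()(torch.randn(2, NUM_REGIONS, PROFILE_RES, PROFILE_RES))
print(contact.shape, deform.shape)   # (2, 16, 32, 32) and (2, 16, 9)
```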

* Accepted by ICCV2023 

Reconstructing Interacting Hands with Interaction Prior from Monocular Images

Aug 27, 2023
Binghui Zuo, Zimeng Zhao, Wenqian Sun, Wei Xie, Zhou Xue, Yangang Wang

Reconstructing interacting hands from monocular images is indispensable in AR/VR applications. Most existing solutions rely on the accurate localization of each skeleton joint. However, these methods tend to be unreliable due to severe occlusion and the confusing similarity among adjacent hand parts. This also defies human perception, because humans can quickly imitate an interaction pattern without localizing all joints. Our key idea is to first construct a two-hand interaction prior and recast the interaction reconstruction task as conditional sampling from this prior. To cover more interaction states, a large-scale multimodal dataset with physical plausibility is proposed. A VAE is then trained to condense these interaction patterns into latent codes in a prior distribution. To find image cues that contribute to sampling from the interaction prior, we propose the interaction adjacency heatmap (IAH). Compared with a joint-wise heatmap for localization, IAH assigns denser visible features to invisible joints; compared with an all-in-one visible heatmap, it provides more fine-grained local interaction information in each interaction region. Finally, the correlations between the extracted features and the corresponding interaction codes are linked by a ViT module. Comprehensive evaluations on benchmark datasets verify the effectiveness of this framework. The code and dataset are publicly available at https://github.com/binghui-z/InterPrior_pytorch
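
As a rough illustration of "reconstruction as conditional sampling from an interaction prior", the sketch below trains a small VAE over two-hand parameters and then maps image features to its latent space at inference. The dimensions, hand parameterization, and conditioning module are assumptions, not the released InterPrior code.

```python
import torch
import torch.nn as nn

LATENT = 64            # dimensionality of the interaction prior (assumed)
HAND_PARAMS = 2 * 61   # e.g. MANO-style pose + shape for two hands (assumed)

class InteractionVAE(nn.Module):
    """Condenses two-hand interaction patterns into a latent prior distribution."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(HAND_PARAMS, 256), nn.ReLU(),
                                 nn.Linear(256, 2 * LATENT))   # mean and log-variance
        self.dec = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(),
                                 nn.Linear(256, HAND_PARAMS))

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization trick
        return self.dec(z), mu, logvar

class ConditionalSampler(nn.Module):
    """Maps image cues (e.g. IAH features) to a latent code, then decodes two hands."""
    def __init__(self, vae, feat_dim=512):
        super().__init__()
        self.vae, self.to_latent = vae, nn.Linear(feat_dim, LATENT)

    def forward(self, image_feats):
        return self.vae.dec(self.to_latent(image_feats))   # condition the prior on image cues

vae = InteractionVAE()
recon, mu, logvar = vae(torch.randn(4, HAND_PARAMS))    # prior-training step (sketch)
hands = ConditionalSampler(vae)(torch.randn(4, 512))    # inference from image features
print(recon.shape, hands.shape)                         # (4, 122) and (4, 122)
```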

* Accepted by ICCV2023 

Self-Deception: Reverse Penetrating the Semantic Firewall of Large Language Models

Aug 25, 2023
Zhenhua Wang, Wei Xie, Kai Chen, Baosheng Wang, Zhiwen Gui, Enze Wang

Large language models (LLMs), such as ChatGPT, have emerged with astonishing capabilities approaching artificial general intelligence. While providing convenience for various societal needs, LLMs have also lowered the cost of generating harmful content. Consequently, LLM developers have deployed semantic-level defenses to recognize and reject prompts that may lead to inappropriate content. Unfortunately, these defenses are not foolproof, and some attackers have crafted "jailbreak" prompts that temporarily hypnotize the LLM into forgetting content defense rules and answering any improper questions. To date, there is no clear explanation of the principles behind these semantic-level attacks and defenses in either industry or academia. This paper investigates the LLM jailbreak problem and proposes an automatic jailbreak method for the first time. We introduce the concept of a semantic firewall and describe three technical approaches to implementing it. Inspired by attacks that penetrate traditional firewalls through reverse tunnels, we propose a "self-deception" attack that can bypass the semantic firewall by inducing the LLM to generate prompts that facilitate jailbreak. We generated a total of 2,520 attack payloads in six languages (English, Russian, French, Spanish, Chinese, and Arabic) across seven virtual scenarios, targeting the three most common types of violations: violence, hate, and pornography. The experiments were conducted on two models, GPT-3.5-Turbo and GPT-4. The success rates on the two models were 86.2% and 67%, and the failure rates were 4.7% and 2.2%, respectively, which highlights the effectiveness of the proposed attack method. All experimental code and raw data will be released as open source to inspire future research. We believe that manipulating AI behavior through carefully crafted prompts will become an important research direction in the future.

* Serious errors were found in the experiment, which may lead to the overturning of the overall conclusions of the paper 

Implicit Obstacle Map-driven Indoor Navigation Model for Robust Obstacle Avoidance

Aug 24, 2023
Wei Xie, Haobo Jiang, Shuo Gu, Jin Xie

Robust obstacle avoidance is one of the critical steps for successful goal-driven indoor navigation tasks. Because obstacles can be missing from the visual image and detections can be missed, visual image-based obstacle avoidance techniques still suffer from unsatisfactory robustness. To mitigate this, we propose a novel implicit obstacle map-driven indoor navigation framework for robust obstacle avoidance, in which an implicit obstacle map is learned from historical trial-and-error experience rather than from the visual image. To further improve navigation efficiency, a non-local target memory aggregation module is designed, which leverages a non-local network to model the intrinsic relationship between the target semantics and the target orientation clues during navigation, so as to mine the most target-correlated object clues for the navigation decision. Extensive experimental results on the AI2-THOR and RoboTHOR benchmarks verify the excellent obstacle avoidance and navigation efficiency of our proposed method. The core source code is available at https://github.com/xwaiyy123/object-navigation.
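
The non-local aggregation step can be pictured as a single attention operation in which a target embedding queries the memorized object clues; the sketch below is an illustrative reading of that module, with all dimensions assumed and no relation to the released code.

```python
import torch
import torch.nn as nn

class NonLocalTargetAggregation(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.q = nn.Linear(dim, dim)   # query from the target semantic + orientation clue
        self.k = nn.Linear(dim, dim)   # keys from the memorized object clues
        self.v = nn.Linear(dim, dim)

    def forward(self, target_feat, memory):
        # target_feat: (B, dim); memory: (B, N, dim) object clues gathered during navigation
        q = self.q(target_feat).unsqueeze(1)                                 # (B, 1, dim)
        attn = (q @ self.k(memory).transpose(1, 2)) / memory.shape[-1] ** 0.5
        weights = attn.softmax(dim=-1)                 # relevance of each stored clue
        return (weights @ self.v(memory)).squeeze(1)   # (B, dim) target-correlated clue

agg = NonLocalTargetAggregation()
out = agg(torch.randn(2, 128), torch.randn(2, 32, 128))
print(out.shape)   # torch.Size([2, 128])
```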

* 9 pages, 7 figures, 43 references. This paper has been accepted for ACM MM 2023 

Low-Complexity Acoustic Scene Classification Using Data Augmentation and Lightweight ResNet

Jun 03, 2023
Yanxiong Li, Wenchang Cao, Wei Xie, Qisheng Huang, Wenfeng Pang, Qianhua He

We present a work on low-complexity acoustic scene classification (ASC) with multiple devices, namely subtask A of Task 1 of the DCASE2021 challenge. This subtask focuses on classifying audio samples from multiple devices with a low-complexity model, where two main difficulties need to be overcome. First, the audio samples are recorded by different devices, so there is a mismatch of recording devices across audio samples. We reduce the negative impact of this mismatch through several effective strategies, including data augmentation (e.g., mix-up, spectrum correction, pitch shift), a multi-patch network structure, and channel attention. Second, the model size should be smaller than a threshold (e.g., the 128 KB required by the DCASE2021 challenge). To meet this condition, we adopt a ResNet with both depthwise separable convolution and channel attention as the backbone network, and perform model compression. In summary, we propose a low-complexity ASC method using data augmentation and a lightweight ResNet. Evaluated on the official development and evaluation datasets, our method obtains classification accuracy scores of 71.6% and 66.7%, respectively, and log-loss scores of 1.038 and 1.136, respectively. Our final model size is 110.3 KB, which is smaller than the maximum of 128 KB.
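
The sketch below shows two of the named ingredients in isolation: a depthwise-separable convolution block with channel (squeeze-and-excitation style) attention, and a rough parameter-footprint check against the 128 KB budget. Channel widths and the attention design are assumptions, not the submitted system.

```python
import torch
import torch.nn as nn

class DSConvBlock(nn.Module):
    def __init__(self, cin, cout, reduction=4):
        super().__init__()
        self.depthwise = nn.Conv2d(cin, cin, 3, padding=1, groups=cin, bias=False)
        self.pointwise = nn.Conv2d(cin, cout, 1, bias=False)
        self.bn = nn.BatchNorm2d(cout)
        self.act = nn.ReLU(inplace=True)
        # channel attention: squeeze (global pooling) then excite (per-channel gate)
        self.attn = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                  nn.Conv2d(cout, cout // reduction, 1), nn.ReLU(inplace=True),
                                  nn.Conv2d(cout // reduction, cout, 1), nn.Sigmoid())

    def forward(self, x):
        x = self.act(self.bn(self.pointwise(self.depthwise(x))))
        return x * self.attn(x)

net = nn.Sequential(DSConvBlock(1, 24), DSConvBlock(24, 48), DSConvBlock(48, 64))
params = sum(p.numel() for p in net.parameters())
print(params, "params ->", params * 2 / 1024, "KB at 16-bit")   # rough size estimate
```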

* 5 pages, 5 figures, 4 tables. Accepted for publication in the 16th IEEE International Conference on Signal Processing (IEEE ICSP) 

Few-shot Class-incremental Audio Classification Using Stochastic Classifier

Jun 03, 2023
Yanxiong Li, Wenchang Cao, Jialong Li, Wei Xie, Qianhua He

Current audio classification methods generally assume that the number of classes is fixed and that the model can recognize only pre-given classes. When new classes emerge, the model needs to be retrained with adequate samples of all classes. If new classes continually emerge, these methods will not work well and may even become infeasible. In this study, we propose a method for few-shot class-incremental audio classification, which continually recognizes new classes and remembers old ones. The proposed model consists of an embedding extractor and a stochastic classifier. The former is trained in the base session and frozen in incremental sessions, while the latter is incrementally expanded in all sessions. Two datasets (NS-100 and LS-100) are built by choosing samples from the audio corpora NSynth and LibriSpeech, respectively. Results show that our method exceeds four baseline methods in average accuracy and performance dropping rate. Code is at https://github.com/vinceasvp/meta-sc.
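
The following sketch illustrates one plausible form of such a stochastic classifier: each class keeps a weight distribution whose samples are used during training and whose mean is used at inference, and new class rows are appended in each incremental session. The cosine-logit formulation and dimensions are assumptions, not the released meta-sc code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StochasticClassifier(nn.Module):
    def __init__(self, feat_dim, num_base_classes):
        super().__init__()
        self.mu = nn.Parameter(torch.randn(num_base_classes, feat_dim) * 0.01)
        self.logvar = nn.Parameter(torch.zeros(num_base_classes, feat_dim))

    def expand(self, num_new_classes):
        """Incremental session: append weight distributions for the new classes."""
        feat_dim = self.mu.shape[1]
        self.mu = nn.Parameter(torch.cat([self.mu.data,
                                          torch.randn(num_new_classes, feat_dim) * 0.01]))
        self.logvar = nn.Parameter(torch.cat([self.logvar.data,
                                              torch.zeros(num_new_classes, feat_dim)]))

    def forward(self, feats):
        if self.training:   # sample classifier weights (reparameterization trick)
            w = self.mu + torch.randn_like(self.mu) * (0.5 * self.logvar).exp()
        else:               # deterministic mean weights at inference
            w = self.mu
        return F.linear(F.normalize(feats, dim=-1), F.normalize(w, dim=-1))   # cosine logits

clf = StochasticClassifier(feat_dim=128, num_base_classes=60)
clf.expand(5)                               # a 5-way incremental session
print(clf(torch.randn(8, 128)).shape)       # torch.Size([8, 65])
```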

* 5 pages, 3 figures, 4 tables. Accepted for publication in INTERSPEECH 2023 

Few-shot Class-incremental Audio Classification Using Dynamically Expanded Classifier with Self-attention Modified Prototypes

May 31, 2023
Yanxiong Li, Wenchang Cao, Wei Xie, Jialong Li, Emmanouil Benetos

Most existing methods for audio classification assume that the vocabulary of audio classes to be classified is fixed. When novel (unseen) audio classes appear, audio classification systems need to be retrained with abundant labeled samples of all audio classes to recognize both base (initial) and novel audio classes. If novel audio classes continue to appear, existing methods for audio classification become inefficient and even infeasible. In this work, we propose a method for few-shot class-incremental audio classification, which can continually recognize novel audio classes without forgetting old ones. The framework of our method mainly consists of two parts, an embedding extractor and a classifier, and their constructions are decoupled. The embedding extractor is the backbone of a ResNet-based network, which is frozen after construction by a training strategy that uses only samples of base audio classes. The classifier, consisting of prototypes, is expanded by a prototype adaptation network with a few samples of novel audio classes in incremental sessions. Both labeled support samples and unlabeled query samples are used to train the prototype adaptation network and update the classifier, since they are informative for audio classification. Three audio datasets, named NSynth-100, FSC-89 and LS-100, are built by choosing samples from the audio corpora NSynth, FSD-MIX-CLIP and LibriSpeech, respectively. Results show that our method exceeds baseline methods in average accuracy and performance dropping rate. In addition, it is competitive with baseline methods in computational complexity and memory requirement. The code for our method is given at https://github.com/vinceasvp/FCAC.
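
To make the classifier-expansion step concrete, the sketch below computes mean prototypes from the few labeled support embeddings, lets them attend over unlabeled query embeddings through a single self-attention layer standing in for the prototype adaptation network, and appends them to the existing classifier. Module choices and sizes are assumptions, not the released FCAC code.

```python
import torch
import torch.nn as nn

def mean_prototypes(support_emb, support_labels, num_classes):
    # support_emb: (N, D) frozen-backbone embeddings; one mean vector per novel class
    return torch.stack([support_emb[support_labels == c].mean(0) for c in range(num_classes)])

class PrototypeAdapter(nn.Module):
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, protos, query_emb):
        # let prototypes attend over [prototypes; unlabeled queries] to refine themselves
        context = torch.cat([protos, query_emb], dim=0).unsqueeze(0)
        refined, _ = self.attn(protos.unsqueeze(0), context, context)
        return refined.squeeze(0)

D, WAYS, SHOTS = 128, 5, 5
support = torch.randn(WAYS * SHOTS, D)
labels = torch.arange(WAYS).repeat_interleave(SHOTS)
protos = PrototypeAdapter(D)(mean_prototypes(support, labels, WAYS), torch.randn(30, D))
classifier = torch.cat([torch.randn(60, D), protos])   # expand the old 60-class classifier
print(classifier.shape)                                # torch.Size([65, 128])
```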

* 13 pages, 8 figures, 12 tables. Accepted for publication in IEEE TMM 

Few-shot Class-incremental Audio Classification Using Adaptively-refined Prototypes

May 29, 2023
Wei Xie, Yanxiong Li, Qianhua He, Wenchang Cao, Tuomas Virtanen

New classes of sounds constantly emerge with only a few samples, making it challenging for models to adapt to dynamic acoustic environments. This challenge motivates us to address the new problem of few-shot class-incremental audio classification. This study aims to enable a model to continually recognize new classes of sounds from a few training samples of the new classes while remembering the learned ones. To this end, we propose a method to generate discriminative prototypes and use them to expand the model's classifier so that it recognizes sounds of both new and learned classes. The model is first trained with a random episodic training strategy, and its backbone is then used to generate the prototypes. A dynamic relation projection module refines the prototypes to enhance their discriminability. Results on two datasets (derived from the corpora of NSynth and FSD-MIX-CLIPS) show that the proposed method exceeds three state-of-the-art methods in average accuracy and performance dropping rate.
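
The snippet below is an illustrative stand-in for prototype refinement and classifier expansion: new-class prototypes are nudged away from the learned prototypes they overlap with most, then concatenated into the classifier for nearest-prototype scoring. The actual dynamic relation projection module differs; this only conveys the intent.

```python
import torch
import torch.nn.functional as F

def refine_prototypes(new_protos, all_protos, step=0.1):
    # relation = cosine similarity between each new prototype and every existing prototype
    relation = F.normalize(new_protos, dim=-1) @ F.normalize(all_protos, dim=-1).T
    # move each new prototype away from the learned classes it currently overlaps with
    pull = relation @ all_protos / relation.sum(dim=-1, keepdim=True)
    return F.normalize(new_protos - step * pull, dim=-1)

old = F.normalize(torch.randn(55, 128), dim=-1)            # prototypes of learned classes
new = F.normalize(torch.randn(5, 128), dim=-1)             # mean prototypes of the new classes
expanded = torch.cat([old, refine_prototypes(new, old)])   # expanded 60-class classifier
logits = F.normalize(torch.randn(8, 128), dim=-1) @ expanded.T   # nearest-prototype scores
print(expanded.shape, logits.shape)                        # (60, 128) and (8, 60)
```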

* 5 pages, 2 figures. Accepted by Interspeech 2023 

HMDO: Markerless Multi-view Hand Manipulation Capture with Deformable Objects

Jan 18, 2023
Wei Xie, Zhipeng Yu, Zimeng Zhao, Binghui Zuo, Yangang Wang

We construct HMDO (Hand Manipulation with Deformable Objects), the first markerless deformable interaction dataset recording interactive motions of hands and deformable objects. Captured with our multi-view capture system, it covers deformable interactions from multiple perspectives, with various object shapes and diverse interaction forms. Our motivation is the current lack of hand and deformable object interaction datasets, since 3D hand and deformable object reconstruction is challenging: mainly due to mutual occlusion, the interaction area is difficult to observe, the visual features of the hand and the object are entangled, and reconstructing the deformation of the interaction area is difficult. To tackle this challenge, we propose a method to annotate our captured data. Our key idea is to use estimated hand features to guide the global pose estimation of the object, and then to optimize the object's deformation by analyzing the relationship between the hand and the object. Through comprehensive evaluation, the proposed method can reconstruct interactive motions of hands and deformable objects with high quality. HMDO currently consists of 21,600 frames over 12 sequences. In the future, this dataset could boost research on learning-based reconstruction of deformable interaction scenes.
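
As a conceptual sketch of the annotation idea (not the released pipeline), the code below jointly fits an object's global pose and per-vertex deformation offsets so that hand-guided contact points stay attached while the deformation remains small; the loss terms, optimizer, and contact pairing are assumptions.

```python
import torch

def skew(v):   # 3-vector -> skew-symmetric matrix (exponential-map rotation)
    x, y, z = v
    zero = v.new_tensor(0.)
    return torch.stack([torch.stack([zero, -z, y]),
                        torch.stack([z, zero, -x]),
                        torch.stack([-y, x, zero])])

def annotate_frame(obj_verts, contact_pairs, steps=200, lr=1e-2):
    """contact_pairs: (object vertex indices, hand contact points) from the estimated hand."""
    idx, hand_pts = contact_pairs
    rot = torch.zeros(3, requires_grad=True)      # global rotation (axis-angle, exp map)
    trans = torch.zeros(3, requires_grad=True)    # global translation
    offsets = torch.zeros_like(obj_verts, requires_grad=True)   # per-vertex deformation
    opt = torch.optim.Adam([rot, trans, offsets], lr=lr)
    for _ in range(steps):
        R = torch.matrix_exp(skew(rot))
        deformed = (obj_verts + offsets) @ R.T + trans
        contact_loss = ((deformed[idx] - hand_pts) ** 2).sum()   # stay attached to the hand
        smooth_loss = (offsets ** 2).mean()                      # keep deformation plausible
        opt.zero_grad()
        (contact_loss + 0.1 * smooth_loss).backward()
        opt.step()
    with torch.no_grad():
        return (obj_verts + offsets) @ torch.matrix_exp(skew(rot)).T + trans

verts = torch.rand(500, 3)
out = annotate_frame(verts, (torch.tensor([0, 10, 20]), torch.rand(3, 3)))
print(out.shape)   # torch.Size([500, 3])
```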
