Xiangmin Xu

Vesper: A Compact and Effective Pretrained Model for Speech Emotion Recognition

Jul 20, 2023
Weidong Chen, Xiaofen Xing, Peihao Chen, Xiangmin Xu

This paper presents a paradigm that adapts general large-scale pretrained models (PTMs) to the speech emotion recognition task. Although PTMs shed new light on artificial general intelligence, they are constructed with general tasks in mind, and thus, their efficacy for specific tasks can be further improved. Additionally, employing PTMs in practical applications can be challenging due to their considerable size. These limitations spawn another research direction, namely, optimizing large-scale PTMs for specific tasks to generate task-specific PTMs that are both compact and effective. In this paper, we focus on the speech emotion recognition task and propose an improved emotion-specific pretrained encoder called Vesper. Vesper is pretrained on a speech dataset based on WavLM and takes into account emotional characteristics. To enhance sensitivity to emotional information, Vesper employs an emotion-guided masking strategy to identify the regions that need masking. Subsequently, Vesper employs hierarchical and cross-layer self-supervision to improve its ability to capture acoustic and semantic representations, both of which are crucial for emotion recognition. Experimental results on the IEMOCAP, MELD, and CREMA-D datasets demonstrate that Vesper with 4 layers outperforms WavLM Base with 12 layers, and the performance of Vesper with 12 layers surpasses that of WavLM Large with 24 layers.
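
As a rough illustration of how an emotion-guided masking strategy might pick regions, the sketch below masks high-energy frame spans of a waveform. The energy proxy, frame sizes, and masking ratio are assumptions for illustration only, not the criterion actually used by Vesper.

```python
import numpy as np

def emotion_guided_mask(wave, frame_len=400, hop=320,
                        mask_ratio=0.5, span_frames=5, rng=None):
    """Toy emotion-guided masking: prefer masking high-energy frame spans.

    The real Vesper criterion is not reproduced here; frame-level RMS energy
    is used as a simple stand-in for "emotionally salient" regions.
    """
    rng = rng or np.random.default_rng(0)
    n_frames = 1 + (len(wave) - frame_len) // hop
    energy = np.array([
        np.sqrt(np.mean(wave[i * hop:i * hop + frame_len] ** 2))
        for i in range(n_frames)
    ])
    # Sample span start positions with probability proportional to frame energy.
    probs = energy / energy.sum()
    n_spans = int(mask_ratio * n_frames / span_frames)
    starts = rng.choice(n_frames, size=n_spans, replace=False, p=probs)
    mask = np.zeros(n_frames, dtype=bool)
    for s in starts:
        mask[s:s + span_frames] = True
    return mask  # True = frame is masked during pretraining

wave = np.random.randn(16000)  # 1 s of fake 16 kHz audio
print(emotion_guided_mask(wave).sum(), "frames masked")
```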

* 13 pages, 5 figures, 8 tables 

DWFormer: Dynamic Window transFormer for Speech Emotion Recognition

Mar 03, 2023
Shuaiqi Chen, Xiaofen Xing, Weibin Zhang, Weidong Chen, Xiangmin Xu

Speech emotion recognition is crucial to human-computer interaction. The temporal regions that represent different emotions are scattered across different parts of a speech sample. Moreover, the temporal scales of important information may vary over a large range within and across speech segments. Although transformer-based models have made progress in this field, existing models cannot precisely locate important regions at different temporal scales. To address this issue, we propose the Dynamic Window transFormer (DWFormer), a new architecture that leverages temporal importance by dynamically splitting samples into windows. A self-attention mechanism is applied within each window to capture temporally important information locally in a fine-grained way. Cross-window information interaction is also taken into account for global communication. DWFormer is evaluated on both the IEMOCAP and MELD datasets. Experimental results show that the proposed model achieves better performance than the previous state-of-the-art methods.
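
A minimal sketch of the within-window attention idea, assuming the window boundaries are already given; the boundary-prediction network of DWFormer is not reproduced, and the importance heuristic below is purely illustrative.

```python
import torch
import torch.nn.functional as F

def windowed_self_attention(x, boundaries):
    """Apply self-attention independently inside each dynamic window.

    x: (T, D) frame features; boundaries: sorted cut indices, e.g. [0, 17, 42, T].
    Only the "attention within dynamic windows" idea is sketched here.
    """
    out = torch.empty_like(x)
    for s, e in zip(boundaries[:-1], boundaries[1:]):
        w = x[s:e]                                    # frames in one window
        attn = F.softmax(w @ w.T / w.shape[-1] ** 0.5, dim=-1)
        out[s:e] = attn @ w                           # local aggregation
    return out

T, D = 100, 64
x = torch.randn(T, D)
# Hypothetical importance-driven cuts (in DWFormer these are data-dependent).
importance = x.norm(dim=-1)
cuts = sorted(set([0] + importance.topk(3).indices.tolist() + [T]))
print(windowed_self_attention(x, cuts).shape)
```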

* 4 pages, 5 figures, 3 tables, accepted by 2023 International Conference on Acoustics, Speech, and Signal Processing (ICASSP2023) 

DST: Deformable Speech Transformer for Emotion Recognition

Feb 27, 2023
Weidong Chen, Xiaofen Xing, Xiangmin Xu, Jianxin Pang, Lan Du

Enabled by multi-head self-attention, the Transformer has exhibited remarkable results in speech emotion recognition (SER). Compared to the original full attention mechanism, window-based attention is more effective in learning fine-grained features while greatly reducing model redundancy. However, emotional cues are present at multiple granularities, so a pre-defined fixed window can severely degrade model flexibility. In addition, it is difficult to obtain the optimal window settings manually. In this paper, we propose a Deformable Speech Transformer, named DST, for the SER task. DST determines the window sizes to use conditioned on the input speech via a lightweight decision network. Meanwhile, data-dependent offsets derived from acoustic features are utilized to adjust the positions of the attention windows, allowing DST to adaptively discover and attend to the valuable information embedded in the speech. Extensive experiments on IEMOCAP and MELD demonstrate the superiority of DST.
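
The sketch below illustrates only the general notion of data-dependent attention windows: each frame predicts an offset that shifts the centre of its local window. The fixed window size, the rounding of offsets, and the single-head attention are simplifications; the actual DST also selects window sizes with a decision network and keeps the offsets differentiable.

```python
import torch
import torch.nn as nn

class DeformableWindowSketch(nn.Module):
    """Minimal sketch of data-dependent attention windows (not the full DST)."""
    def __init__(self, dim, window=8):
        super().__init__()
        self.offset = nn.Linear(dim, 1)   # one scalar offset per frame
        self.window = window

    def forward(self, x):                  # x: (T, D)
        T, D = x.shape
        centres = torch.arange(T, dtype=torch.float) + self.offset(x).squeeze(-1)
        out = torch.empty_like(x)
        for t in range(T):
            # Rounding breaks gradients; this is illustrative only.
            c = int(centres[t].round().clamp(0, T - 1))
            s, e = max(0, c - self.window // 2), min(T, c + self.window // 2)
            keys = x[s:e]
            attn = torch.softmax((x[t] @ keys.T) / D ** 0.5, dim=-1)
            out[t] = attn @ keys
        return out

m = DeformableWindowSketch(dim=64)
print(m(torch.randn(50, 64)).shape)
```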

* 5 pages, 4 figures, 2 tables, accepted by ICASSP 2023 

SpeechFormer++: A Hierarchical Efficient Framework for Paralinguistic Speech Processing

Feb 27, 2023
Weidong Chen, Xiaofen Xing, Xiangmin Xu, Jianxin Pang, Lan Du

Paralinguistic speech processing is important in addressing many issues, such as sentiment and neurocognitive disorder analyses. Recently, the Transformer has achieved remarkable success in natural language processing and has been adapted to speech. However, previous works on the Transformer in the speech field have not incorporated the properties of speech, leaving its full potential unexplored. In this paper, we consider the characteristics of speech and propose a general structure-based framework, called SpeechFormer++, for paralinguistic speech processing. More concretely, following the component relationship in the speech signal, we design a unit encoder to model the intra- and inter-unit information (i.e., frames, phones, and words) efficiently. According to the hierarchical relationship, we utilize merging blocks to generate features at different granularities, which is consistent with the structural pattern of the speech signal. Moreover, a word encoder is introduced to integrate word-grained features into each unit encoder, which effectively balances fine-grained and coarse-grained information. SpeechFormer++ is evaluated on speech emotion recognition (IEMOCAP & MELD), depression classification (DAIC-WOZ), and Alzheimer's disease detection (Pitt) tasks. The results show that SpeechFormer++ outperforms the standard Transformer while greatly reducing the computational cost. Furthermore, it delivers superior results compared to state-of-the-art approaches.
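
A toy sketch of the merging-block idea, assuming fixed group sizes that stand in for the frame-to-phone and phone-to-word durations; the unit and word encoders of SpeechFormer++ are not reproduced here.

```python
import torch

def merge(features, group):
    """Merging-block sketch: pool `group` consecutive units into one coarser unit.

    SpeechFormer++ merges frames -> phones -> words following the durations of
    speech units; the group sizes below are illustrative.
    """
    T, D = features.shape
    T = (T // group) * group                       # drop the ragged tail
    return features[:T].reshape(-1, group, D).mean(dim=1)

frames = torch.randn(400, 128)                     # ~4 s of 10 ms frames
phones = merge(frames, group=5)                    # ~50 ms units
words = merge(phones, group=4)                     # ~200 ms units
print(frames.shape, phones.shape, words.shape)
```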

* 14 pages, 7 figures, 14 tables, TASLP 2023 paper 

Superpoint Transformer for 3D Scene Instance Segmentation

Nov 28, 2022
Jiahao Sun, Chunmei Qing, Junpeng Tan, Xiangmin Xu

Most existing methods realize 3D instance segmentation by extending models designed for 3D object detection or 3D semantic segmentation. However, these indirect approaches suffer from two drawbacks: 1) imprecise bounding boxes or unsatisfactory semantic predictions limit the performance of the overall 3D instance segmentation framework; 2) existing methods require a time-consuming intermediate aggregation step. To address these issues, this paper proposes a novel end-to-end 3D instance segmentation method based on a Superpoint Transformer, named SPFormer. It groups potential features from point clouds into superpoints and directly predicts instances through query vectors without relying on the results of object detection or semantic segmentation. The key step in this framework is a novel query decoder with transformers that captures instance information through a superpoint cross-attention mechanism and generates the superpoint masks of the instances. Through bipartite matching based on superpoint masks, SPFormer can train the network without the intermediate aggregation step, which accelerates training. Extensive experiments on the ScanNetv2 and S3DIS benchmarks verify that our method is concise yet efficient. Notably, SPFormer exceeds the compared state-of-the-art methods by 4.3% mAP on the ScanNetv2 hidden test set while maintaining fast inference speed (247 ms per frame). Code is available at https://github.com/sunjiahao1999/SPFormer.
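
A rough sketch of the query-decoder idea, assuming a single cross-attention layer and sigmoid mask prediction; the actual SPFormer decoder, its feed-forward layers, and the bipartite-matching loss are not reproduced.

```python
import torch
import torch.nn as nn

class SuperpointQueryDecoderSketch(nn.Module):
    """Learnable instance queries cross-attend to superpoint features, then
    predict a per-superpoint mask for each query by dot product."""
    def __init__(self, dim=256, num_queries=100):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim))
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)

    def forward(self, superpoint_feats):                # (S, dim)
        q = self.queries.unsqueeze(0)                   # (1, Q, dim)
        kv = superpoint_feats.unsqueeze(0)              # (1, S, dim)
        q, _ = self.cross_attn(q, kv, kv)               # queries gather superpoint info
        masks = torch.sigmoid(q @ kv.transpose(1, 2))   # (1, Q, S) superpoint masks
        return masks.squeeze(0)

dec = SuperpointQueryDecoderSketch()
print(dec(torch.randn(500, 256)).shape)                 # torch.Size([100, 500])
```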

Context Sensing Attention Network for Video-based Person Re-identification

Jul 06, 2022
Kan Wang, Changxing Ding, Jianxin Pang, Xiangmin Xu

Video-based person re-identification (ReID) is challenging due to the presence of various interferences in video frames. Recent approaches handle this problem using temporal aggregation strategies. In this work, we propose a novel Context Sensing Attention Network (CSA-Net), which improves both the frame feature extraction and temporal aggregation steps. First, we introduce the Context Sensing Channel Attention (CSCA) module, which emphasizes responses from informative channels for each frame. These informative channels are identified with reference not only to each individual frame, but also to the content of the entire sequence. Therefore, CSCA explores both the individuality of each frame and the global context of the sequence. Second, we propose the Contrastive Feature Aggregation (CFA) module, which predicts frame weights for temporal aggregation. Here, the weight for each frame is determined in a contrastive manner: i.e., not only by the quality of each individual frame, but also by the average quality of the other frames in a sequence. Therefore, it effectively promotes the contribution of relatively good frames. Extensive experimental results on four datasets show that CSA-Net consistently achieves state-of-the-art performance.
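
A minimal sketch of the contrastive weighting idea behind CFA, assuming per-frame quality scores are already available; the quality predictor and the CSCA channel-attention module are not reproduced.

```python
import torch

def contrastive_frame_weights(qualities):
    """A frame's weight depends on its quality relative to the mean quality of
    the *other* frames in the sequence, so relatively good frames contribute more."""
    n = qualities.numel()
    others_mean = (qualities.sum() - qualities) / (n - 1)
    return torch.softmax(qualities - others_mean, dim=0)

quality = torch.tensor([0.9, 0.2, 0.8, 0.7])            # hypothetical per-frame quality
w = contrastive_frame_weights(quality)
frame_features = torch.randn(4, 256)
sequence_feature = (w.unsqueeze(1) * frame_features).sum(dim=0)   # temporal aggregation
print(w, sequence_feature.shape)
```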

CPED: A Large-Scale Chinese Personalized and Emotional Dialogue Dataset for Conversational AI

May 29, 2022
Yirong Chen, Weiquan Fan, Xiaofen Xing, Jianxin Pang, Minlie Huang, Wenjing Han, Qianfeng Tie, Xiangmin Xu

Human language expression is based on the subjective construal of the situation rather than the objective truth conditions, which means that speakers' personalities and emotions, shaped by cognitive processing, have an important influence on conversation. However, most existing datasets for conversational AI ignore human personalities and emotions, or only consider part of them. Although large-scale pre-trained language models have been widely used, it remains difficult for dialogue systems to understand speakers' personalities and emotions. In order to consider both personalities and emotions in the process of conversation generation, we propose CPED, a large-scale Chinese personalized and emotional dialogue dataset, which consists of multi-source knowledge related to empathy and personal characteristics. This knowledge covers gender, Big Five personality traits, 13 emotions, 19 dialogue acts, and 10 scenes. CPED contains more than 12K dialogues of 392 speakers from 40 TV shows. We release the textual dataset with audio and video features in accordance with copyright claims, privacy concerns, and the terms of service of video platforms. We provide a detailed description of the CPED construction process and introduce three tasks for conversational AI: personality recognition, emotion recognition in conversation, and personalized and emotional conversation generation. Finally, we provide baseline systems for these tasks and examine the influence of speakers' personalities and emotions on conversation. Our motivation is to propose a dataset that can be widely adopted by the NLP community as a new open benchmark for conversational AI research. The full dataset is available at https://github.com/scutcyr/CPED.
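
For orientation, here is a hypothetical record mirroring the annotation dimensions listed in the abstract. The field names and file layout are assumptions; consult the repository (https://github.com/scutcyr/CPED) for the actual format.

```python
from dataclasses import dataclass

@dataclass
class CPEDUtterance:
    """Illustrative record for one annotated utterance (field names are hypothetical)."""
    dialogue_id: str
    speaker: str
    text: str
    gender: str                 # speaker gender
    big_five: dict              # Big Five personality traits
    emotion: str                # one of 13 emotion labels
    dialogue_act: str           # one of 19 dialogue acts
    scene: str                  # one of 10 scenes

u = CPEDUtterance("d0001", "A", "今天心情不错", "female",
                  {"openness": 0.7, "conscientiousness": 0.6, "extraversion": 0.8,
                   "agreeableness": 0.5, "neuroticism": 0.3},
                  emotion="happy", dialogue_act="statement", scene="home")
print(u.emotion, u.scene)
```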

Compact Model Training by Low-Rank Projection with Energy Transfer

Apr 12, 2022
Kailing Guo, Zhenquan Lin, Xiaofen Xing, Fang Liu, Xiangmin Xu

Low-rankness plays an important role in traditional machine learning but is not so popular in deep learning. Most previous low-rank network compression methods compress networks by approximating pre-trained models and re-training. However, the optimal solution in Euclidean space may be quite different from the one on the low-rank manifold, so a well pre-trained model is not a good initialization for a model with a low-rank constraint, and the performance of the low-rank compressed network degrades significantly. Compared to other network compression methods such as pruning, low-rank methods have attracted less attention in recent years. In this paper, we devise a new training method, low-rank projection with energy transfer (LRPET), that trains low-rank compressed networks from scratch and achieves competitive performance. First, we propose to alternately perform stochastic gradient descent training and projection onto the low-rank manifold, which asymptotically approaches the optimal solution on the low-rank manifold. Compared to re-training a compact model, this makes full use of the model capacity, since the solution space is relaxed back to Euclidean space after projection. Second, the matrix energy (the sum of squared singular values) lost during projection is compensated by energy transfer: we uniformly transfer the energy of the pruned singular values to the remaining ones. We theoretically show that energy transfer eases the gradient vanishing caused by projection. Comprehensive experiments on CIFAR-10 and ImageNet show that our method is superior to other low-rank compression methods and also outperforms recent state-of-the-art pruning methods.
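
One reading of the projection-plus-energy-transfer step can be sketched directly from the description above: truncate the SVD of a weight matrix and spread the pruned energy uniformly over the remaining singular values. The rank choice below is illustrative, and the alternation with SGD training is omitted.

```python
import torch

def low_rank_project_with_energy_transfer(W, rank):
    """Truncate to `rank` singular values, then add the pruned energy uniformly
    to the kept singular values so the matrix energy (sum of squared singular
    values) is preserved."""
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    pruned_energy = S[rank:].pow(2).sum()
    kept = (S[:rank].pow(2) + pruned_energy / rank).sqrt()
    return U[:, :rank] @ torch.diag(kept) @ Vh[:rank]

W = torch.randn(64, 128)
W_lr = low_rank_project_with_energy_transfer(W, rank=16)
print(W.pow(2).sum().item(), W_lr.pow(2).sum().item())   # matrix energy is preserved
```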

SpeechFormer: A Hierarchical Efficient Framework Incorporating the Characteristics of Speech

Mar 10, 2022
Weidong Chen, Xiaofen Xing, Xiangmin Xu, Jianxin Pang, Lan Du

The Transformer has obtained promising results in the cognitive speech signal processing field, which is of interest in applications ranging from emotion to neurocognitive disorder analysis. However, most works treat the speech signal as a whole, neglecting the pronunciation structure that is unique to speech and reflects the cognitive process. Meanwhile, the Transformer carries a heavy computational burden due to its full attention operation. In this paper, a hierarchical efficient framework, called SpeechFormer, which considers the structural characteristics of speech, is proposed; it can serve as a general-purpose backbone for cognitive speech signal processing. The proposed SpeechFormer consists of frame, phoneme, word, and utterance stages in succession, each performing neighboring attention according to the structural pattern of speech with high computational efficiency. SpeechFormer is evaluated on speech emotion recognition (IEMOCAP & MELD) and neurocognitive disorder detection (Pitt & DAIC-WOZ) tasks, and the results show that SpeechFormer outperforms the standard Transformer-based framework while greatly reducing the computational cost. Furthermore, SpeechFormer achieves results comparable to the state-of-the-art approaches.
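
A minimal sketch of stage-wise neighboring attention, assuming a single head and hand-picked neighborhood spans; in the paper the stages follow the statistical durations of frames, phonemes, and words.

```python
import torch

def neighboring_attention(x, span):
    """Each position attends only to positions within `span` of itself, which
    is what keeps the cost low compared with full attention."""
    T, D = x.shape
    scores = (x @ x.T) / D ** 0.5
    idx = torch.arange(T)
    outside = (idx[None, :] - idx[:, None]).abs() > span   # outside the neighborhood
    scores = scores.masked_fill(outside, float('-inf'))
    return torch.softmax(scores, dim=-1) @ x

x = torch.randn(200, 64)
frame_out = neighboring_attention(x, span=2)    # early stage: small neighborhood
word_out = neighboring_attention(x, span=20)    # later stage: wider neighborhood
print(frame_out.shape, word_out.shape)
```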

* 5 pages, 4 figures. This paper was submitted to Interspeech 2022 

Weight Evolution: Improving Deep Neural Networks Training through Evolving Inferior Weight Values

Oct 09, 2021
Zhenquan Lin, Kailing Guo, Xiaofen Xing, Xiangmin Xu

To obtain good performance, convolutional neural networks are usually over-parameterized. This phenomenon has stimulated two interesting topics: pruning the unimportant weights for compression and reactivating the unimportant weights to make full use of network capability. However, current weight reactivation methods usually reactivate entire filters, which may not be precise enough. Looking back in history, the prosperity of filter pruning is mainly due to its friendliness to hardware implementation, but pruning at a finer structural level, i.e., weight elements, usually leads to better network performance. We study the problem of weight element reactivation in this paper. Motivated by evolution, we select the unimportant filters and update their unimportant elements by combining them with the important elements of important filters, just as gene crossover produces better offspring; the proposed method is called weight evolution (WE). WE is mainly composed of four strategies. We propose a global selection strategy and a local selection strategy and combine them to locate the unimportant filters. A forward matching strategy is proposed to find the matched important filters, and a crossover strategy is proposed to utilize the important elements of the important filters to update the unimportant filters. WE can be plugged into existing network architectures. Comprehensive experiments show that WE outperforms other reactivation methods and plug-in training methods with typical convolutional neural networks, especially lightweight networks. Our code is available at https://github.com/BZQLin/Weight-evolution.
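
A toy sketch of the crossover idea, assuming L1-norm filter importance and a simple rank-based matching in place of the paper's global/local selection and forward matching strategies.

```python
import torch

def weight_evolution_step(W, frac_unimportant=0.25, frac_elements=0.5):
    """The least-important filters receive the large-magnitude elements of
    matched important filters, akin to gene crossover producing offspring."""
    out = W.clone()
    n = W.shape[0]
    norms = W.abs().flatten(1).sum(dim=1)              # per-filter importance
    order = norms.argsort()
    k = max(1, int(frac_unimportant * n))
    weak, strong = order[:k], order[-k:]               # matched filter pairs
    for w_idx, s_idx in zip(weak, strong):
        donor = W[s_idx].flatten()
        thresh = donor.abs().quantile(1 - frac_elements)
        take = donor.abs() >= thresh                   # donor's important elements
        child = out[w_idx].flatten()
        child[take] = donor[take]                      # crossover
        out[w_idx] = child.view_as(W[w_idx])
    return out

W = torch.randn(16, 3, 3, 3)                           # conv filters
print(weight_evolution_step(W).shape)
```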

* This paper is accepted by ACM Multimedia 2021 