Mingxing Xu

Enhancing Quantised End-to-End ASR Models via Personalisation

Sep 17, 2023
Qiuming Zhao, Guangzhi Sun, Chao Zhang, Mingxing Xu, Thomas Fang Zheng

Recent end-to-end automatic speech recognition (ASR) models have become increasingly large, making them particularly challenging to deploy on resource-constrained devices. Model quantisation is an effective compression solution, although it can cause an increase in word error rate (WER). In this paper, a novel strategy of personalisation for a quantised model (PQM) is proposed, which combines speaker adaptive training (SAT) with model quantisation to improve the performance of heavily compressed models. Specifically, PQM uses a 4-bit NormalFloat Quantisation (NF4) approach for model quantisation and low-rank adaptation (LoRA) for SAT. Experiments have been performed on the LibriSpeech and TED-LIUM 3 corpora. Remarkably, with a 7x reduction in model size and only 1% additional speaker-specific parameters, relative WER reductions of 15.1% and 23.3% were achieved on quantised Whisper and Conformer-based attention-based encoder-decoder ASR models respectively, compared to the original full-precision models.
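To make the two ingredients concrete, here is a minimal PyTorch sketch of the idea: base weights are rounded to 16 levels derived from standard-normal quantiles (a rough stand-in for the NF4 codebook, whose bins differ in detail) and kept frozen, while a LoRA adapter supplies the small set of trainable speaker-specific parameters. All names and sizes below are illustrative, not the paper's code.

```python
import torch
import torch.nn as nn

def nf4_quantise(w: torch.Tensor, levels: torch.Tensor) -> torch.Tensor:
    """Round each absmax-normalised weight to its nearest 4-bit level."""
    scale = w.abs().max()
    idx = (w / scale).unsqueeze(-1).sub(levels).abs().argmin(-1)
    return levels[idx] * scale

class LoRALinear(nn.Module):
    """A frozen, quantised linear layer plus a trainable low-rank adapter."""
    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        # 16 levels from standard-normal quantiles, normalised to [-1, 1]
        # (an approximation of the NF4 codebook, for illustration only).
        levels = torch.erfinv(torch.linspace(-0.99, 0.99, 16)) * 2 ** 0.5
        levels = levels / levels.abs().max()
        base.weight.data = nf4_quantise(base.weight.data, levels)
        self.base = base.requires_grad_(False)   # frozen quantised weights
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + x @ self.A.T @ self.B.T  # W x + B A x
```

In a real deployment only the 4-bit indices and per-tensor scales would be stored; the sketch keeps dequantised values for brevity.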

* 5 pages, submitted to ICASSP 2024 

Hierarchical Spherical CNNs with Lifting-based Adaptive Wavelets for Pooling and Unpooling

May 31, 2022
Mingxing Xu, Chenglin Li, Wenrui Dai, Siheng Chen, Junni Zou, Pascal Frossard, Hongkai Xiong

Pooling and unpooling are two essential operations in constructing hierarchical spherical convolutional neural networks (HS-CNNs) for comprehensive feature learning in the spherical domain. Most existing models employ downsampling-based pooling, which inevitably incurs information loss and cannot adapt to different spherical signals and tasks. Moreover, the information preserved after pooling cannot be well restored by the subsequent unpooling to characterize the features desirable for a given task. In this paper, we propose a novel framework of HS-CNNs with a lifting structure to learn adaptive spherical wavelets for pooling and unpooling, dubbed LiftHS-CNN, which ensures more efficient hierarchical feature learning for both image- and pixel-level tasks. Specifically, adaptive spherical wavelets are learned with a lifting structure that consists of trainable lifting operators (i.e., update and predict operators). With this learnable lifting structure, we can adaptively partition a signal into two sub-bands containing low- and high-frequency components, respectively, and thus generate a better down-scaled representation for pooling by preserving more information in the low-frequency sub-band. The update and predict operators are parameterized with graph-based attention to jointly consider the signal's characteristics and the underlying geometry. We further show that the learned wavelets guarantee particular properties that ensure spatial-frequency localization, better exploiting the signal's correlation in both the spatial and frequency domains. We then propose an unpooling operation that inverts the lifting-based pooling, where an inverse wavelet transform is performed with the learned lifting operators to restore an up-scaled representation. Extensive empirical evaluations on various spherical-domain tasks validate the superiority of the proposed LiftHS-CNN.
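The lifting scheme at the core of the paper can be illustrated on a plain 1-D signal (the paper applies it on spherical samplings with graph-based attention operators; the small convolutions below are illustrative stand-ins): a split into even/odd samples, a predict step that leaves a high-frequency detail sub-band, and an update step that yields the low-frequency sub-band used as the pooled representation.

```python
import torch
import torch.nn as nn

class LiftingPool1D(nn.Module):
    """Lifting-based pooling: split -> predict -> update, and its inverse."""
    def __init__(self, channels: int):
        super().__init__()
        self.predict = nn.Conv1d(channels, channels, 3, padding=1)
        self.update = nn.Conv1d(channels, channels, 3, padding=1)

    def forward(self, x):                     # x: (batch, channels, length)
        even, odd = x[..., 0::2], x[..., 1::2]
        detail = odd - self.predict(even)     # high-frequency sub-band
        approx = even + self.update(detail)   # low-frequency sub-band (pooled)
        return approx, detail

    def inverse(self, approx, detail):        # the matching unpooling
        even = approx - self.update(detail)
        odd = detail + self.predict(even)
        return torch.stack((even, odd), dim=-1).flatten(-2)  # re-interleave
```

Perfect reconstruction follows from reusing the same predict and update operators in reverse order, which is what makes the unpooling exactly invertible to the pooling.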

LiftPool: Lifting-based Graph Pooling for Hierarchical Graph Representation Learning

Apr 27, 2022
Mingxing Xu, Wenrui Dai, Chenglin Li, Junni Zou, Hongkai Xiong

Graph pooling has been increasingly considered for graph neural networks (GNNs) to facilitate hierarchical graph representation learning. Existing graph pooling methods commonly consist of two stages: selecting the top-ranked nodes and removing the remaining nodes to construct a coarsened graph representation. However, the local structural information of the removed nodes is inevitably dropped in these methods, due to the inherent coupling of nodes (location) and their features (signals). In this paper, we propose an enhanced three-stage method via lifting, named LiftPool, to improve hierarchical graph representation by maximally preserving local structural information in graph pooling. LiftPool introduces an additional stage of graph lifting before graph coarsening to preserve the local information of the removed nodes and decouple the processes of node removal and feature reduction. Specifically, for each node to be removed, its local information is obtained by subtracting from its feature the global information aggregated from its neighboring preserved nodes. This local information is then aligned and propagated to the preserved nodes to alleviate information loss in graph coarsening. Furthermore, we demonstrate that the proposed LiftPool is localized and permutation-invariant. The proposed graph lifting structure is general and can be integrated with existing downsampling-based graph pooling methods. Evaluations on benchmark graph datasets show that LiftPool substantially outperforms state-of-the-art graph pooling methods on the task of graph classification.
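A minimal numpy sketch of the three stages on a dense adjacency matrix, assuming a simple norm-based node score and mean aggregation in place of the paper's learned alignment and propagation operators:

```python
import numpy as np

def liftpool(adj: np.ndarray, feats: np.ndarray, k: int):
    """Three-stage pooling: select top-k nodes, lift, propagate, coarsen."""
    scores = np.linalg.norm(feats, axis=1)            # stage 1: node scores
    keep = np.argsort(-scores)[:k]
    drop = np.setdiff1d(np.arange(len(feats)), keep)
    lifted = feats.copy()
    for i in drop:                                    # stage 2: graph lifting
        nbrs = np.intersect1d(np.nonzero(adj[i])[0], keep)
        if len(nbrs):                                 # local = feature - global
            lifted[i] = feats[i] - feats[nbrs].mean(axis=0)
    for j in keep:                                    # stage 3: propagation
        nbrs = np.intersect1d(np.nonzero(adj[j])[0], drop)
        if len(nbrs):                                 # absorb removed nodes' info
            lifted[j] = feats[j] + lifted[nbrs].mean(axis=0)
    return lifted[keep], adj[np.ix_(keep, keep)]      # coarsened graph
```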

Learning from Multiple Noisy Augmented Data Sets for Better Cross-Lingual Spoken Language Understanding

Sep 03, 2021
Yingmei Guo, Linjun Shou, Jian Pei, Ming Gong, Mingxing Xu, Zhiyong Wu, Daxin Jiang

Lack of training data presents a grand challenge to scaling out spoken language understanding (SLU) to low-resource languages. Although various data augmentation approaches have been proposed to synthesize training data in low-resource target languages, the augmented data sets are often noisy, which impedes the performance of SLU models. In this paper, we focus on mitigating noise in augmented data. We develop a denoising training approach in which multiple models are trained with data produced by various augmentation methods and provide supervision signals to each other. The experimental results show that our method outperforms the existing state of the art by 3.05 and 4.24 percentage points on two benchmark datasets, respectively. The code will be open-sourced on GitHub.
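A minimal PyTorch sketch of the mutual-supervision idea, assuming one model and one optimiser per augmentation method, a cross-entropy loss on each model's own (noisy) data, and a KL term pulling each model toward the averaged soft labels of its peers on shared inputs; the loss weight `alpha` is a placeholder, not the paper's recipe:

```python
import torch
import torch.nn.functional as F

def denoising_step(models, optimizers, batches, shared_x, alpha=0.5):
    """One training step: own-data CE loss + KL agreement with peer models."""
    with torch.no_grad():                    # peer soft labels, no gradients
        probs = [F.softmax(m(shared_x), dim=-1) for m in models]
    for i, (model, opt, (x, y)) in enumerate(zip(models, optimizers, batches)):
        peer = torch.stack([p for j, p in enumerate(probs) if j != i]).mean(0)
        loss = F.cross_entropy(model(x), y)  # supervised loss on own data
        loss = loss + alpha * F.kl_div(      # consistency with peers
            F.log_softmax(model(shared_x), dim=-1), peer, reduction="batchmean")
        opt.zero_grad()
        loss.backward()
        opt.step()
```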

* Long paper at EMNLP 2021 

Spectral Graph Convolutional Networks With Lifting-based Adaptive Graph Wavelets

Aug 04, 2021
Mingxing Xu, Wenrui Dai, Chenglin Li, Junni Zou, Hongkai Xiong, Pascal Frossard

Spectral graph convolutional networks (SGCNs) have been attracting increasing attention in graph representation learning, partly due to their interpretability through the prism of the established graph signal processing framework. However, existing SGCNs are limited to implementing graph convolutions with rigid transforms that cannot adapt to the signals residing on graphs or the tasks at hand. In this paper, we propose a novel class of spectral graph convolutional networks that implement graph convolutions with adaptive graph wavelets. Specifically, the adaptive graph wavelets are learned with neural network-parameterized lifting structures, where structure-aware attention-based lifting operations are developed to jointly consider graph structures and node features. We propose to lift based on diffusion wavelets to alleviate the structural information loss induced by partitioning non-bipartite graphs. By design, the locality and sparsity of the resulting wavelet transform, as well as the scalability of the lifting structure to large and varying-size graphs, are guaranteed. We further derive a soft-thresholding filtering operation by learning sparse graph representations in terms of the learned wavelets, which improves scalability and interpretability and yields a localized, efficient and scalable spectral graph convolution. To ensure that the learned graph representations are invariant to node permutations, a layer is employed at the input of the networks to reorder the nodes according to their local topology information. We evaluate the proposed networks on both node-level and graph-level representation learning tasks on benchmark citation and bioinformatics graph datasets. Extensive experiments demonstrate the superiority of the proposed networks over existing SGCNs in terms of accuracy, efficiency and scalability.
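As a rough illustration of the soft-thresholding filtering step, the sketch below applies a learnable per-channel soft threshold to coefficients in a fixed wavelet basis and maps the sparsified result back to the node domain. This is an assumption-laden simplification: the paper learns the wavelets themselves through attention-based lifting, whereas here the basis `wavelet` is supplied as a fixed matrix.

```python
import torch
import torch.nn as nn

class WaveletSoftThresholdConv(nn.Module):
    """Filter node features by soft-thresholding their wavelet coefficients."""
    def __init__(self, wavelet: torch.Tensor, channels: int):
        super().__init__()
        self.register_buffer("W", wavelet)                        # analysis
        self.register_buffer("W_inv", torch.linalg.pinv(wavelet)) # synthesis
        self.tau = nn.Parameter(torch.full((channels,), 0.1))     # thresholds
        self.lin = nn.Linear(channels, channels)

    def forward(self, x):                       # x: (n_nodes, channels)
        c = self.W @ self.lin(x)                # coefficients in wavelet domain
        c = torch.sign(c) * torch.relu(c.abs() - self.tau)  # sparsify
        return self.W_inv @ c                   # back to the node domain
```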

Spatial-Temporal Transformer Networks for Traffic Flow Forecasting

Jan 09, 2020
Mingxing Xu, Wenrui Dai, Chunmiao Liu, Xing Gao, Weiyao Lin, Guo-Jun Qi, Hongkai Xiong

Traffic forecasting has emerged as a core component of intelligent transportation systems. However, timely and accurate traffic forecasting, especially long-term forecasting, remains an open challenge due to the highly nonlinear and dynamic spatial-temporal dependencies of traffic flows. In this paper, we propose a novel paradigm of Spatial-Temporal Transformer Networks (STTNs) that leverages dynamic directed spatial dependencies and long-range temporal dependencies to improve the accuracy of long-term traffic forecasting. Specifically, we present a new variant of graph neural networks, named the spatial transformer, that dynamically models directed spatial dependencies with a self-attention mechanism to capture real-time traffic conditions as well as the directionality of traffic flows. Different spatial dependency patterns can be jointly modeled with a multi-head attention mechanism to consider diverse relationships related to different factors (e.g., similarity, connectivity and covariance). In addition, a temporal transformer is utilized to model long-range bidirectional temporal dependencies across multiple time steps. The two are then composed as a block to jointly model the spatial-temporal dependencies for accurate traffic prediction. Compared to existing works, the proposed model enables fast and scalable training over long-range spatial-temporal dependencies. Experimental results demonstrate that the proposed model achieves competitive results compared with the state of the art, especially in forecasting long-term traffic flows on the real-world PeMS-Bay and PeMSD7(M) datasets.
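The layered structure described above can be sketched as follows: self-attention across sensors within each time step (the spatial transformer) followed by self-attention across time steps for each sensor (the temporal transformer). Dimensions, head counts and the plain `nn.MultiheadAttention` modules are illustrative stand-ins for the paper's components.

```python
import torch
import torch.nn as nn

class STTNBlock(nn.Module):
    """Spatial attention over nodes, then temporal attention over steps."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.spatial = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                   # x: (batch, time, nodes, dim)
        b, t, n, d = x.shape
        s = x.reshape(b * t, n, d)          # attend over nodes per time step
        s = s + self.spatial(s, s, s)[0]
        s = s.reshape(b, t, n, d).transpose(1, 2).reshape(b * n, t, d)
        s = s + self.temporal(s, s, s)[0]   # attend over time per node
        return s.reshape(b, n, t, d).transpose(1, 2)
```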

Study on Feature Subspace of Archetypal Emotions for Speech Emotion Recognition

Nov 17, 2016
Xi Ma, Zhiyong Wu, Jia Jia, Mingxing Xu, Helen Meng, Lianhong Cai

Feature subspace selection is an important part of speech emotion recognition. Most studies are devoted to finding a single feature subspace for representing all emotions. However, some studies have indicated that the features associated with different emotions are not exactly the same. Hence, traditional methods may fail to distinguish some emotions with just one global feature subspace. In this work, we propose a new divide-and-conquer approach to solve the problem. First, feature subspaces are constructed for all combinations of every two different emotions (emotion-pairs). Bi-classifiers are then trained on these feature subspaces respectively. The final emotion recognition result is derived by a voting-and-competition method. Experimental results demonstrate that the proposed method achieves better results than the traditional multi-classification method.
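A minimal scikit-learn sketch of the pipeline: one binary classifier per emotion-pair, each trained on its own feature subspace, with the final label decided by majority voting. `SelectKBest` and `SVC` are illustrative stand-ins for the paper's subspace construction and bi-classifiers.

```python
from collections import Counter
from itertools import combinations
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC

def train_pairwise(X, y, k=50):
    """One feature subspace and one bi-classifier per emotion-pair."""
    models = {}
    for a, b in combinations(np.unique(y), 2):
        mask = np.isin(y, [a, b])
        sel = SelectKBest(f_classif, k=k).fit(X[mask], y[mask])
        models[(a, b)] = (sel, SVC().fit(sel.transform(X[mask]), y[mask]))
    return models

def predict_voting(models, x):
    """Each bi-classifier votes; the most-voted emotion wins."""
    votes = [clf.predict(sel.transform(x[None]))[0]
             for sel, clf in models.values()]
    return Counter(votes).most_common(1)[0][0]
```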

* 5 pages, 4 figures, ICASSP-2017 