
"speech": models, code, and papers

Speech Emotion Recognition using Semantic Information

Mar 04, 2021
Panagiotis Tzirakis, Anh Nguyen, Stefanos Zafeiriou, Björn W. Schuller

Speech emotion recognition is a crucial problem manifesting in a multitude of applications such as human-computer interaction and education. Although several advancements have been made in recent years, especially with the advent of Deep Neural Networks (DNN), most of the studies in the literature fail to consider the semantic information in the speech signal. In this paper, we propose a novel framework that can capture both the semantic and the paralinguistic information in the signal. In particular, our framework comprises a semantic feature extractor, which captures the semantic information, and a paralinguistic feature extractor, which captures the paralinguistic information. Both semantic and paralinguistic features are then combined into a unified representation using a novel attention mechanism. The unified feature vector is passed through an LSTM to capture the temporal dynamics in the signal before the final prediction. To validate the effectiveness of our framework, we use the popular SEWA dataset of the AVEC challenge series and compare with the three winning papers. Our model provides state-of-the-art results in the valence and liking dimensions.
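
As a rough illustration of the fused architecture the abstract describes, the PyTorch sketch below combines per-frame semantic and paralinguistic features with a learned attention weight per modality and feeds the result to an LSTM; the module names, feature dimensions, and exact attention formulation are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class FusionEmotionModel(nn.Module):
    """Hypothetical sketch: fuse semantic and paralinguistic features with
    attention, then model temporal dynamics with an LSTM (dims are assumed)."""
    def __init__(self, sem_dim=768, para_dim=256, hidden=256, n_out=2):
        super().__init__()
        self.sem_proj = nn.Linear(sem_dim, hidden)
        self.para_proj = nn.Linear(para_dim, hidden)
        self.attn = nn.Linear(hidden, 1)          # scores each modality
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_out)      # e.g. valence and liking

    def forward(self, sem_feats, para_feats):
        # sem_feats: (B, T, sem_dim), para_feats: (B, T, para_dim)
        s = self.sem_proj(sem_feats)
        p = self.para_proj(para_feats)
        stacked = torch.stack([s, p], dim=2)      # (B, T, 2, hidden)
        w = torch.softmax(self.attn(stacked), dim=2)
        fused = (w * stacked).sum(dim=2)          # weighted sum over the two modalities
        out, _ = self.lstm(fused)
        return self.head(out)                     # frame-level predictions

model = FusionEmotionModel()
pred = model(torch.randn(4, 100, 768), torch.randn(4, 100, 256))
print(pred.shape)  # torch.Size([4, 100, 2])
```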

* ICASSP 2021 


Neural Architecture Search for Speech Recognition

Jul 27, 2020
Shoukang Hu, Xurong Xie, Shansong Liu, Mengzhe Geng, Xunying Liu, Helen Meng

Deep neural network (DNN) based automatic speech recognition (ASR) systems are often designed using expert knowledge and empirical evaluation. In this paper, a range of neural architecture search (NAS) techniques are used to automatically learn two hyper-parameters that heavily affect the performance and model complexity of state-of-the-art factored time delay neural network (TDNN-F) acoustic models: i) the left and right splicing context offsets; and ii) the dimensionality of the bottleneck linear projection at each hidden layer. These include the standard DARTS method, which fully integrates the estimation of architecture weights and TDNN parameters in lattice-free MMI (LF-MMI) training; Gumbel-Softmax DARTS, which reduces the confusion between candidate architectures; Pipelined DARTS, which circumvents the overfitting of architecture weights using held-out data; and Penalized DARTS, which further incorporates resource constraints to adjust the trade-off between performance and system complexity. Parameter sharing among candidate architectures was also used to facilitate efficient search over up to $7^{28}$ different TDNN systems. Experiments conducted on a 300-hour Switchboard conversational telephone speech recognition task suggest that the NAS auto-configured TDNN-F systems consistently outperform the baseline LF-MMI trained TDNN-F systems using manual expert configurations. Absolute word error rate reductions of up to 1.0% and a relative model size reduction of 28% were obtained.
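
A minimal sketch of the Gumbel-Softmax DARTS idea mentioned above, applied to choosing the bottleneck width of a single layer: candidate operations are mixed with (nearly) one-hot sampled weights instead of a dense softmax. The candidate widths and layer structure are illustrative assumptions, not the paper's LF-MMI TDNN-F setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GumbelSoftmaxChoice(nn.Module):
    """Hypothetical sketch of Gumbel-Softmax architecture selection in the
    spirit of DARTS: candidate bottleneck widths of one layer are mixed with
    a sampled, nearly one-hot weight vector."""
    def __init__(self, in_dim, candidate_widths=(96, 128, 160, 192)):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Sequential(nn.Linear(in_dim, w), nn.ReLU(), nn.Linear(w, in_dim))
            for w in candidate_widths
        ])
        # one architecture logit per candidate width
        self.arch_logits = nn.Parameter(torch.zeros(len(candidate_widths)))

    def forward(self, x, tau=1.0):
        # hard=True yields a one-hot sample with a straight-through gradient
        weights = F.gumbel_softmax(self.arch_logits, tau=tau, hard=True)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

layer = GumbelSoftmaxChoice(in_dim=512)
y = layer(torch.randn(8, 512))
print(y.shape)  # torch.Size([8, 512])
```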

* One of the authors does not agree to posting the paper on arXiv, since the paper has not been published, so we would like to formally withdraw it. We hope you can understand our concerns 


Incorporating Wireless Communication Parameters into the E-Model Algorithm

Mar 05, 2021
Demóstenes Z. Rodríguez, Dick Carrillo Melgarejo, Miguel A. Ramírez, Pedro H. J. Nardelli, Sebastian Möller

Telecommunication service providers have to guarantee acceptable speech quality during a phone call to avoid a negative impact on the users' quality of experience. Currently, there are different speech quality assessment methods. ITU-T Recommendation G.107 describes the E-model algorithm, which is a computational model developed for network planning purposes focused on narrowband (NB) networks. Later, ITU-T Recommendations G.107.1 and G.107.2 were developed for wideband (WB) and fullband (FB) networks. These algorithms use different impairment factors, each related to a different step of speech communication. However, the NB, WB, and FB E-model algorithms do not consider wireless techniques used in these networks, such as Multiple-Input-Multiple-Output (MIMO) systems, which are used to improve the communication system robustness in the presence of different types of wireless channel degradation. In this context, the main objective of this study is to propose a general methodology to incorporate wireless network parameters into the NB and WB E-model algorithms. To accomplish this goal, MIMO and wireless channel parameters are incorporated into the E-model algorithms, specifically into the $I_{e,eff}$ and $I_{e,eff,WB}$ impairment factors. For performance validation, subjective tests were carried out, and the proposed methodology reached a Pearson correlation coefficient (PCC) and a root mean square error (RMSE) of $0.9732$ and $0.2351$, respectively. It is noteworthy that our proposed methodology does not affect the rest of the E-model input parameters, and it is intended to be useful for wireless network planning in speech communication services.
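
For reference, the effective equipment impairment factor of the NB E-model (ITU-T G.107) can be computed as below. The `packet_loss_from_bler` helper is a hypothetical stand-in that only shows where wireless-channel quantities could enter the calculation; it does not reproduce the paper's MIMO-aware mapping.

```python
def ie_eff_nb(ie, bpl, ppl, burst_r=1.0):
    """Effective equipment impairment factor of the NB E-model (ITU-T G.107):
    Ie,eff = Ie + (95 - Ie) * Ppl / (Ppl / BurstR + Bpl),
    where Ppl is the packet-loss percentage and Bpl the loss robustness."""
    return ie + (95.0 - ie) * ppl / (ppl / burst_r + bpl)

def packet_loss_from_bler(bler, retransmissions=0):
    """Hypothetical helper: residual packet loss (percent) if each of the
    allowed retransmissions fails independently with the same block error
    rate. This is NOT the paper's MIMO-aware mapping, only a placeholder."""
    return 100.0 * bler ** (retransmissions + 1)

ppl = packet_loss_from_bler(bler=0.05, retransmissions=1)   # 0.25 %
print(round(ie_eff_nb(ie=11.0, bpl=19.0, ppl=ppl), 2))      # example codec values
```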

* IEEE/ACM TRANSACTIONS ON AUDIO, SPEECH, AND LANGUAGE PROCESSING, VOL. 29, 2021 
* 18 pages 


Identity-Preserving Realistic Talking Face Generation

May 25, 2020
Sanjana Sinha, Sandika Biswas, Brojeshwar Bhowmick

Speech-driven facial animation is useful for a variety of applications such as telepresence, chatbots, etc. The necessary attributes of a realistic face animation are (1) audio-visual synchronization, (2) identity preservation of the target individual, (3) plausible mouth movements, and (4) the presence of natural eye blinks. The existing methods mostly address audio-visual lip synchronization, and only a few recent works have addressed the synthesis of natural eye blinks for overall video realism. In this paper, we propose a method for identity-preserving realistic facial animation from speech. We first generate person-independent facial landmarks from audio using DeepSpeech features for invariance to different voices, accents, etc. To add realism, we impose eye blinks on the facial landmarks using unsupervised learning and retarget the person-independent landmarks to person-specific landmarks to preserve the identity-related facial structure, which helps in the generation of plausible mouth shapes of the target identity. Finally, we use LSGAN to generate the facial texture from the person-specific facial landmarks, using an attention mechanism that helps to preserve identity-related texture. An extensive comparison of our proposed method with the current state-of-the-art methods demonstrates a significant improvement in terms of lip synchronization accuracy, image reconstruction quality, sharpness, and identity preservation. A user study also reveals improved realism of our animation results over the state-of-the-art methods. To the best of our knowledge, this is the first work in speech-driven 2D facial animation that simultaneously addresses all the above-mentioned attributes of a realistic speech-driven face animation.
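
Since the texture network is trained with the least-squares GAN objective, a minimal sketch of the LSGAN losses (with the common 0/1 target labels) is shown below; it illustrates the objective only, not the paper's attention-based generator or discriminator.

```python
import torch

def lsgan_d_loss(d_real, d_fake):
    """Least-squares GAN discriminator loss: push real outputs toward 1
    and fake outputs toward 0."""
    return 0.5 * ((d_real - 1.0) ** 2).mean() + 0.5 * (d_fake ** 2).mean()

def lsgan_g_loss(d_fake):
    """Least-squares GAN generator loss: push fake outputs toward 1."""
    return 0.5 * ((d_fake - 1.0) ** 2).mean()

# Usage with hypothetical discriminator scores for real / generated frames
d_real, d_fake = torch.rand(8, 1), torch.rand(8, 1)
print(lsgan_d_loss(d_real, d_fake).item(), lsgan_g_loss(d_fake).item())
```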

* Accepted in IJCNN 2020 


Modelling the Lexicon in Unsupervised Part of Speech Induction

Feb 26, 2014
Greg Dubbin, Phil Blunsom

Automatically inducing the syntactic part-of-speech categories for words in text is a fundamental task in Computational Linguistics. While the performance of unsupervised tagging models has been slowly improving, current state-of-the-art systems make the obviously incorrect assumption that all tokens of a given word type must share a single part-of-speech tag. This one-tag-per-type heuristic counters the tendency of Hidden Markov Model based taggers to over-generate tags for a given word type. However, it is clearly incompatible with basic syntactic theory. In this paper, we extend a state-of-the-art Pitman-Yor Hidden Markov Model tagger with an explicit model of the lexicon. In doing so we are able to incorporate a soft bias towards inducing few tags per type. We develop a particle filter for drawing samples from the posterior of our model and present empirical results showing that our model is competitive with, and faster than, the state-of-the-art without making any unrealistic restrictions.
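
The "rich get richer" behaviour that yields a soft few-tags-per-type bias comes from the Pitman-Yor predictive distribution; the sketch below computes it for one word type over a toy tagset, as an illustration rather than the paper's full lexicon model or particle-filter sampler.

```python
def pyp_predictive(counts, discount=0.5, strength=1.0, n_outcomes=45):
    """Pitman-Yor predictive probabilities over tags for one word type.
    Each seen tag gets (c_k - d) / (theta + n); the remaining mass
    (theta + d * K) / (theta + n) is spread over a uniform base distribution.
    Mass concentrating on already-used tags is the soft few-tags-per-type bias."""
    n = sum(counts.values())
    k = len(counts)
    base = (strength + discount * k) / (strength + n) / n_outcomes
    probs = {t: base + (c - discount) / (strength + n) for t, c in counts.items()}
    # unseen tags receive only the base-distribution share
    return probs, base

probs, unseen = pyp_predictive({"NN": 8, "VB": 1})
print(probs, unseen)
```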

* To be presented at the 14th Conference of the European Chapter of the Association for Computational Linguistics 


Efficient End-to-End Speech Recognition Using Performers in Conformers

Nov 11, 2020
Peidong Wang, DeLiang Wang

On-device end-to-end speech recognition poses a high requirement on model efficiency. Most prior works improve efficiency by reducing model sizes. We propose to reduce the complexity of model architectures in addition to model sizes. More specifically, we reduce the floating-point operations in the conformer by replacing the transformer module with a performer. The proposed attention-based efficient end-to-end speech recognition model yields competitive performance on the LibriSpeech corpus with 10 million parameters and linear computation complexity. The proposed model also outperforms previous lightweight end-to-end models by about 20% relative in word error rate.
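
The efficiency gain comes from replacing softmax attention with a kernelized, linear-complexity variant. The sketch below uses the simple elu(x)+1 feature map as a stand-in for the performer's FAVOR+ random-feature approximation, so it shows the linear-complexity structure but not the exact mechanism.

```python
import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    """Kernelized attention with cost linear in sequence length: keys and
    values are summarized once, then each query reads from that summary."""
    q = F.elu(q) + 1.0            # (B, T, D) non-negative feature maps
    k = F.elu(k) + 1.0
    kv = torch.einsum("btd,bte->bde", k, v)          # summed key-value outer products
    z = 1.0 / (torch.einsum("btd,bd->bt", q, k.sum(dim=1)) + eps)
    return torch.einsum("btd,bde,bt->bte", q, kv, z)

q = k = v = torch.randn(2, 1000, 64)
print(linear_attention(q, k, v).shape)  # torch.Size([2, 1000, 64])
```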

* The current submission has not been reviewed by the coauthor 


End-to-End Optimized Speech Coding with Deep Neural Networks

Oct 25, 2017
Srihari Kankanahalli

Modern compression algorithms are often the result of laborious domain-specific research; industry standards such as MP3, JPEG, and AMR-WB took years to develop and were largely hand-designed. We present a deep neural network model which optimizes all the steps of a wideband speech coding pipeline (compression, quantization, entropy coding, and decompression) end-to-end, directly from raw speech data -- no manual feature engineering is necessary, and it trains in hours. In testing, our DNN-based coder performs on par with the AMR-WB standard at a variety of bitrates (~9 kbps up to ~24 kbps). It also runs in real time on a 3.8 GHz Intel CPU.
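
A toy sketch of such an end-to-end pipeline is shown below: a strided 1-D convolutional encoder, scalar quantization made differentiable with a straight-through estimator, and a transposed-convolution decoder. Entropy coding is omitted and all layer sizes are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TinySpeechCoder(nn.Module):
    """Hypothetical end-to-end coder sketch: encode, quantize, decode."""
    def __init__(self, channels=64, levels=32):
        super().__init__()
        self.levels = levels
        self.enc = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=9, stride=4, padding=4), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=9, stride=4, padding=4),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose1d(channels, channels, kernel_size=8, stride=4, padding=2), nn.ReLU(),
            nn.ConvTranspose1d(channels, 1, kernel_size=8, stride=4, padding=2),
        )

    def forward(self, x):
        z = torch.tanh(self.enc(x))                     # bound codes to [-1, 1]
        scaled = (z + 1.0) * 0.5 * (self.levels - 1)
        q = torch.round(scaled) / (self.levels - 1) * 2.0 - 1.0
        z_q = z + (q - z).detach()                      # straight-through gradient
        return self.dec(z_q)

coder = TinySpeechCoder()
wav = torch.randn(2, 1, 16000)                          # 1 second at 16 kHz
print(coder(wav).shape)                                 # torch.Size([2, 1, 16000])
```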

* For submission to ICASSP 2018. Samples available here: http://srik.tk/speech-coding/ 


Exploring Neural Transducers for End-to-End Speech Recognition

Jul 24, 2017
Eric Battenberg, Jitong Chen, Rewon Child, Adam Coates, Yashesh Gaur, Yi Li, Hairong Liu, Sanjeev Satheesh, David Seetapun, Anuroop Sriram, Zhenyao Zhu

In this work, we perform an empirical comparison among the CTC, RNN-Transducer, and attention-based Seq2Seq models for end-to-end speech recognition. We show that, without any language model, Seq2Seq and RNN-Transducer models both outperform the best reported CTC models with a language model on the popular Hub5'00 benchmark. On our internal diverse dataset, these trends continue: RNN-Transducer models rescored with a language model after beam search outperform our best CTC models. These results simplify the speech recognition pipeline so that decoding can now be expressed purely as neural network operations. We also study how the choice of encoder architecture affects the performance of the three models: when all encoder layers are forward-only, and when encoders downsample the input representation aggressively.
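
As a point of reference for one of the three objectives compared here, the CTC loss can be exercised with PyTorch's built-in implementation as follows; the tensor sizes are arbitrary and unrelated to the authors' setup.

```python
import torch
import torch.nn as nn

# Minimal illustration of the CTC objective (sizes are arbitrary).
T, N, C, S = 50, 4, 29, 10          # time steps, batch, labels incl. blank, target length
logits = torch.randn(T, N, C, requires_grad=True)
log_probs = logits.log_softmax(dim=-1)
targets = torch.randint(1, C, (N, S))             # label 0 is reserved for the blank
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), S, dtype=torch.long)

ctc = nn.CTCLoss(blank=0, zero_infinity=True)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()
print(loss.item())
```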



ASR Context-Sensitive Error Correction Based on Microsoft N-Gram Dataset

Mar 23, 2012
Youssef Bassil, Paul Semaan

At the present time, computers are employed to solve complex tasks and problems, ranging from simple calculations to intensive digital image processing and intricate algorithmic optimization problems to computationally demanding weather forecasting problems. ASR, short for Automatic Speech Recognition, is yet another type of computational problem whose purpose is to recognize human spoken speech and convert it into text that can be processed by a computer. Although ASR has many versatile and pervasive real-world applications, it is still relatively erroneous and not perfectly solved, as it is prone to produce spelling errors in the recognized text, especially if the ASR system is operating in a noisy environment, its vocabulary size is limited, and its input speech is of bad or low quality. This paper proposes a post-editing ASR error correction method based on the Microsoft N-Gram dataset for detecting and correcting spelling errors generated by ASR systems. The proposed method comprises an error detection algorithm for detecting word errors; a candidate corrections generation algorithm for generating correction suggestions for the detected word errors; and a context-sensitive error correction algorithm for selecting the best candidate for correction. The virtue of using the Microsoft N-Gram dataset is that it contains real-world data and word sequences extracted from the web, which can mimic a comprehensive dictionary of words with a large and all-inclusive vocabulary. Experiments conducted on numerous speeches, performed by different speakers, showed a remarkable reduction in ASR errors. Future research can improve upon the proposed algorithm so that it can be parallelized to take advantage of multiprocessor and distributed systems.
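
A toy sketch of the three stages (detection, candidate generation, context-sensitive selection) is given below; the difflib-based candidate generator and the tiny in-memory bigram table are stand-ins for the paper's algorithms and the Microsoft N-Gram service.

```python
from difflib import get_close_matches

# Toy stand-ins for a reference vocabulary and a web-scale bigram table.
VOCAB = {"automatic", "speech", "recognition", "speed", "species"}
BIGRAM_COUNTS = {("automatic", "speech"): 120, ("automatic", "speed"): 3,
                 ("speech", "recognition"): 200, ("speed", "recognition"): 1}

def detect_errors(tokens):
    """Stage 1: flag tokens not found in the vocabulary."""
    return [i for i, t in enumerate(tokens) if t.isalpha() and t not in VOCAB]

def candidates(token, n=3):
    """Stage 2: propose close spellings from the vocabulary."""
    return get_close_matches(token, VOCAB, n=n, cutoff=0.6)

def best_candidate(tokens, i):
    """Stage 3: pick the candidate with the highest left-bigram count."""
    prev = tokens[i - 1] if i > 0 else ""
    cands = candidates(tokens[i])
    return max(cands, key=lambda c: BIGRAM_COUNTS.get((prev, c), 0), default=tokens[i])

tokens = ["automatic", "speach", "recognition"]
for i in detect_errors(tokens):
    tokens[i] = best_candidate(tokens, i)
print(tokens)  # ['automatic', 'speech', 'recognition']
```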

* Journal of Computing, Vol.4, No.1, January 2012 
* LACSC - Lebanese Association for Computational Sciences - http://www.lacsc.org 

