
"speech": models, code, and papers

Understanding and Detecting Dangerous Speech in Social Media

May 04, 2020
Ali Alshehri, El Moatez Billah Nagoudi, Muhammad Abdul-Mageed

Social media communication has become a significant part of daily activity in modern societies. For this reason, ensuring safety on social media platforms is a necessity. The use of dangerous language, such as physical threats, in online environments is somewhat rare, yet remains a highly important problem. Although several works have addressed the related issues of detecting offensive and hateful language, dangerous speech has not previously been treated in any significant way. Motivated by these observations, we report our efforts to build a labeled dataset for dangerous speech. We also exploit our dataset to develop highly effective models to detect dangerous content. Our best model performs at 59.60% macro F1, significantly outperforming a competitive baseline.

* 9 pages 
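
The abstract reports performance in macro F1, which weighs the rare "dangerous" class as heavily as the majority class. As a hedged illustration only (the labels and predictions below are made-up placeholders, not data or code from the paper), this is how such a score is typically computed:

```python
# Minimal sketch of computing macro F1 for a binary dangerous-speech
# classifier (illustrative only; labels/predictions are made up).
from sklearn.metrics import f1_score

y_true = [1, 0, 0, 1, 0, 0, 0, 1]   # 1 = dangerous, 0 = not dangerous
y_pred = [1, 0, 0, 0, 0, 1, 0, 1]

# Macro F1 averages the per-class F1 scores, so the rare positive class
# counts as much as the majority class -- useful when positives are rare.
print(f1_score(y_true, y_pred, average="macro"))
```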


ACVAE-VC: Non-parallel many-to-many voice conversion with auxiliary classifier variational autoencoder

Aug 26, 2018
Hirokazu Kameoka, Takuhiro Kaneko, Kou Tanaka, Nobukatsu Hojo

This paper proposes a non-parallel many-to-many voice conversion (VC) method using a variant of the conditional variational autoencoder (VAE) called an auxiliary classifier VAE (ACVAE). The proposed method has three key features. First, it adopts fully convolutional architectures to construct the encoder and decoder networks so that the networks can learn conversion rules that capture time dependencies in the acoustic feature sequences of source and target speech. Second, it uses an information-theoretic regularization for the model training to ensure that the information in the attribute class label will not be lost in the conversion process. With regular CVAEs, the encoder and decoder are free to ignore the attribute class label input. This can be problematic since in such a situation, the attribute class label will have little effect on controlling the voice characteristics of input speech at test time. Such situations can be avoided by introducing an auxiliary classifier and training the encoder and decoder so that the attribute classes of the decoder outputs are correctly predicted by the classifier. Third, it avoids producing buzzy-sounding speech at test time by simply transplanting the spectral details of the input speech into its converted version. Subjective evaluation experiments revealed that this simple method worked reasonably well in a non-parallel many-to-many speaker identity conversion task.

* arXiv admin note: substantial text overlap with arXiv:1806.02169 
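
The auxiliary-classifier idea described above can be sketched as an extra cross-entropy term: the decoder output is fed to a classifier so that the attribute class label remains recoverable after conversion. The following is a minimal sketch under assumed module interfaces and shapes, not the authors' implementation:

```python
# Sketch of the auxiliary-classifier regularization for an ACVAE
# (PyTorch; encoder/decoder/classifier are assumed placeholder modules,
# not the authors' exact architecture).
import torch
import torch.nn.functional as F

def acvae_loss(encoder, decoder, classifier, x, y):
    """x: acoustic feature sequence, y: attribute class labels."""
    mu, logvar = encoder(x, y)                        # conditional encoder
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
    x_hat = decoder(z, y)                             # conditional decoder

    recon = F.mse_loss(x_hat, x)                      # reconstruction term
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())

    # Auxiliary classifier term: the attribute class must be predictable
    # from the decoder output, so the label cannot simply be ignored.
    aux = F.cross_entropy(classifier(x_hat), y)

    return recon + kl + aux
```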


Turkish Presidential Elections TRT Publicity Speech Facial Expression Analysis

Aug 15, 2014
H. Emrah Tasli, Paul Ivan

In this paper, facial expressions of the three Turkish presidential candidates Demirtas, Erdogan and Ihsanoglu (in alphabetical order) are analyzed during the publicity speeches featured at TRT (Turkish Radio and Television) on 03.08.2014. FaceReader is used for the analysis, where 3D modeling of the face is achieved using active appearance models (AAM). Over 500 landmark points are tracked and analyzed to obtain the facial expressions throughout each speech. All source videos and the data are publicly available for research purposes.

* 2 pages, 3 figures 


A Survey of Online Hate Speech through the Causal Lens

Sep 16, 2021
Antigoni-Maria Founta, Lucia Specia

The societal issue of digital hostility has previously attracted a lot of attention. The topic has amassed an ample body of literature, yet remains as prominent and challenging as ever due to its subjective nature. We posit that a better understanding of this problem will require the use of causal inference frameworks. This survey summarises the relevant research that revolves around estimations of causal effects related to online hate speech. Initially, we argue why re-establishing the exploration of hate speech in causal terms is essential. Following that, we give an overview of the leading studies, classified with respect to the direction of their outcomes, as well as an outline of all related research and a summary of open research problems that can influence future work on the topic.

* Accepted to CI+NLP: First Workshop on Causal Inference and NLP, part of EMNLP 2021 
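
To make the causal framing concrete, one generic estimator that such studies rely on is inverse propensity weighting for an average treatment effect. The sketch below is a hedged, purely illustrative example; the variables (exposure, outcome, covariates) are hypothetical placeholders, not data from any surveyed study:

```python
# Generic inverse-propensity-weighting (IPW) estimate of an average
# treatment effect, illustrating the causal framing discussed above.
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_ate(X, t, y):
    """X: covariates, t: binary treatment (e.g. exposure to hate speech), y: outcome."""
    propensity = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]
    # Hajek-style weighted means of the outcome under treatment and control.
    treated = np.sum(t * y / propensity) / np.sum(t / propensity)
    control = np.sum((1 - t) * y / (1 - propensity)) / np.sum((1 - t) / (1 - propensity))
    return treated - control
```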


Neural Speech Synthesis on a Shoestring: Improving the Efficiency of LPCNet

Feb 22, 2022
Jean-Marc Valin, Umut Isik, Paris Smaragdis, Arvindh Krishnaswamy

Neural speech synthesis models can synthesize high quality speech but typically require a high computational complexity to do so. In previous work, we introduced LPCNet, which uses linear prediction to significantly reduce the complexity of neural synthesis. In this work, we further improve the efficiency of LPCNet -- targeting both algorithmic and computational improvements -- to make it usable on a wide variety of devices. We demonstrate an improvement in synthesis quality while operating 2.5x faster. The resulting open-source LPCNet algorithm can perform real-time neural synthesis on most existing phones and is even usable in some embedded devices.

* Accepted for ICASSP 2022, 5 pages 
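
LPCNet's efficiency comes from letting classical linear prediction carry part of the synthesis burden. As a hedged sketch of plain LPC itself (not the LPCNet implementation), the coefficients can be estimated from the autocorrelation sequence and each sample predicted from the preceding ones:

```python
# Sketch of linear prediction, the classical component LPCNet leans on
# (illustrative only; LPCNet's actual implementation differs).
import numpy as np
from scipy.linalg import solve_toeplitz

def lpc_coefficients(x, order=16):
    """Autocorrelation-method LPC coefficients for a 1-D signal frame."""
    r = np.correlate(x, x, mode="full")[len(x) - 1:]
    # Solve the symmetric Toeplitz normal equations R a = r.
    return solve_toeplitz((r[:order], r[:order]), r[1:order + 1])

def lpc_predict(x, a):
    """Predict x[n] as a linear combination of the previous len(a) samples."""
    order = len(a)
    pred = np.zeros(len(x))
    for n in range(order, len(x)):
        pred[n] = np.dot(a, x[n - 1::-1][:order])   # x[n-1], x[n-2], ...
    return pred
```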


Speaker Normalization for Self-supervised Speech Emotion Recognition

Feb 02, 2022
Itai Gat, Hagai Aronowitz, Weizhong Zhu, Edmilson Morais, Ron Hoory

Large speech emotion recognition datasets are hard to obtain, and small datasets may contain biases. Deep-net-based classifiers, in turn, are prone to exploit those biases and find shortcuts such as speaker characteristics. These shortcuts usually harm a model's ability to generalize. To address this challenge, we propose a gradient-based adversary learning framework that learns a speech emotion recognition task while normalizing speaker characteristics from the feature representation. We demonstrate the efficacy of our method on both speaker-independent and speaker-dependent settings and obtain new state-of-the-art results on the challenging IEMOCAP dataset.
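
One common realization of gradient-based adversary learning for removing speaker information is a gradient reversal layer in front of a speaker classifier. The sketch below illustrates that general idea and is not necessarily the authors' exact formulation:

```python
# Gradient reversal layer (GRL) sketch for adversarial speaker
# normalization; a common technique, not necessarily the paper's exact method.
import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient so the encoder learns features
        # from which the speaker cannot be predicted.
        return -ctx.lam * grad_output, None

def adversarial_losses(encoder, emotion_head, speaker_head, x, emotion, speaker, lam=1.0):
    h = encoder(x)
    emo_loss = F.cross_entropy(emotion_head(h), emotion)            # main task
    spk_loss = F.cross_entropy(speaker_head(GradReverse.apply(h, lam)), speaker)
    return emo_loss + spk_loss
```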



Building a Test Collection for Speech-Driven Web Retrieval

Sep 12, 2003
Atsushi Fujii, Katunobu Itou

This paper describes a test collection (benchmark data) for retrieval systems driven by spoken queries. This collection was produced in the subtask of the NTCIR-3 Web retrieval task, which was performed in a TREC-style evaluation workshop. The search topics and document collection for the Web retrieval task were used to produce spoken queries and language models for speech recognition, respectively. We used this collection to evaluate the performance of our retrieval system. Experimental results showed that (a) the use of target documents for language modeling and (b) enhancement of the vocabulary size in speech recognition were effective in improving the system performance.

* Proceedings of the 8th European Conference on Speech Communication and Technology (Eurospeech 2003), pp.1153-1156, Sep. 2003 
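
Finding (a), that the target document collection helps language modeling, can be illustrated with a minimal bigram model estimated from the collection's text; a larger collection also enlarges the recognizer's vocabulary, as in finding (b). This is a generic sketch, not the NTCIR-3 setup:

```python
# Minimal bigram language model built from target documents, as a rough
# illustration of findings (a) and (b) above (generic sketch only).
from collections import Counter, defaultdict

def train_bigram_lm(documents):
    unigrams, bigrams = Counter(), defaultdict(Counter)
    for doc in documents:
        tokens = ["<s>"] + doc.split() + ["</s>"]
        unigrams.update(tokens)
        for prev, cur in zip(tokens, tokens[1:]):
            bigrams[prev][cur] += 1
    vocab = set(unigrams)   # vocabulary grows with the document collection

    def prob(prev, cur, alpha=0.1):
        # Add-alpha smoothed bigram probability over the derived vocabulary.
        return (bigrams[prev][cur] + alpha) / (unigrams[prev] + alpha * len(vocab))

    return prob, vocab
```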


On the Use of External Data for Spoken Named Entity Recognition

Dec 14, 2021
Ankita Pasad, Felix Wu, Suwon Shon, Karen Livescu, Kyu J. Han

Spoken language understanding (SLU) tasks involve mapping from speech audio signals to semantic labels. Given the complexity of such tasks, good performance might be expected to require large labeled datasets, which are difficult to collect for each new task and domain. However, recent advances in self-supervised speech representations have made it feasible to consider learning SLU models with limited labeled data. In this work we focus on low-resource spoken named entity recognition (NER) and address the question: Beyond self-supervised pre-training, how can we use external speech and/or text data that are not annotated for the task? We draw on a variety of approaches, including self-training, knowledge distillation, and transfer learning, and consider their applicability to both end-to-end models and pipeline (speech recognition followed by text NER model) approaches. We find that several of these approaches improve performance in resource-constrained settings beyond the benefits from pre-trained representations alone. Compared to prior work, we find improved F1 scores of up to 16%. While the best baseline model is a pipeline approach, the best performance when using external data is ultimately achieved by an end-to-end model. We provide detailed comparisons and analyses, showing for example that end-to-end models are able to focus on the more NER-specific words.
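
Of the approaches listed, self-training is the easiest to sketch: a model trained on the small labeled set pseudo-labels external unlabeled speech, and confident predictions are folded back into training. The helper functions below are hypothetical placeholders, not APIs from the paper:

```python
# Generic self-training loop for low-resource spoken NER, sketching one of
# the approaches mentioned above. train_model / predict_with_confidence
# are hypothetical placeholders, not the paper's code.
def self_train(labeled, unlabeled, rounds=3, threshold=0.9):
    model = train_model(labeled)
    for _ in range(rounds):
        pseudo = []
        for utterance in unlabeled:
            tags, confidence = predict_with_confidence(model, utterance)
            if confidence >= threshold:            # keep only confident pseudo-labels
                pseudo.append((utterance, tags))
        model = train_model(labeled + pseudo)      # retrain on gold + pseudo labels
    return model
```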



Impact of Naturalistic Field Acoustic Environments on Forensic Text-independent Speaker Verification System

Jan 28, 2022
Zhenyu Wang, John H. L. Hansen

Audio analysis for forensic speaker verification offers unique challenges in system performance, due in part to data collected in naturalistic field acoustic environments where location/scenario uncertainty is common in the forensic data collection process. Forensic speech data, as potential evidence, can be obtained in random naturalistic environments, resulting in variable data quality. Speech samples may include variability due to vocal effort, such as yelling over 911 emergency calls, whereas others might contain whispered or situationally stressed speech from a field location or interview room. Such speech variability consists of intrinsic and extrinsic characteristics and makes forensic speaker verification a complicated and daunting task. Extrinsic properties include recording equipment such as microphone type and placement, ambient noise, room configuration including reverberation, and other environmental scenario-based issues. Some factors, such as noise and non-target speech, will impact verification system performance by their mere presence. To investigate the impact of field acoustic environments, we performed a speaker verification study based on the CRSS-Forensic corpus with audio collected from 8 field locations including police interviews. This investigation includes an analysis of the impact of seven unseen acoustic environments on speaker verification system performance using an x-Vector system.

* IAFPA-2021-International Association for Forensic Phonetics and Acoustics 
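
With an x-vector system, the verification decision ultimately reduces to comparing an enrollment embedding with a test embedding. The sketch below uses simple cosine scoring, a common back-end shown purely for illustration; the study's actual back-end (e.g. PLDA) may differ:

```python
# Cosine-similarity scoring of two x-vector embeddings, a common
# speaker-verification back-end (generic sketch; threshold is arbitrary).
import numpy as np

def verify(enroll_xvec, test_xvec, threshold=0.5):
    score = np.dot(enroll_xvec, test_xvec) / (
        np.linalg.norm(enroll_xvec) * np.linalg.norm(test_xvec))
    # Accept the same-speaker hypothesis if the score clears the threshold.
    return score, score >= threshold
```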


CUHK-EE Voice Cloning System for ICASSP 2021 M2VoC Challenge

Apr 03, 2021
Daxin Tan, Hingpang Huang, Guangyan Zhang, Tan Lee

This paper presents the CUHK-EE voice cloning system for the ICASSP 2021 M2VoC challenge. The challenge provides two Mandarin speech corpora: the AIShell-3 corpus of 218 speakers with noise and reverberation, and the MST corpus including high-quality speech of one male and one female speaker. 100 and 5 utterances of 3 target speakers in different voices and styles are provided in tracks 1 and 2 respectively, and the participants are required to synthesize speech in the target speaker's voice and style. We take part in track 1 and carry out voice cloning based on 100 utterances of the target speakers. An end-to-end voice cloning system is developed to accomplish the task, which includes: 1. a text and speech front-end module with the help of forced alignment, 2. an acoustic model combining Tacotron2 and DurIAN to predict mel-spectrograms, 3. a Hifigan vocoder for waveform generation. Our system comprises three stages: a multi-speaker training stage, a target speaker adaptation stage and a target speaker synthesis stage. Our team is identified as T17. The subjective evaluation results provided by the challenge organizer demonstrate the effectiveness of our system. Audio samples are available at our demo page: https://daxintan-cuhk.github.io/CUHK-EE-system-M2VoC-challenge/ .
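
The three modules described above chain together as text → front-end → mel-spectrogram → waveform. The following is a minimal structural sketch with placeholder components, not the CUHK-EE team's actual code:

```python
# Structural sketch of the three-module synthesis pipeline described above;
# front_end / acoustic_model / vocoder are placeholders, not the team's code.
def synthesize(text, target_speaker, front_end, acoustic_model, vocoder):
    # 1. Text/speech front-end (forced alignment informs durations during training).
    phonemes = front_end(text)
    # 2. Acoustic model (Tacotron2 + DurIAN style) predicts a mel-spectrogram,
    #    conditioned on the adapted target-speaker identity.
    mel = acoustic_model(phonemes, speaker=target_speaker)
    # 3. Neural vocoder converts the mel-spectrogram to a waveform.
    return vocoder(mel)
```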


