Irina Illina

LORIA
SAMbA: Speech enhancement with Asynchronous ad-hoc Microphone Arrays

Jul 31, 2023
Nicolas Furnon, Romain Serizel, Slim Essid, Irina Illina

Figures 1-4 for SAMbA: Speech enhancement with Asynchronous ad-hoc Microphone Arrays

Speech enhancement in ad-hoc microphone arrays is often hindered by the asynchronization of the devices composing the array. Asynchronization comes from the sampling time offset and the sampling rate offset that inevitably occur when the microphones are embedded in different hardware components. In this paper, we propose a deep neural network (DNN)-based speech enhancement solution that is suited to ad-hoc microphone arrays because it is distributed and copes with asynchronization. We show that asynchronization has a limited impact on the spatial filtering and mostly affects the performance of the DNNs. Instead of resynchronizing the signals, which requires costly processing steps, we use an attention mechanism that makes the DNNs, and thus our whole pipeline, robust to asynchronization. We also show that the attention mechanism recovers the asynchronization parameters in an unsupervised manner.

* Submitted to INTERSPEECH 2022 
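The channel-weighting idea behind the pipeline can be sketched as a softmax attention over per-device relevance scores. This is only a rough illustration, not the authors' model: the scores here are plain numbers standing in for what a learned attention module would produce.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(channel_feats, scores):
    """Weighted combination of per-channel feature vectors.

    channel_feats: one feature vector per device in the array.
    scores: one relevance score per device (in the paper these would be
            produced by a learned attention module).
    """
    weights = softmax(scores)
    dim = len(channel_feats[0])
    return [sum(w * feats[d] for w, feats in zip(weights, channel_feats))
            for d in range(dim)]
```

With a strongly dominant score, the combined features collapse onto the favored channel, which is how an attention module can downweight a badly asynchronized device without resynchronizing it.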

Transferring Knowledge via Neighborhood-Aware Optimal Transport for Low-Resource Hate Speech Detection

Oct 17, 2022
Tulika Bose, Irina Illina, Dominique Fohr

Figures 1-4 for Transferring Knowledge via Neighborhood-Aware Optimal Transport for Low-Resource Hate Speech Detection

The concerning rise of hateful content on online platforms has increased attention to automatic hate speech detection, commonly formulated as a supervised classification task. State-of-the-art deep learning-based approaches usually require a substantial amount of labeled resources for training. However, annotating hate speech resources is expensive, time-consuming, and often harmful to the annotators. This creates a pressing need to transfer knowledge from existing labeled resources to low-resource hate speech corpora with the goal of improving system performance. For this, neighborhood-based frameworks have been shown to be effective. However, they have limited flexibility. In this paper, we propose a novel training strategy that allows flexible modeling of the relative proximity of neighbors retrieved from a resource-rich corpus to learn the amount of transfer. In particular, we incorporate neighborhood information with Optimal Transport, which permits exploiting the geometry of the data embedding space. By aligning the joint embedding and label distributions of neighbors, we demonstrate substantial improvements over strong baselines, in low-resource scenarios, on different publicly available hate speech corpora.

* AACL-IJCNLP 2022 preprint 
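The Optimal Transport component can be illustrated with entropy-regularized Sinkhorn iterations, the standard way to compute a soft transport plan between two point sets. This is a generic sketch of the tool, not the paper's neighborhood-aware formulation; the cost matrix and marginals below are toy placeholders.

```python
import math

def sinkhorn_plan(cost, a, b, reg=0.1, iters=500):
    """Entropy-regularized optimal transport via Sinkhorn iterations.

    cost: pairwise cost matrix between source and target points.
    a, b: marginal weights over the two point sets (each summing to 1).
    Returns the transport plan matrix.
    """
    K = [[math.exp(-c / reg) for c in row] for row in cost]
    u = [1.0] * len(a)
    v = [1.0] * len(b)
    for _ in range(iters):
        u = [a[i] / sum(K[i][j] * v[j] for j in range(len(b)))
             for i in range(len(a))]
        v = [b[j] / sum(K[i][j] * u[i] for i in range(len(a)))
             for j in range(len(b))]
    return [[u[i] * K[i][j] * v[j] for j in range(len(b))]
            for i in range(len(a))]
```

When two points are cheap to pair (low cost), the plan concentrates mass on that pairing, which is the mechanism the paper exploits to align neighbor embeddings and labels across corpora.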

Domain Classification-based Source-specific Term Penalization for Domain Adaptation in Hate-speech Detection

Sep 18, 2022
Tulika Bose, Nikolaos Aletras, Irina Illina, Dominique Fohr

Figures 1-4 for Domain Classification-based Source-specific Term Penalization for Domain Adaptation in Hate-speech Detection

State-of-the-art approaches for hate-speech detection usually exhibit poor performance in out-of-domain settings. This typically occurs because classifiers overemphasize source-specific information, which negatively impacts their domain invariance. Prior work has attempted to penalize terms related to hate-speech from manually curated lists using feature attribution methods, which quantify the importance the classifier assigns to input terms when making a prediction. We, instead, propose a domain adaptation approach that automatically extracts and penalizes source-specific terms using a domain classifier, which learns to differentiate between domains, together with feature-attribution scores for hate-speech classes, yielding consistent improvements in cross-domain evaluation.

* COLING 2022 pre-print 
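The two-step idea, pick out terms a domain classifier ties to the source domain, then penalize the hate-speech classifier's reliance on them, can be sketched as follows. This is an illustrative reduction, not the paper's method; the threshold and penalty form are assumptions made for the example.

```python
def source_specific_terms(domain_scores, threshold=0.8):
    """Terms a domain classifier confidently attributes to the source domain.

    domain_scores: term -> probability of belonging to the source domain.
    The 0.8 threshold is an illustrative choice, not from the paper.
    """
    return {t for t, s in domain_scores.items() if s >= threshold}

def attribution_penalty(attributions, terms, weight=1.0):
    """Extra loss term: squared attribution mass on source-specific terms,
    pushing the classifier away from relying on them."""
    return weight * sum(attributions.get(t, 0.0) ** 2 for t in terms)
```

A term that is both highly domain-indicative and highly attributed contributes a large penalty, so training is steered toward more domain-invariant evidence.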

Placing M-Phasis on the Plurality of Hate: A Feature-Based Corpus of Hate Online

Apr 28, 2022
Dana Ruiter, Liane Reiners, Ashwin Geet D'Sa, Thomas Kleinbauer, Dominique Fohr, Irina Illina, Dietrich Klakow, Christian Schemer, Angeliki Monnier

Figures 1-4 for Placing M-Phasis on the Plurality of Hate: A Feature-Based Corpus of Hate Online

Even though hate speech (HS) online has been an important object of research in the last decade, most HS-related corpora over-simplify the phenomenon of hate by attempting to label user comments as "hate" or "neutral". This ignores the complex and subjective nature of HS, which limits the real-life applicability of classifiers trained on these corpora. In this study, we present the M-Phasis corpus, a corpus of ~9k German and French user comments collected from migration-related news articles. It goes beyond the "hate"-"neutral" dichotomy and is instead annotated with 23 features, which in combination become descriptors of various types of speech, ranging from critical comments to implicit and explicit expressions of hate. The annotations are performed by 4 native speakers per language and achieve high inter-annotator agreement (0.77 <= k <= 1). Besides describing the corpus creation and presenting insights from a content, error and domain analysis, we explore its data characteristics by training several classification baselines.

* 14 pages, 4 figures, accepted at LREC 2022 (Full Paper) 
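The reported agreement range (0.77 <= k <= 1) refers to kappa-style chance-corrected agreement. As a minimal sketch of what such a score measures, here is Cohen's kappa for two annotators; the corpus itself uses four annotators per language, so this is only the two-rater special case.

```python
def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa agreement between two annotators' label sequences."""
    n = len(labels_a)
    observed = sum(x == y for x, y in zip(labels_a, labels_b)) / n
    cats = set(labels_a) | set(labels_b)
    expected = sum((labels_a.count(c) / n) * (labels_b.count(c) / n)
                   for c in cats)
    if expected == 1.0:  # both annotators constant and identical
        return 1.0
    return (observed - expected) / (1.0 - expected)
```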

Dynamically Refined Regularization for Improving Cross-corpora Hate Speech Detection

Mar 23, 2022
Tulika Bose, Nikolaos Aletras, Irina Illina, Dominique Fohr

Figures 1-4 for Dynamically Refined Regularization for Improving Cross-corpora Hate Speech Detection

Hate speech classifiers exhibit substantial performance degradation when evaluated on datasets different from the source. This is due to the classifier learning spurious correlations between hate speech labels from the training corpus and words that are not necessarily relevant to hateful language. Previous work has attempted to mitigate this problem by regularizing specific terms from pre-defined static dictionaries. While this has been demonstrated to improve the generalizability of classifiers, the coverage of such methods is limited and the dictionaries require regular manual updates from human experts. In this paper, we propose to automatically identify and reduce spurious correlations using attribution methods with dynamic refinement of the list of terms that need to be regularized during training. Our approach is flexible and improves the cross-corpora performance over previous work independently and in combination with pre-defined dictionaries.

* Findings of ACL 2022 preprint 
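The "dynamic refinement" loop can be sketched as: after each epoch, re-rank terms by their current attribution and regularize the top ones in the next epoch. This is a schematic reduction under assumed choices (top-k selection, squared-attribution penalty), not the paper's exact criterion.

```python
def refresh_regularized_terms(mean_attributions, k=2):
    """Re-rank terms by their current mean attribution and keep the top k
    for regularization in the next epoch (k is an illustrative choice)."""
    ranked = sorted(mean_attributions, key=mean_attributions.get, reverse=True)
    return ranked[:k]

def regularized_loss(task_loss, attributions, terms, lam=0.1):
    """Task loss plus a penalty on the attributions of the refreshed terms."""
    return task_loss + lam * sum(attributions.get(t, 0.0) ** 2 for t in terms)
```

Because the term list is recomputed from the model's own attributions, coverage adapts as training progresses, avoiding the stale, manually curated dictionaries the paper criticizes.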

Attention-based distributed speech enhancement for unconstrained microphone arrays with varying number of nodes

Jun 15, 2021
Nicolas Furnon, Romain Serizel, Slim Essid, Irina Illina

Figures 1-4 for Attention-based distributed speech enhancement for unconstrained microphone arrays with varying number of nodes

Speech enhancement promises higher efficiency in ad-hoc microphone arrays than in constrained microphone arrays thanks to the wide spatial coverage of the devices in the acoustic scene. However, speech enhancement in ad-hoc microphone arrays still raises many challenges. In particular, the algorithms should be able to handle a variable number of microphones, as some devices in the array might appear or disappear. In this paper, we propose a solution that can efficiently process the spatial information captured by the different devices of the microphone array, while being robust to a link failure. To do this, we use an attention mechanism in order to put more weight on the relevant signals sent throughout the array and to neglect the redundant or empty channels.

* European Signal Processing Conference (EUSIPCO), IEEE, Aug 2021, Dublin, Ireland  
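The robustness to appearing and disappearing devices can be illustrated as a masked softmax: channels that have dropped out receive exactly zero weight, and the remaining weights renormalize. This is a sketch of the mechanism only; the paper's attention scores come from a learned module.

```python
import math

def masked_attention_weights(scores, alive):
    """Softmax attention that assigns zero weight to missing channels.

    alive: one boolean per channel; False models a device that has
    dropped out of the array (link failure or departure).
    """
    exps = [math.exp(s) if ok else 0.0 for s, ok in zip(scores, alive)]
    total = sum(exps)
    return [e / total for e in exps]
```

The surviving channels absorb the lost channel's weight automatically, so the downstream combination needs no architectural change when the number of nodes varies.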

Improving Automatic Hate Speech Detection with Multiword Expression Features

Jun 01, 2021
Nicolas Zampieri, Irina Illina, Dominique Fohr

Figures 1-4 for Improving Automatic Hate Speech Detection with Multiword Expression Features

The task of automatically detecting hate speech in social media is attracting increasing attention. Given the enormous volume of content posted daily, human monitoring of hate speech is unfeasible. In this work, we propose new word-level features for automatic hate speech detection (HSD): multiword expressions (MWEs). MWEs are lexical units larger than a single word that have idiomatic and compositional meanings. We propose to integrate MWE features in a deep neural network-based HSD framework. Our baseline HSD system relies on Universal Sentence Encoder (USE). To incorporate MWE features, we create a three-branch deep neural network: one branch for USE, one for MWE categories, and one for MWE embeddings. We conduct experiments on two hate speech tweet corpora with different MWE categories and with two types of MWE embeddings, word2vec and BERT. Our experiments demonstrate that the proposed HSD system with MWE features significantly outperforms the baseline system in terms of macro-F1.

* In Proceedings of NLDB 2021 
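The three-branch design boils down to late fusion: each branch produces a representation, and the representations are concatenated before classification. The sketch below shows only that fusion step with a stand-in linear layer; the actual branch encoders (USE, MWE categories, MWE embeddings) are not reproduced here.

```python
def fuse_branches(use_vec, mwe_cat_vec, mwe_emb_vec):
    """Late fusion of the three branch outputs by concatenation."""
    return list(use_vec) + list(mwe_cat_vec) + list(mwe_emb_vec)

def linear_classifier(fused, weights, bias=0.0):
    """A stand-in for the network's final classification layer."""
    return sum(x * w for x, w in zip(fused, weights)) + bias
```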

DNN-Based Semantic Model for Rescoring N-best Speech Recognition List

Nov 02, 2020
Dominique Fohr, Irina Illina

Figures 1-4 for DNN-Based Semantic Model for Rescoring N-best Speech Recognition List

The word error rate (WER) of an automatic speech recognition (ASR) system increases when a mismatch occurs between the training and the testing conditions, for example due to noise. In this case, the acoustic information can be less reliable. This work aims to improve ASR by modeling long-term semantic relations to compensate for distorted acoustic features. We propose to perform this through rescoring of the ASR N-best hypotheses list. To achieve this, we train a deep neural network (DNN). Our DNN rescoring model is aimed at selecting hypotheses that have better semantic consistency and therefore lower WER. We investigate two types of representations as part of the input features to our DNN model: static word embeddings (from word2vec) and dynamic contextual embeddings (from BERT). Acoustic and linguistic features are also included. We perform experiments on the publicly available dataset TED-LIUM mixed with real noise. The proposed rescoring approaches give significant WER improvements over the ASR system without rescoring, in two noisy conditions and with both n-gram and RNN language models.
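The rescoring step itself is a re-ranking of the N-best list under a combined score. The sketch below interpolates the ASR score with a semantic-consistency score; the interpolation weight and the toy semantic scorer are assumptions for illustration, standing in for the paper's trained DNN.

```python
def rescore_nbest(hypotheses, semantic_score, alpha=0.5):
    """Re-rank N-best ASR hypotheses with a semantic-consistency score.

    hypotheses: list of dicts with "text" and "asr_score" (higher = better).
    semantic_score: stand-in for the DNN rescoring model's output.
    alpha: interpolation weight (illustrative, not from the paper).
    """
    def combined(h):
        return (1 - alpha) * h["asr_score"] + alpha * semantic_score(h["text"])
    return max(hypotheses, key=combined)
```

When the acoustics are unreliable, a semantically consistent hypothesis can overtake one with a slightly better ASR score, which is exactly the effect the paper targets.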


DNN-Based Distributed Multichannel Mask Estimation for Speech Enhancement in Microphone Arrays

Mar 16, 2020
Nicolas Furnon, Romain Serizel, Irina Illina, Slim Essid

Figures 1-4 for DNN-Based Distributed Multichannel Mask Estimation for Speech Enhancement in Microphone Arrays

Multichannel processing is widely used for speech enhancement, but several limitations appear when deploying these solutions in the real world. Distributed sensor arrays that consist of several devices, each with a few microphones, are a viable alternative that exploits the many microphone-equipped devices we use in everyday life. In this context, we propose to extend the distributed adaptive node-specific signal estimation approach to a neural network framework. At each node, local filtering is performed to send one signal to the other nodes, where a mask is estimated by a neural network in order to compute a global multichannel Wiener filter. In an array of two nodes, we show that this additional signal can be efficiently taken into account to predict the masks, and leads to better speech enhancement performance than when the mask estimation relies only on the local signals.

* International Conference on Acoustics, Speech and Signal Processing (ICASSP), May 2020, Barcelona, Spain  
* Submitted to ICASSP 2020 
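The role of the estimated mask can be illustrated in a deliberately simplified single-channel, single-frequency form: the mask softly splits observed power into speech and noise estimates, from which a Wiener gain follows. The multichannel filter in the paper replaces these scalar powers with spatial covariance matrices; this sketch only shows the mask-to-gain idea.

```python
def wiener_gain(frame_powers, masks):
    """Single-channel, single-frequency Wiener gain from a soft mask.

    frame_powers: observed power of one frequency bin over time frames.
    masks: mask values in [0, 1], one per frame (1 = speech-dominated).
    """
    speech = sum(m * p for m, p in zip(masks, frame_powers))
    noise = sum((1.0 - m) * p for m, p in zip(masks, frame_powers))
    return speech / (speech + noise)
```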

Towards non-toxic landscapes: Automatic toxic comment detection using DNN

Nov 19, 2019
Ashwin Geet D'Sa, Irina Illina, Dominique Fohr

Figures 1-4 for Towards non-toxic landscapes: Automatic toxic comment detection using DNN

The spectacular expansion of the Internet has led to a new research problem in the natural language processing field: automatic toxic comment detection, since many countries prohibit hate speech in public media. There is no clear and formal definition of hate, offensive, toxic, and abusive speech. In this article, we put all these terms under the "umbrella" of toxic speech. The contribution of this paper is the design of binary classification and regression-based approaches aiming to predict whether a comment is toxic or not. We compare different unsupervised word representations and different DNN classifiers. Moreover, we study the robustness of the proposed approaches to adversarial attacks that add one (healthy or toxic) word. We evaluate the proposed methodology on the English Wikipedia Detox corpus. Our experiments show that BERT fine-tuning outperforms feature-based BERT, word2vec, and fastText representations with different DNN classifiers.
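The one-word adversarial attack studied here can be sketched as a brute-force probe: append each candidate word and record which ones flip the classifier's decision. The toy keyword classifier in the usage note is a stand-in; the paper attacks trained DNN classifiers.

```python
def one_word_attacks(comment, classify, candidates):
    """Find single appended words that flip the classifier's decision,
    mimicking a one-word (healthy or toxic) adversarial attack.

    classify: any callable mapping a text to a label.
    candidates: words to try appending to the comment.
    """
    base = classify(comment)
    return [w for w in candidates if classify(comment + " " + w) != base]
```

A robust system should yield an empty list for healthy insertions into toxic comments (and vice versa), which is the property the paper's robustness study probes.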
