
"speech": models, code, and papers

Identifying Offensive Expressions of Opinion in Context

Apr 25, 2021
Francielle Alves Vargas, Isabelle Carvalho, Fabiana Rodrigues de Góes

Classic information extraction techniques consist of building questions and answers about facts; it remains a challenge for subjective information extraction systems to identify opinions and feelings in context. In sentiment-based NLP tasks, there are few resources for information extraction, above all for offensive or hateful opinions in context. To fill this important gap, this short paper provides a new cross-lingual and contextual offensive lexicon, consisting of explicit and implicit offensive and swearing expressions of opinion annotated in two different classes: context-dependent and context-independent offensive. In addition, we provide markers to identify hate speech. The annotation approach was evaluated at the expression level and achieves high human inter-annotator agreement. The offensive lexicon is available in Portuguese and English.
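As a rough illustration of how a two-class offensive lexicon like this could be consumed downstream, here is a minimal sketch; the entries and class names below are invented placeholders, not taken from the released resource:

```python
# Minimal sketch of applying a two-class offensive lexicon.
# The entries below are invented placeholders, not the released resource.
CONTEXT_INDEPENDENT = {"idiot", "moron"}   # offensive in any context
CONTEXT_DEPENDENT = {"pig", "trash"}       # offensive only in some contexts

def scan(tokens):
    """Return (token, class) pairs for lexicon hits in a tokenized sentence."""
    hits = []
    for tok in tokens:
        low = tok.lower()
        if low in CONTEXT_INDEPENDENT:
            hits.append((tok, "context-independent"))
        elif low in CONTEXT_DEPENDENT:
            # These need a contextual classifier to confirm offensiveness.
            hits.append((tok, "context-dependent"))
    return hits

print(scan("You are such a pig".split()))
# [('pig', 'context-dependent')]
```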



TypeShift: A User Interface for Visualizing the Typing Production Process

Mar 07, 2021
Adam Goodkind

TypeShift is a tool for visualizing linguistic patterns in the timing of typing production. Language production is a complex process which draws on linguistic, cognitive and motor skills. By visualizing holistic trends in the typing process, TypeShift aims to elucidate the often noisy information signals that are used to represent typing patterns, both at the word level and character level. It accomplishes this by enabling a researcher to compare and contrast specific linguistic phenomena, and to compare an individual typing session to multiple group averages. Finally, although TypeShift was originally designed for typing data, it can easily be adapted to accommodate speech data as well. A web demo is available at https://angoodkind.shinyapps.io/TypeShift/. The source code can be accessed at https://github.com/angoodkind/TypeShift.
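For readers unfamiliar with typing-production signals, the sketch below shows the kind of character- and word-level timing features (inter-keystroke intervals) a tool like TypeShift visualizes; the log format is an assumption for illustration:

```python
# Sketch: derive character- and word-level timing signals from a keystroke log.
# The log format (character, timestamp_ms) is an assumption for illustration.
log = [("h", 0), ("i", 120), (" ", 310), ("y", 450), ("o", 540), ("u", 660)]

# Character level: inter-keystroke intervals (IKIs) in milliseconds.
ikis = [t2 - t1 for (_, t1), (_, t2) in zip(log, log[1:])]

# Word level: mean IKI within each whitespace-delimited word.
words, current = [], []
for (ch, _), iki in zip(log[1:], ikis):
    if ch == " ":
        if current:
            words.append(sum(current) / len(current))
        current = []
    else:
        current.append(iki)
if current:
    words.append(sum(current) / len(current))

print(ikis)   # [120, 190, 140, 90, 120]
print(words)  # per-word mean IKI: [120.0, 116.66...]
```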

* 7 pages, 14 figures 


Rissanen Data Analysis: Examining Dataset Characteristics via Description Length

Mar 05, 2021
Ethan Perez, Douwe Kiela, Kyunghyun Cho

We introduce a method to determine whether a certain capability helps to achieve an accurate model of given data. We view labels as being generated from the inputs by a program composed of subroutines with different capabilities, and we posit that a subroutine is useful if and only if the minimal program that invokes it is shorter than the one that does not. Since minimum program length is uncomputable, we instead estimate the labels' minimum description length (MDL) as a proxy, giving us a theoretically grounded method for analyzing dataset characteristics. We call the method Rissanen Data Analysis (RDA) after the father of MDL, and we showcase its applicability in a wide variety of settings in NLP, ranging from evaluating the utility of generating subquestions before answering a question, to analyzing the value of rationales and explanations, to investigating the importance of different parts of speech, to uncovering dataset gender bias.
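The MDL proxy mentioned above is typically estimated with prequential (online) coding: train on a growing prefix of the data and pay -log2 p(y|x) bits for each new block of labels. A minimal sketch, with a placeholder scikit-learn model standing in for the paper's actual probes:

```python
# Sketch of prequential (online) codelength estimation, the standard proxy
# for MDL: train on a growing prefix of the data and pay -log2 p(y|x) bits
# for each new block. The model and dataset here are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

def prequential_codelength(X, y, block_starts):
    """Total bits to transmit labels y given inputs X, coded block by block."""
    total_bits = 0.0
    for start, end in zip(block_starts, block_starts[1:] + [len(y)]):
        if start == 0:
            # First block: code with a uniform distribution over classes.
            total_bits += (end - start) * np.log2(len(set(y)))
            continue
        model = LogisticRegression(max_iter=1000).fit(X[:start], y[:start])
        proba = model.predict_proba(X[start:end])
        # Bits for the true labels under the model trained on the prefix.
        idx = [list(model.classes_).index(label) for label in y[start:end]]
        total_bits += -np.log2(proba[np.arange(end - start), idx]).sum()
    return total_bits

# Comparing codelengths with vs. without a capability (an extra feature, a
# subquestion, a rationale) indicates whether that capability is useful.
```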

* Code at https://github.com/ethanjperez/rda along with a script to run RDA on your own dataset 


Creating a Universal Dependencies Treebank of Spoken Frisian-Dutch Code-switched Data

Feb 22, 2021
Anouck Braggaar, Rob van der Goot

This paper explores the difficulties of annotating transcribed spoken Dutch-Frisian code-switched utterances into Universal Dependencies. We make use of data from the FAME! corpus, which consists of transcriptions and audio data. Besides the usual annotation difficulties, this dataset is extra challenging because Frisian is low-resource, the data is informal, and it features code-switching and non-standard sentence segmentation. As a starting point, two annotators annotated 150 random utterances in three stages of 50 utterances. After each stage, disagreements were discussed and resolved. An increase of 7.8 UAS and 10.5 LAS points was achieved between the first and third round. This paper focuses on the issues that arise when annotating a transcribed speech corpus, and proposes several solutions to resolve them.
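UAS and LAS, the agreement scores reported above, count the fraction of tokens whose head (UAS) or head plus relation label (LAS) match; a minimal sketch:

```python
# Sketch: UAS/LAS between two annotators' dependency trees for one utterance.
# Each annotation is a list of (head_index, relation_label) per token.
def uas_las(ann_a, ann_b):
    assert len(ann_a) == len(ann_b)
    head_match = sum(ha == hb for (ha, _), (hb, _) in zip(ann_a, ann_b))
    full_match = sum(pair_a == pair_b for pair_a, pair_b in zip(ann_a, ann_b))
    n = len(ann_a)
    return head_match / n, full_match / n  # (UAS, LAS)

# Example: two annotators agree on all heads but one relation label.
a = [(2, "nsubj"), (0, "root"), (2, "obj")]
b = [(2, "nsubj"), (0, "root"), (2, "obl")]
print(uas_las(a, b))  # (1.0, 0.666...)
```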

* RESOURCEFUL-2020 


Nanopore Base Calling on the Edge

Nov 09, 2020
Peter Perešíni, Vladimír Boža, Broňa Brejová, Tomáš Vinař

We developed DeepNano-coral, a new base caller for nanopore sequencing, which is optimized to run on the Coral Edge Tensor Processing Unit, a small USB-attached hardware accelerator. To achieve this goal, we designed new versions of two key components used in convolutional neural networks for speech recognition and base calling. In our components, we propose a new way of factorizing a full convolution into smaller operations, which decreases memory accesses, a bottleneck on this device. DeepNano-coral achieves real-time base calling during sequencing with accuracy slightly better than the fast mode of the Guppy base caller and is extremely energy efficient, using only 10 W of power. Availability: https://github.com/fmfi-compbio/coral-basecaller
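The abstract does not spell out the factorization itself, but the general idea of splitting a full convolution into cheaper pieces can be illustrated with the standard depthwise + pointwise (separable) decomposition; the paper's own Edge-TPU-specific factorization differs:

```python
# Illustration of factorizing a full 1D convolution into cheaper pieces.
# This is the standard depthwise + pointwise (separable) factorization,
# shown only to convey the general idea; it is not the paper's decomposition.
import torch.nn as nn

c_in, c_out, k = 64, 64, 9

full = nn.Conv1d(c_in, c_out, kernel_size=k, padding=k // 2)

separable = nn.Sequential(
    # Depthwise: one k-tap filter per channel, no cross-channel mixing.
    nn.Conv1d(c_in, c_in, kernel_size=k, padding=k // 2, groups=c_in),
    # Pointwise: 1x1 convolution mixes channels.
    nn.Conv1d(c_in, c_out, kernel_size=1),
)

params = lambda m: sum(p.numel() for p in m.parameters())
print(params(full), params(separable))  # ~37k vs ~5k weights
```

Fewer weights means fewer memory accesses per output, which matters on an accelerator where memory access, not arithmetic, is the bottleneck.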



Pretrained Language Model Embryology: The Birth of ALBERT

Oct 29, 2020
Cheng-Han Chiang, Sung-Feng Huang, Hung-yi Lee

While behaviors of pretrained language models (LMs) have been thoroughly examined, what happens during pretraining is rarely studied. We thus investigate the developmental process from a set of randomly initialized parameters to a totipotent language model, which we refer to as the embryology of a pretrained language model. Our results show that ALBERT learns to reconstruct and predict tokens of different parts of speech (POS) at different speeds during pretraining. We also find that linguistic knowledge and world knowledge do not generally improve as pretraining proceeds, nor does downstream task performance. These findings suggest that the knowledge of a pretrained model varies during pretraining, and that more pretraining steps do not necessarily give a model more comprehensive knowledge. We provide source code and pretrained models to reproduce our results at https://github.com/d223302/albert-embryology.
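A minimal sketch of the kind of checkpoint probing described here: mask a token, ask an intermediate checkpoint to recover it, and aggregate accuracy by POS. The checkpoint paths are hypothetical; the released scripts are in the linked repository:

```python
# Sketch: track masked-token prediction accuracy across pretraining
# checkpoints. Checkpoint paths are hypothetical placeholders.
import torch
from transformers import AlbertTokenizer, AlbertForMaskedLM

tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")

def masked_accuracy(model, sentence, mask_position):
    """Mask one token and check whether the model recovers it."""
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"]
    target = ids[0, mask_position].item()
    ids[0, mask_position] = tokenizer.mask_token_id
    with torch.no_grad():
        logits = model(input_ids=ids).logits
    return int(logits[0, mask_position].argmax().item() == target)

for step in [1000, 10000, 100000]:  # hypothetical pretraining checkpoints
    model = AlbertForMaskedLM.from_pretrained(f"checkpoints/step-{step}")
    acc = masked_accuracy(model, "The cat sat on the mat.", mask_position=2)
    print(step, acc)  # in practice, average over many tokens, grouped by POS
```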

* Accepted to EMNLP 2020, short paper 


Multi-modal embeddings using multi-task learning for emotion recognition

Sep 10, 2020
Aparna Khare, Srinivas Parthasarathy, Shiva Sundaram

General embeddings like word2vec, GloVe and ELMo have shown a lot of success in natural language tasks. The embeddings are typically extracted from models that are built on general tasks such as skip-gram prediction and natural language generation. In this paper, we extend the work from natural language understanding to multi-modal architectures that use audio, visual and textual information for machine learning tasks. The embeddings in our network are extracted using the encoder of a transformer model trained with multi-task training. We use person identification and automatic speech recognition as the tasks in our embedding generation framework. We tune and evaluate the embeddings on the downstream task of emotion recognition and demonstrate that on the CMU-MOSEI dataset, the embeddings can be used to improve over previous state-of-the-art results.
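A rough sketch of the multi-task setup described above: a shared transformer encoder feeding separate heads for person identification and ASR. Dimensions, vocabulary sizes, and the fusion of audio/visual/text features are placeholder assumptions:

```python
# Sketch of multi-task embedding training: one shared transformer encoder,
# separate heads for person ID and ASR. All sizes are placeholders.
import torch
import torch.nn as nn

class MultiTaskEncoder(nn.Module):
    def __init__(self, d_model=256, n_speakers=100, vocab=1000):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.speaker_head = nn.Linear(d_model, n_speakers)  # person ID
        self.asr_head = nn.Linear(d_model, vocab)           # per-frame tokens

    def forward(self, feats):                 # feats: (batch, time, d_model)
        emb = self.encoder(feats)             # reusable embeddings
        return emb, self.speaker_head(emb.mean(dim=1)), self.asr_head(emb)

model = MultiTaskEncoder()
emb, spk_logits, asr_logits = model(torch.randn(2, 50, 256))
print(emb.shape, spk_logits.shape, asr_logits.shape)
# After multi-task training, `emb` is extracted and fed to a downstream
# emotion classifier.
```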

* To appear in Interspeech 2020 


Comparative Analysis of Polynomial and Rational Approximations of Hyperbolic Tangent Function for VLSI Implementation

Jul 13, 2020
Mahesh Chandra

Deep neural networks yield state-of-the-art results in many computer vision and human-machine interface applications such as object detection, speech recognition, etc. Since these networks are computationally expensive, customized accelerators are designed to achieve the required performance at lower cost and power. One of the key building blocks of these neural networks is the non-linear activation function, such as sigmoid, hyperbolic tangent (tanh), and ReLU. A low-complexity, accurate hardware implementation of the activation function is required to meet the performance and area targets of neural network accelerators. Even though various methods and implementations of the tanh activation function have been published, a comparative study is missing. This paper presents a comparative analysis of polynomial and rational methods and their hardware implementations.
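As a flavor of the comparison, the sketch below pits a 7th-order Taylor polynomial against a [3/2] Padé (rational) approximant of tanh in floating point; the paper's analysis concerns fixed-point hardware implementations of such methods:

```python
# Sketch comparing a polynomial (Taylor) and a rational (Padé) approximation
# of tanh on [-2, 2], in floating point for illustration.
import numpy as np

x = np.linspace(-2, 2, 1001)

# 7th-order Taylor polynomial around 0: x - x^3/3 + 2x^5/15 - 17x^7/315
poly = x - x**3 / 3 + 2 * x**5 / 15 - 17 * x**7 / 315

# [3/2] Padé approximant: x(15 + x^2) / (15 + 6x^2)
pade = x * (15 + x**2) / (15 + 6 * x**2)

ref = np.tanh(x)
print("poly max error:", np.abs(poly - ref).max())
print("pade max error:", np.abs(pade - ref).max())
# Rational forms typically track tanh's saturation better at comparable
# order, at the price of a hardware divider.
```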

* 6 pages, 5 figures, 3 tables 


Adversarial Attacks and Defense on Texts: A Survey

May 31, 2020
Aminul Huq, Mst. Tasnim Pervin

Deep learning models have been used widely in recent years for various purposes, including object recognition, self-driving cars, face recognition, speech recognition, sentiment analysis and many others. However, it has been shown that these models are vulnerable to noise that forces them to misclassify. This issue has been studied extensively in the image and audio domains, but very little work addresses it for textual data, and even fewer surveys cover the different types of attacks and defense techniques. In this manuscript we collect and analyze different attacking techniques and various defense models in order to provide a more comprehensive picture of how to overcome this issue. We then point out interesting findings across the surveyed papers and challenges that need to be overcome in order to move forward in this field.
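A toy example of the word-level attacks such surveys cover: greedily substitute synonyms until the classifier's prediction flips. The classifier and synonym table below are placeholder assumptions:

```python
# Sketch of a simple black-box synonym-substitution attack: greedily swap
# words for synonyms until the classifier flips its prediction.
# `classify` and the synonym table are placeholder assumptions.
SYNONYMS = {"great": ["fine", "decent"], "terrible": ["poor", "bad"]}

def attack(tokens, classify):
    """Greedy word-swap attack; returns an adversarial sentence or None."""
    original = classify(tokens)
    for i, tok in enumerate(tokens):
        for alt in SYNONYMS.get(tok.lower(), []):
            candidate = tokens[:i] + [alt] + tokens[i + 1:]
            if classify(candidate) != original:   # prediction flipped
                return candidate
    return None

# Toy classifier: positive iff the word "great" appears.
classify = lambda toks: "pos" if "great" in toks else "neg"
print(attack("this movie is great".split(), classify))
# ['this', 'movie', 'is', 'fine']  -> label flips, meaning is preserved
```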


