This paper presents the first publicly available version of the Carolina Corpus and discusses its future directions. Carolina is a large open corpus of Brazilian Portuguese texts under construction using web-as-corpus methodology enhanced with provenance, typology, versioning, and text integrality. The corpus is intended both as a reliable source for research in Linguistics and as an important resource for Computer Science research on language models, contributing towards removing Portuguese from the set of low-resource languages. Here we present the corpus construction methodology, comparing it with other existing methodologies, as well as the current state of the corpus: Carolina's first public version has $653,322,577$ tokens, distributed over $7$ broad types. Each text is annotated with several different metadata categories in its header, which we developed using TEI annotation standards. We also present ongoing derivative works and invite NLP researchers to contribute their own.
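As an illustration of the kind of TEI-style header the abstract mentions, the sketch below builds a minimal header with Python's standard library; the specific metadata fields (title, source URL, typology value) are hypothetical placeholders, not the actual Carolina schema.

\begin{verbatim}
# Minimal sketch of a TEI-style header; the field values below are
# illustrative placeholders, not the actual Carolina metadata schema.
import xml.etree.ElementTree as ET

def make_tei_header(title: str, source_url: str, typology: str) -> ET.Element:
    header = ET.Element("teiHeader")
    file_desc = ET.SubElement(header, "fileDesc")
    title_stmt = ET.SubElement(file_desc, "titleStmt")
    ET.SubElement(title_stmt, "title").text = title
    source_desc = ET.SubElement(file_desc, "sourceDesc")
    ET.SubElement(source_desc, "ref", target=source_url)   # provenance
    profile_desc = ET.SubElement(header, "profileDesc")
    ET.SubElement(profile_desc, "textClass").text = typology  # broad type
    return header

header = make_tei_header("Example text", "https://example.org/page",
                         "journalistic")
print(ET.tostring(header, encoding="unicode"))
\end{verbatim}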
During the outbreak of the COVID-19 pandemic, several research areas joined efforts to mitigate the damage caused by SARS-CoV-2. In this paper we present an interpretability analysis of a convolutional neural network based model for COVID-19 detection in audio recordings. We investigate which features are important for the model's decision process, considering spectrograms, F0, F0 standard deviation, sex, and age. We then analyse model decisions by generating heat maps for the trained models to capture their attention during the decision process. Following an explainable Artificial Intelligence approach, we show that the studied models can take unbiased decisions even in the presence of spurious data in the training set, given adequate preprocessing steps. Our best model achieves 94.44% detection accuracy, with results indicating that the models favor spectrograms in the decision process, particularly high-energy areas of the spectrogram related to prosodic domains, while F0 also leads to efficient COVID-19 detection.
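A minimal sketch of the kind of feature extraction discussed above (spectrogram, F0, F0 standard deviation), assuming librosa is used; the sample rate, pitch range, and other parameters are illustrative, not the paper's actual preprocessing values.

\begin{verbatim}
# Sketch of extracting the audio features discussed above; the parameter
# values are illustrative, not the paper's actual preprocessing pipeline.
import librosa
import numpy as np

def extract_features(path: str):
    audio, sr = librosa.load(path, sr=16000)        # resample to 16 kHz
    spec = librosa.feature.melspectrogram(y=audio, sr=sr)
    log_spec = librosa.power_to_db(spec)            # log-mel spectrogram
    f0, voiced, _ = librosa.pyin(audio, fmin=65.0, fmax=400.0, sr=sr)
    f0 = f0[voiced]                                 # keep voiced frames only
    return log_spec, np.nanmean(f0), np.nanstd(f0)  # spectrogram, F0, F0 std
\end{verbatim}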
The goal of speech emotion recognition (SER) is to identify the emotional aspects of speech. The SER challenge for Brazilian Portuguese speech was proposed with short snippets of Portuguese which are classified as neutral, non-neutral female, and non-neutral male according to paralinguistic elements (laughing, crying, etc.). This dataset contains about $50$ minutes of Brazilian Portuguese speech. As the dataset is rather small, we investigate whether a combination of transfer learning and data augmentation techniques can produce positive results. Thus, by combining a data augmentation technique called SpecAugment with Pretrained Audio Neural Networks (PANNs) for transfer learning, we are able to obtain promising results. The PANNs (CNN6, CNN10 and CNN14) are pretrained on a large dataset called AudioSet, containing more than $5000$ hours of audio. They were finetuned on the SER dataset, and the best performing model on the validation set (CNN10) was submitted to the challenge, achieving an $F1$ score of $0.73$, up from the $0.54$ baseline provided by the challenge. Moreover, we also tested a Transformer neural architecture, pretrained on about $600$ hours of Brazilian Portuguese audio data. Transformers, as well as the more complex PANN model (CNN14), fail to generalize to the test set of the SER dataset and do not beat the baseline. Considering the limited dataset size, currently the best approach for SER is using PANNs (specifically, CNN6 and CNN10).
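For concreteness, here is a minimal sketch of SpecAugment-style masking using torchaudio's built-in transforms; the mask sizes are illustrative and not the values used in the experiments.

\begin{verbatim}
# Sketch of SpecAugment-style augmentation: random frequency and time
# masks applied to a (batch, freq, time) spectrogram; sizes illustrative.
import torch
import torchaudio.transforms as T

freq_mask = T.FrequencyMasking(freq_mask_param=15)  # mask up to 15 mel bins
time_mask = T.TimeMasking(time_mask_param=35)       # mask up to 35 frames

def spec_augment(spec: torch.Tensor) -> torch.Tensor:
    return time_mask(freq_mask(spec))

spec = torch.rand(1, 64, 400)        # dummy log-mel spectrogram
augmented = spec_augment(spec)
\end{verbatim}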
This work explores speech as a biomarker and investigates the detection of respiratory insufficiency (RI) by analyzing speech samples. Previous work \cite{spira2021} constructed a dataset of utterances from COVID-19 patients with respiratory insufficiency and analyzed it by means of a convolutional neural network, achieving an accuracy of $87.04\%$ and validating the hypothesis that one can detect RI through speech. Here, we study how Transformer neural network architectures can improve performance on RI detection. This approach enables the construction of an acoustic model. By choosing the correct pretraining technique, we generate a self-supervised acoustic model, leading to improved performance ($96.53\%$) of Transformers for RI detection.
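One common self-supervised pretraining objective for acoustic Transformers is masked reconstruction of spectrogram frames; the sketch below illustrates that generic idea only, and is not the specific pretraining technique chosen in the paper.

\begin{verbatim}
# Generic masked-frame pretraining for an acoustic Transformer: randomly
# mask spectrogram frames and train the model to reconstruct them.
# Illustrates the general idea only, not the paper's chosen technique.
import torch
import torch.nn as nn

class MaskedFramePretrainer(nn.Module):
    def __init__(self, n_mels=64, d_model=256):
        super().__init__()
        self.proj = nn.Linear(n_mels, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(d_model, n_mels)   # reconstruct masked frames

    def forward(self, spec):                     # spec: (batch, time, n_mels)
        mask = torch.rand(spec.shape[:2], device=spec.device) < 0.15
        corrupted = spec.masked_fill(mask.unsqueeze(-1), 0.0)
        recon = self.head(self.encoder(self.proj(corrupted)))
        return ((recon - spec) ** 2)[mask].mean()  # loss on masked frames
\end{verbatim}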
We introduce CoLN, Combined Learning of Neural network weights, a novel method to securely combine Machine Learning models trained over sensitive data without sharing the data. With CoLN, local hosts use the same Neural Network architecture and base parameters to train a model using only locally available data. Locally trained models are then submitted to a combining agent, which produces a combined model. The new model's parameters can be sent back to the hosts and used as initial parameters for a new training iteration. CoLN is capable of combining several distributed neural networks of the same kind but is not restricted to any single neural architecture. In this paper we detail the combination algorithm and present experiments with feed-forward, convolutional, and recurrent Neural Network architectures, showing that the CoLN combined model approximates the performance of a hypothetical ideal centralized model trained on the combination of the local datasets. CoLN can contribute to secure collaborative research, as required in the medical area, where privacy issues preclude data sharing but the limitations of local data demand information derived from larger datasets.
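As a rough illustration of the combine-and-redistribute loop described above, the sketch below averages the parameters of locally trained PyTorch models, weighted by local dataset size; this is a generic federated-averaging-style rule, not necessarily the actual CoLN combination formula.

\begin{verbatim}
# Illustrative combining agent: weighted average of local model weights.
# Generic federated-averaging-style rule, shown only to make the
# combine-and-redistribute loop concrete; the CoLN rule may differ.
import torch

def combine(state_dicts, n_samples):
    """state_dicts: local model state_dicts (same architecture);
    n_samples: number of local training examples at each host."""
    total = sum(n_samples)
    combined = {}
    for name in state_dicts[0]:
        combined[name] = sum(
            sd[name] * (n / total) for sd, n in zip(state_dicts, n_samples)
        )
    return combined

# Each round: hosts train locally, submit weights, get combined weights:
# global_model.load_state_dict(combine(local_states, local_sizes))
\end{verbatim}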
In this work, we carried out a study on the use of attention-based algorithms to automate the categorization of Brazilian case law documents. We used data from the \textit{Kollemata} Project to produce two distinct datasets with adequate class systems. Then, we implemented a multi-class and multi-label version of BERT and fine-tuned different BERT models with the produced datasets. We evaluated several metrics, adopting the micro-averaged F1-score as our main metric, for which we obtained a performance value of $\langle \mathcal{F}_1 \rangle_{micro}=0.72$, corresponding to gains of 30 percentage points over the tested statistical baseline.
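A minimal sketch of a multi-label BERT setup with the Hugging Face transformers library; the checkpoint name and label count below are placeholders, and the paper's actual implementation details are not reproduced here.

\begin{verbatim}
# Sketch of a multi-label BERT classifier with Hugging Face transformers;
# the model checkpoint and number of labels are illustrative placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "neuralmind/bert-base-portuguese-cased"  # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name,
    num_labels=10,                               # placeholder class count
    problem_type="multi_label_classification",   # sigmoid + BCE loss
)

inputs = tokenizer("Text of a case law document.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted = torch.sigmoid(logits) > 0.5   # independent per-label decisions
\end{verbatim}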
One of the main reasons to employ a description logic such as EL or EL++ is the fact that it has efficient, polynomial-time algorithmic properties such as deciding consistency and inferring subsumption. However, simply by adding negation of concepts to it, we obtain the expressivity of description logics whose decision procedure is ExpTime-complete. A similar complexity explosion occurs if we add probability assignments on concepts. To lower the resulting complexity, we instead concentrate on assigning probabilities to axioms (GCIs). We show that the consistency detection problem for such a probabilistic description logic is NP-complete, and present a linear algebraic deterministic algorithm to solve it, using the column generation technique. We also examine and provide algorithms for the probabilistic extension problem, which consists of inferring the minimum and maximum probabilities for a new axiom, given a consistent probabilistic knowledge base.
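To make the linear algebraic formulation concrete (notation ours, in the standard Nilsson-style setting this line of work builds on): an assignment of probabilities $p_i$ to axioms is consistent iff there is a probability distribution $\pi$ over possible worlds satisfying
\[
  A\,\pi = p, \qquad \mathbf{1}^{\top}\pi = 1, \qquad \pi \ge 0,
\]
where each column of $A$ is the $0/1$ indicator vector recording which axioms hold in one possible world. Column generation sidesteps the exponentially many columns by solving a restricted version of this system and generating improving columns on demand.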
We present ongoing research on natural language inference, in which we propose a new set of tasks that require specific capacities over linguistic logical forms, such as i) Boolean coordination, ii) quantifiers, iii) definite descriptions, and iv) counting operators.
In this paper we show several similarities among logic systems that deal simultaneously with deductive and quantitative inference. We claim it is appropriate to call the tasks those systems perform Quantitative Logic Reasoning. Analogous properties hold throughout that class, for whose members there exists a set of linear algebraic techniques applicable in the study of satisfiability decision problems. In this presentation, we consider as Quantitative Logic Reasoning the tasks performed by propositional Probabilistic Logic; first-order logic with counting quantifiers over a fragment containing unary and limited binary predicates; and propositional Łukasiewicz Infinitely-valued Probabilistic Logic.
Semantic parsing can be defined as the process of mapping natural language sentences into a machine-interpretable, formal representation of their meaning. Semantic parsing using LSTM encoder-decoder neural networks has become a promising approach. However, neural translation of natural language does not provide grammaticality guarantees for the generated sentences; such a guarantee is particularly important for practical cases, where a database query can cause critical errors if the sentence is ungrammatical. In this work, we propose a neural architecture called Encoder CFG-Decoder, whose output conforms to a given context-free grammar. Results for an implementation of this architecture demonstrate its correctness and provide benchmark accuracy levels better than those in the literature.
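A minimal sketch of the general idea of grammar-constrained decoding (our illustration, not the paper's actual Encoder CFG-Decoder implementation): at each step, the decoder's distribution is restricted to the tokens admissible under the grammar's current derivation state.

\begin{verbatim}
# Generic grammar-constrained decoding: mask decoder logits so only
# tokens allowed by the context-free grammar at the current derivation
# state can be generated. Illustration only, not the paper's architecture.
import torch

def constrained_step(logits, allowed_token_ids):
    """logits: (vocab,) decoder scores; allowed_token_ids: the tokens
    the grammar permits at this derivation state."""
    mask = torch.full_like(logits, float("-inf"))
    mask[allowed_token_ids] = 0.0
    return torch.argmax(logits + mask)   # always a grammatical token

logits = torch.randn(100)                # dummy vocabulary scores
allowed = torch.tensor([3, 7, 42])       # from the CFG's parser state
next_token = constrained_step(logits, allowed)
\end{verbatim}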