Ondřej Pražák

Improving Aspect-Based Sentiment with End-to-End Semantic Role Labeling Model

Jul 27, 2023
Pavel Přibáň, Ondřej Pražák

This paper presents a series of approaches aimed at enhancing the performance of Aspect-Based Sentiment Analysis (ABSA) by utilizing semantic information extracted with a Semantic Role Labeling (SRL) model. We propose a novel end-to-end Semantic Role Labeling model that effectively captures most of the structured semantic information within the Transformer hidden state. We believe that this end-to-end model is well-suited for our newly proposed models that incorporate semantic information. We evaluate the proposed models in two languages, English and Czech, employing ELECTRA-small models. Our combined models improve ABSA performance in both languages. Moreover, we achieve new state-of-the-art results on Czech ABSA.

* Accepted to RANLP 2023 

End-to-end Multilingual Coreference Resolution with Mention Head Prediction

Sep 26, 2022
Ondřej Pražák, Miloslav Konopík

This paper describes our approach to the CRAC 2022 Shared Task on Multilingual Coreference Resolution. Our model is based on a state-of-the-art end-to-end coreference resolution system. In addition to joint multilingual training, we improved our results with mention head prediction. We also experimented with integrating dependency information into our model. Our system finished in 3rd place. Moreover, we reached the best performance on two of the 13 datasets.

Findings of the Shared Task on Multilingual Coreference Resolution

Sep 16, 2022
Zdeněk Žabokrtský, Miloslav Konopík, Anna Nedoluzhko, Michal Novák, Maciej Ogrodniczuk, Martin Popel, Ondřej Pražák, Jakub Sido, Daniel Zeman, Yilun Zhu

This paper presents an overview of the shared task on multilingual coreference resolution associated with the CRAC 2022 workshop. Participants were asked to develop trainable systems capable of identifying mentions and clustering them according to identity coreference. The public edition of CorefUD 1.0, which contains 13 datasets for 10 languages, was used as the source of training and evaluation data. The CoNLL score, used in previous coreference-oriented shared tasks, served as the main evaluation metric. Eight coreference prediction systems were submitted by five participating teams; in addition, a competitive Transformer-based baseline system was provided by the organizers at the beginning of the shared task. The winning system outperformed the baseline by 12 percentage points (in terms of CoNLL scores averaged across all datasets for individual languages).
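For reference, the CoNLL score used as the main metric is the unweighted mean of the MUC, B-cubed, and CEAF-e F1 scores. A minimal sketch, with illustrative precision/recall values rather than numbers from the shared task:

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 0.0 if precision + recall == 0 else 2 * precision * recall / (precision + recall)

def conll_score(muc, bcub, ceafe):
    """CoNLL score: unweighted mean of the MUC, B-cubed, and CEAF-e F1 scores.
    Each argument is a (precision, recall) pair for one metric."""
    return sum(f1(p, r) for p, r in (muc, bcub, ceafe)) / 3

# illustrative (precision, recall) pairs for a single dataset
score = conll_score(muc=(0.80, 0.70), bcub=(0.75, 0.65), ceafe=(0.60, 0.55))
print(f"CoNLL score: {score:.4f}")
```

Averaging these per-dataset scores across the datasets of each language then gives the ranking criterion described above.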

MQDD: Pre-training of Multimodal Question Duplicity Detection for Software Engineering Domain

Mar 29, 2022
Jan Pašek, Jakub Sido, Miloslav Konopík, Ondřej Pražák

This work proposes a new pipeline for leveraging data collected on the Stack Overflow website for pre-training a multimodal model for searching duplicates on question answering websites. Our multimodal model is trained on question descriptions and source codes in multiple programming languages. We design two new learning objectives to improve duplicate detection capabilities. The result of this work is a mature, fine-tuned Multimodal Question Duplicity Detection (MQDD) model, ready to be integrated into a Stack Overflow search system, where it can help users find answers for already answered questions. Alongside the MQDD model, we release two datasets related to the software engineering domain. The first Stack Overflow Dataset (SOD) represents a massive corpus of paired questions and answers. The second Stack Overflow Duplicity Dataset (SODD) contains data for training duplicate detection models.

Czech News Dataset for Semantic Textual Similarity

Aug 23, 2021
Jakub Sido, Michal Seják, Ondřej Pražák, Miloslav Konopík, Václav Moravec

This paper describes a novel dataset consisting of sentences with semantic similarity annotations. The data originate from the journalistic domain in the Czech language. We describe the process of collecting and annotating the data in detail. The dataset contains 138,556 human annotations divided into train and test sets. In total, 485 journalism students participated in the creation process. To increase the reliability of the test set, we compute each test annotation as an average of 9 individual annotations. We evaluate the quality of the dataset by measuring inter- and intra-annotator agreement. Besides agreement numbers, we provide detailed statistics of the collected dataset. We conclude our paper with a baseline experiment on building a system for predicting the semantic similarity of sentences. Due to the massive number of training annotations (116,956), the model can perform significantly better than an average annotator (Pearson's correlation coefficient of 0.92 versus 0.86).
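The aggregation and evaluation scheme described above can be sketched as follows. The ratings, rating scale, and model predictions here are illustrative toy values, not data from the released corpus:

```python
import statistics

# each test sentence pair receives 9 independent similarity ratings (scale assumed here)
ratings = [
    [5, 5, 6, 4, 5, 5, 6, 5, 5],
    [1, 2, 1, 0, 1, 2, 1, 1, 1],
    [3, 4, 3, 3, 2, 4, 3, 3, 3],
]
# gold score for each pair = average of its 9 annotations
gold = [statistics.mean(r) for r in ratings]

def pearson(x, y):
    """Pearson's correlation coefficient between two equal-length sequences."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# a model (or a single annotator) is scored by correlating its outputs with the gold averages
predictions = [5.2, 0.9, 3.1]
print(f"Pearson r: {pearson(predictions, gold):.3f}")
```

The same correlation computed for one annotator's individual ratings against the averaged gold scores yields the "average annotator" figure the abstract compares against.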

Multilingual Coreference Resolution with Harmonized Annotations

Jul 26, 2021
Ondřej Pražák, Miloslav Konopík, Jakub Sido

In this paper, we present coreference resolution experiments with CorefUD, a newly created multilingual corpus. We focus on the following languages: Czech, Russian, Polish, German, Spanish, and Catalan. In addition to monolingual experiments, we combine the training data in multilingual experiments and train two joint models -- one for the Slavic languages and one for all the languages together. We rely on an end-to-end deep learning model that we slightly adapted for the CorefUD corpus. Our results show that we can profit from harmonized annotations and that joint models help significantly for languages with smaller training data.

Czert -- Czech BERT-like Model for Language Representation

Mar 24, 2021
Jakub Sido, Ondřej Pražák, Pavel Přibáň, Jan Pašek, Michal Seják, Miloslav Konopík

This paper describes the training process of the first Czech monolingual language representation models based on the BERT and ALBERT architectures. We pre-train our models on more than 340K sentences, which is 50 times more Czech data than the multilingual models include. We outperform the multilingual models on 7 out of 10 datasets. In addition, we establish new state-of-the-art results on seven datasets. Finally, we discuss the properties of monolingual and multilingual models based on our results. We publish all the pre-trained and fine-tuned models freely for the research community.

* 13 pages 

UWB at SemEval-2020 Task 1: Lexical Semantic Change Detection

Nov 30, 2020
Ondřej Pražák, Pavel Přibáň, Stephen Taylor, Jakub Sido

In this paper, we describe our method for the detection of lexical semantic change, i.e., word sense changes over time. We examine semantic differences between specific words in two corpora, chosen from different time periods, for English, German, Latin, and Swedish. Our method was created for SemEval 2020 Task 1: Unsupervised Lexical Semantic Change Detection. We ranked 1st in Sub-task 1 (binary change detection) and 4th in Sub-task 2 (ranked change detection). Our method is fully unsupervised and language independent. It consists of preparing a semantic vector space for each corpus, earlier and later; computing a linear transformation between the earlier and later spaces, using Canonical Correlation Analysis and Orthogonal Transformation; and measuring the cosines between the transformed vector for the target word from the earlier corpus and the vector for the target word in the later corpus.
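The alignment-and-cosine step can be illustrated with a plain orthogonal (Procrustes) mapping between two toy vector spaces. This sketch uses random data and omits the CCA stage, so it is a simplification of the method above, not the released implementation:

```python
import numpy as np

def orthogonal_map(X, Y):
    """Orthogonal Procrustes: the orthogonal W minimizing ||X @ W - Y||_F,
    obtained from the SVD of X^T Y."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
earlier = rng.normal(size=(100, 8))           # toy "earlier corpus" word vectors
R, _ = np.linalg.qr(rng.normal(size=(8, 8)))  # hidden rotation between the two spaces
later = earlier @ R                           # toy "later corpus" word vectors

W = orthogonal_map(earlier, later)
# a semantically stable word keeps a high cosine after mapping;
# a word whose sense changed would score low
print(f"cosine for word 0: {cosine(earlier[0] @ W, later[0]):.3f}")
```

Because the toy "later" space is an exact rotation of the "earlier" one, the recovered mapping aligns them perfectly; on real corpora the cosines spread out, and low values signal candidate semantic change.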

* arXiv admin note: substantial text overlap with arXiv:2011.14678 

UWB @ DIACR-Ita: Lexical Semantic Change Detection with CCA and Orthogonal Transformation

Nov 30, 2020
Ondřej Pražák, Pavel Přibáň, Stephen Taylor

In this paper, we describe our method for the detection of lexical semantic change (i.e., word sense changes over time) for the DIACR-Ita shared task, where we ranked 1st. We examine semantic differences between specific words in two Italian corpora, chosen from different time periods. Our method is fully unsupervised and language independent. It consists of preparing a semantic vector space for each corpus, earlier and later. Then we compute a linear transformation between the earlier and later spaces, using CCA and Orthogonal Transformation. Finally, we measure the cosines between the transformed vectors.
