Piotr Pęzik

Keyword Extraction from Short Texts with a Text-To-Text Transfer Transformer

Sep 28, 2022
Piotr Pęzik, Agnieszka Mikołajczyk-Bareła, Adam Wawrzyński, Bartłomiej Nitoń, Maciej Ogrodniczuk

Figures 1–4 for Keyword Extraction from Short Texts with a Text-To-Text Transfer Transformer

The paper explores the relevance of the Text-To-Text Transfer Transformer language model (T5) for Polish (plT5) to the task of intrinsic and extrinsic keyword extraction from short text passages. The evaluation is carried out on the new Polish Open Science Metadata Corpus (POSMAC), which is released with this paper: a collection of 216,214 abstracts of scientific publications compiled in the CURLICAT project. We compare the results obtained by four different methods, i.e., plT5kw, extremeText, TermoPL, and KeyBERT, and conclude that the plT5kw model yields particularly promising results for both frequent and sparsely represented keywords. Furthermore, a plT5kw keyword generation model trained on the POSMAC also seems to produce highly useful results in cross-domain text labelling scenarios. We discuss the performance of the model on news stories and phone-based dialog transcripts, which represent text genres and domains extrinsic to the dataset of scientific abstracts. Finally, we also attempt to characterize the challenges of evaluating a text-to-text model on both intrinsic and extrinsic keyword extraction.
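Intrinsic keyword extraction is commonly scored with exact-match precision, recall, and F1 over predicted versus gold keyword sets. The sketch below illustrates that standard evaluation scheme; it is not the paper's exact protocol, and the keyword lists are invented examples, not POSMAC data.

```python
# Minimal sketch of set-based keyword-extraction evaluation.
# Assumption: exact match on lowercased, whitespace-stripped keywords;
# the example keywords are illustrative, not from the POSMAC corpus.

def keyword_f1(predicted, gold):
    """Exact-match precision, recall, and F1 over keyword sets."""
    pred = {k.strip().lower() for k in predicted}
    ref = {k.strip().lower() for k in gold}
    if not pred or not ref:
        return 0.0, 0.0, 0.0
    tp = len(pred & ref)                     # true positives: shared keywords
    precision = tp / len(pred)
    recall = tp / len(ref)
    f1 = 2 * precision * recall / (precision + recall) if tp else 0.0
    return precision, recall, f1

p, r, f = keyword_f1(
    ["keyword extraction", "T5", "Polish"],           # hypothetical model output
    ["keyword extraction", "polish", "scientific abstracts"],  # hypothetical gold labels
)
```

Exact matching is deliberately strict: a generative model like plT5kw can emit keywords that are semantically correct but absent from the gold set, which is one of the evaluation challenges the abstract alludes to.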

* Accepted to ACIIDS 2022. The proceedings of ACIIDS 2022 will be published by Springer in the series Lecture Notes in Artificial Intelligence (LNAI) and Communications in Computer and Information Science (CCIS)

Joint prediction of truecasing and punctuation for conversational speech in low-resource scenarios

Sep 13, 2021
Raghavendra Pappagari, Piotr Żelasko, Agnieszka Mikołajczyk, Piotr Pęzik, Najim Dehak

Figures 1–4 for Joint prediction of truecasing and punctuation for conversational speech in low-resource scenarios

Capitalization and punctuation are important cues for comprehending written texts and conversational transcripts. Yet, many ASR systems do not produce punctuated and case-formatted speech transcripts. We propose a multi-task system that exploits the relations between casing and punctuation to improve the prediction performance of both. Whereas text data for predicting punctuation and truecasing is seemingly abundant, we argue that written text resources are inadequate as training data for conversational models. We quantify the mismatch between the written and conversational text domains by comparing the joint distributions of punctuation and word cases, and by testing our model cross-domain. Further, we show that by training the model on written text and then transferring it to conversational data, we can achieve reasonable performance with less data.
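The domain mismatch described above can be illustrated by estimating a joint distribution over (word case, trailing punctuation) events in each domain and comparing the two. The sketch below is a toy illustration of that idea, using total variation distance as the comparison measure; the paper does not specify this exact metric, and the sentences are invented examples.

```python
# Toy sketch: compare joint (case, punctuation) distributions across domains.
# Assumptions: tokens carry trailing punctuation; total variation distance
# is used as a simple distribution-mismatch measure (illustrative choice).
from collections import Counter

def case_punct_distribution(tokens):
    """Empirical joint distribution over (word case, trailing punctuation)."""
    events = []
    for tok in tokens:
        word = tok.rstrip(".,?!")
        punct = tok[len(word):] or "<none>"   # punctuation attached to the token
        if not word:
            continue
        case = "upper" if word[0].isupper() else "lower"
        events.append((case, punct))
    counts = Counter(events)
    total = sum(counts.values())
    return {event: c / total for event, c in counts.items()}

def total_variation(p, q):
    """Total variation distance between two discrete distributions."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

# Hypothetical written vs. conversational samples (not from the paper's data):
written = case_punct_distribution("The model was trained carefully.".split())
conversational = case_punct_distribution("yeah i mean you know".split())
tv = total_variation(written, conversational)
```

Even this tiny example shows the expected pattern: conversational transcripts without restored casing and punctuation concentrate all mass on (lower, no-punctuation) events, while written text spreads mass over capitalized sentence starts and sentence-final punctuation.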

* Accepted for ASRU 2021