Fernando Alva-Manchego

BLESS: Benchmarking Large Language Models on Sentence Simplification

Oct 24, 2023
Tannon Kew, Alison Chi, Laura Vásquez-Rodríguez, Sweta Agrawal, Dennis Aumiller, Fernando Alva-Manchego, Matthew Shardlow

We present BLESS, a comprehensive performance benchmark of the most recent state-of-the-art large language models (LLMs) on the task of text simplification (TS). We examine how well off-the-shelf LLMs can solve this challenging task, assessing a total of 44 models, differing in size, architecture, pre-training methods, and accessibility, on three test sets from different domains (Wikipedia, news, and medical) under a few-shot setting. Our analysis considers a suite of automatic metrics as well as a large-scale quantitative investigation into the types of common edit operations performed by the different models. Furthermore, we perform a manual qualitative analysis on a subset of model outputs to better gauge the quality of the generated simplifications. Our evaluation indicates that the best LLMs, despite not being trained on TS, perform comparably with state-of-the-art TS baselines. Additionally, we find that certain LLMs demonstrate a greater range and diversity of edit operations. Our performance benchmark will be available as a resource for the development of future TS methods and evaluation metrics.

* This paper has been accepted to EMNLP 2023 as a main long paper. 9 pages, 7 figures 
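For readers unfamiliar with the few-shot setting used in the benchmark, the snippet below is a minimal sketch of how a few-shot simplification prompt can be assembled for an off-the-shelf LLM. The instruction wording and demonstration pairs are illustrative only, not the exact prompt template used in BLESS.

    # Minimal sketch of a few-shot sentence-simplification prompt.
    # The demonstration pairs and wording are illustrative only,
    # not the exact template used in the BLESS benchmark.

    few_shot_pairs = [
        ("The committee reached a consensus subsequent to protracted deliberations.",
         "The committee agreed after long discussions."),
        ("The medication should be administered orally twice daily.",
         "Take the medicine by mouth two times a day."),
    ]

    def build_prompt(complex_sentence: str) -> str:
        """Assemble a few-shot prompt from demonstration pairs plus the test input."""
        lines = ["Simplify the following sentences."]
        for src, tgt in few_shot_pairs:
            lines.append(f"Complex: {src}\nSimple: {tgt}")
        lines.append(f"Complex: {complex_sentence}\nSimple:")
        return "\n\n".join(lines)

    print(build_prompt("The municipality prohibited vehicular access to the thoroughfare."))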

A Practical Toolkit for Multilingual Question and Answer Generation

May 27, 2023
Asahi Ushio, Fernando Alva-Manchego, Jose Camacho-Collados

Generating questions along with associated answers from a text has applications in several domains, such as creating reading comprehension tests for students, or improving document search by providing auxiliary questions and answers based on the query. Training models for question and answer generation (QAG) is not straightforward due to the expected structured output (i.e. a list of question and answer pairs), as it requires more than generating a single sentence. This results in a small number of publicly accessible QAG models. In this paper, we introduce AutoQG, an online service for multilingual QAG, along with lmqg, an all-in-one Python package for model fine-tuning, generation, and evaluation. We also release QAG models in eight languages fine-tuned on a few variants of pre-trained encoder-decoder language models, which can be used online via AutoQG or locally via lmqg. With these resources, practitioners of any level can benefit from a toolkit that includes a web interface for end users, and easy-to-use code for developers who require custom models or fine-grained controls for generation.

* Accepted by ACL 2023 System Demonstration 
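A minimal usage sketch of the lmqg package described above. The TransformersQG class and generate_qa method follow the package's documented interface, but exact model names and signatures should be checked against the lmqg README.

    # Minimal sketch of question-answer generation with the lmqg package
    # (pip install lmqg). Class and method names follow the package
    # documentation; consult the lmqg README for exact model names.
    from lmqg import TransformersQG

    # Load a QAG model for English (other languages or specific fine-tuned
    # checkpoints can be selected via the constructor arguments).
    model = TransformersQG(language="en")

    context = (
        "William Turner was an English painter who specialised in watercolour "
        "landscapes. He is often known as William Turner of Oxford."
    )

    # generate_qa returns a list of (question, answer) pairs for the context.
    qa_pairs = model.generate_qa(context)
    for question, answer in qa_pairs:
        print(question, "->", answer)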

An Empirical Comparison of LM-based Question and Answer Generation Methods

May 26, 2023
Asahi Ushio, Fernando Alva-Manchego, Jose Camacho-Collados

Question and answer generation (QAG) consists of generating a set of question-answer pairs given a context (e.g. a paragraph). This task has a variety of applications, such as data augmentation for question answering (QA) models, information retrieval and education. In this paper, we establish baselines with three different QAG methodologies that leverage sequence-to-sequence language model (LM) fine-tuning. Experiments show that an end-to-end QAG model, which is computationally light at both training and inference times, is generally robust and outperforms other more convoluted approaches. However, there are differences depending on the underlying generative LM. Finally, our analysis shows that QA models fine-tuned solely on generated question-answer pairs can be competitive when compared to supervised QA models trained on human-labeled data.

* Accepted by ACL 2023 Findings 
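As an illustration of the end-to-end QAG setup, the sketch below shows one common way to serialise a paragraph and its question-answer pairs into a single sequence-to-sequence training example. The prefixes and separator tokens are illustrative, not necessarily those used in the paper.

    # Sketch of an end-to-end QAG training example serialised as a single
    # sequence-to-sequence pair: input is the paragraph, target is all
    # question-answer pairs joined by separator tokens (illustrative only).

    def serialise_example(paragraph: str, qa_pairs: list[tuple[str, str]]) -> dict:
        target = " | ".join(f"question: {q}, answer: {a}" for q, a in qa_pairs)
        return {"source": f"generate question and answer: {paragraph}", "target": target}

    example = serialise_example(
        "Paris is the capital of France. It is known for the Eiffel Tower.",
        [("What is the capital of France?", "Paris"),
         ("What is Paris known for?", "the Eiffel Tower")],
    )
    print(example["source"])
    print(example["target"])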

Generative Language Models for Paragraph-Level Question Generation

Oct 08, 2022
Asahi Ushio, Fernando Alva-Manchego, Jose Camacho-Collados

Powerful generative models have led to recent progress in question generation (QG). However, it is difficult to measure advances in QG research since there are no standardized resources that allow a uniform comparison among approaches. In this paper, we introduce QG-Bench, a multilingual and multidomain benchmark for QG that unifies existing question answering datasets by converting them to a standard QG setting. It includes general-purpose datasets such as SQuAD for English, datasets from ten domains and two styles, as well as datasets in eight different languages. Using QG-Bench as a reference, we perform an extensive analysis of the capabilities of language models for the task. First, we propose robust QG baselines based on fine-tuning generative language models. Then, we complement automatic evaluation based on standard metrics with an extensive manual evaluation, which in turn sheds light on the difficulty of evaluating QG models. Finally, we analyse both the domain adaptability of these models as well as the effectiveness of multilingual models in languages other than English. QG-Bench is released along with the fine-tuned models presented in the paper https://github.com/asahi417/lm-question-generation, which are also available as a demo https://autoqg.net/.

* EMNLP 2022 main conference 
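A sketch of loading one of the released QG-Bench datasets from the Hugging Face Hub. The repository id lmqg/qg_squad and the column names are assumptions based on the project's public release and should be verified against the linked GitHub repository.

    # Sketch of loading a QG-Bench dataset; dataset id and column names
    # are assumptions based on the released resources.
    from datasets import load_dataset

    dataset = load_dataset("lmqg/qg_squad", split="test")
    example = dataset[0]

    # Each record pairs a paragraph and a highlighted answer with a gold question.
    print(example["paragraph"][:200])
    print(example["question"])
    print(example["answer"])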

Knowledge Distillation for Quality Estimation

Jul 01, 2021
Amit Gajbhiye, Marina Fomicheva, Fernando Alva-Manchego, Frédéric Blain, Abiola Obamuyide, Nikolaos Aletras, Lucia Specia

Quality Estimation (QE) is the task of automatically predicting Machine Translation quality in the absence of reference translations, making it applicable in real-time settings, such as translating online social media conversations. Recent success in QE stems from the use of multilingual pre-trained representations, where very large models lead to impressive results. However, the inference time, disk and memory requirements of such models do not allow for wide usage in the real world. Models trained on distilled pre-trained representations remain prohibitively large for many usage scenarios. We instead propose to directly transfer knowledge from a strong QE teacher model to a much smaller model with a different, shallower architecture. We show that this approach, in combination with data augmentation, leads to light-weight QE models that perform competitively with distilled pre-trained representations with 8x fewer parameters.

* ACL Findings 2021 
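A minimal sketch of the distillation idea for sentence-level QE, where a small student regressor is trained to reproduce the scores of a large teacher model. The architecture, feature dimensionality, and data below are placeholders rather than the paper's setup.

    # Minimal sketch of QE distillation: a small student regressor is
    # trained to match scores produced by a large teacher QE model.
    # The architecture and data are placeholders, not the paper's models.
    import torch
    import torch.nn as nn

    student = nn.Sequential(nn.Linear(768, 256), nn.ReLU(), nn.Linear(256, 1))
    optimiser = torch.optim.Adam(student.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()

    def distillation_step(features: torch.Tensor, teacher_scores: torch.Tensor) -> float:
        """One training step: regress the student's score onto the teacher's."""
        student_scores = student(features).squeeze(-1)
        loss = loss_fn(student_scores, teacher_scores)
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()
        return loss.item()

    # Dummy batch: 8 sentence-pair feature vectors and teacher-predicted scores.
    features = torch.randn(8, 768)
    teacher_scores = torch.rand(8)
    print(distillation_step(features, teacher_scores))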

Controllable Text Simplification with Explicit Paraphrasing

Oct 21, 2020
Mounica Maddela, Fernando Alva-Manchego, Wei Xu

Text Simplification improves the readability of sentences through several rewriting transformations, such as lexical paraphrasing, deletion, and splitting. Current simplification systems are predominantly sequence-to-sequence models that are trained end-to-end to perform all these operations simultaneously. However, such systems limit themselves to mostly deleting words and cannot easily adapt to the requirements of different target audiences. In this paper, we propose a novel hybrid approach that leverages linguistically-motivated rules for splitting and deletion, and couples them with a neural paraphrasing model to produce varied rewriting styles. We introduce a new data augmentation method to improve the paraphrasing capability of our model. Through automatic and manual evaluations, we show that our proposed model establishes a new state-of-the-art for the task, paraphrasing more often than the existing systems, and can control the degree of each simplification operation applied to the input texts.
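A highly simplified sketch of the hybrid pipeline described above: a toy splitting rule followed by a placeholder paraphrasing step. The paper's system uses linguistically-motivated rules and a trained neural paraphrasing model rather than the stand-ins shown here.

    # Toy sketch of the hybrid idea: rule-based splitting, then paraphrasing.
    # The splitting rule and paraphrase() placeholder are illustrative only.

    def rule_based_split(sentence: str) -> list[str]:
        """Toy rule: split a compound sentence at ', and' into two clauses."""
        if ", and " in sentence:
            left, right = sentence.split(", and ", 1)
            return [left.strip() + ".", right.strip().capitalize()]
        return [sentence]

    def paraphrase(clause: str) -> str:
        """Placeholder for a neural paraphrasing model that rewrites each clause."""
        return clause  # a real system would return a simpler rewording here

    sentence = ("The committee approved the proposal, and "
                "the funding will be allocated next quarter.")
    simplified = [paraphrase(clause) for clause in rule_based_split(sentence)]
    print(" ".join(simplified))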

ASSET: A Dataset for Tuning and Evaluation of Sentence Simplification Models with Multiple Rewriting Transformations

May 01, 2020
Fernando Alva-Manchego, Louis Martin, Antoine Bordes, Carolina Scarton, Benoît Sagot, Lucia Specia

In order to simplify a sentence, human editors perform multiple rewriting transformations: they split it into several shorter sentences, paraphrase words (i.e. replace complex words or phrases with simpler synonyms), reorder components, and/or delete information deemed unnecessary. Despite this varied range of possible text alterations, current models for automatic sentence simplification are evaluated using datasets that are focused on a single transformation, such as lexical paraphrasing or splitting. This makes it impossible to understand the ability of simplification models in more realistic settings. To alleviate this limitation, this paper introduces ASSET, a new dataset for assessing sentence simplification in English. ASSET is a crowdsourced multi-reference corpus where each simplification was produced by executing several rewriting transformations. Through quantitative and qualitative experiments, we show that simplifications in ASSET are better at capturing characteristics of simplicity when compared to other standard evaluation datasets for the task. Furthermore, we motivate the need for developing better methods for automatic evaluation using ASSET, since we show that current popular metrics may not be suitable when multiple simplification transformations are performed.

* Accepted to ACL 2020 (camera-ready version) 
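A sketch of loading ASSET with the Hugging Face datasets library. The dataset id facebook/asset, the configuration name, and the field names are assumptions based on the public release; the data is also available from the facebookresearch/asset repository.

    # Sketch of loading the ASSET corpus; dataset id, config, and field
    # names are assumptions based on the public release.
    from datasets import load_dataset

    asset = load_dataset("facebook/asset", "simplification", split="validation")
    example = asset[0]

    # Each original sentence is paired with multiple crowdsourced references
    # produced with several rewriting transformations.
    print(example["original"])
    for reference in example["simplifications"]:
        print(" -", reference)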

EASSE: Easier Automatic Sentence Simplification Evaluation

Sep 13, 2019
Fernando Alva-Manchego, Louis Martin, Carolina Scarton, Lucia Specia

We introduce EASSE, a Python package aiming to facilitate and standardise automatic evaluation and comparison of Sentence Simplification (SS) systems. EASSE provides a single access point to a broad range of evaluation resources: standard automatic metrics for assessing SS outputs (e.g. SARI), word-level accuracy scores for certain simplification transformations, reference-independent quality estimation features (e.g. compression ratio), and standard test data for SS evaluation (e.g. TurkCorpus). Finally, EASSE generates easy-to-visualise reports on the various metrics and features above and on how a particular SS output fares against reference simplifications. Through experiments, we show that these functionalities allow for better comparison and understanding of the performance of SS systems.

* EMNLP-IJCNLP 2019 Demo (Camera-ready Version) 
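A minimal sketch of scoring simplification outputs with EASSE. The corpus_sari entry point follows the package's documented interface, but the sentences below are toy examples and the exact signature should be checked against the EASSE repository.

    # Minimal sketch of scoring system outputs with EASSE (pip install easse).
    # corpus_sari follows the package's documented interface; toy data only.
    from easse.sari import corpus_sari

    orig_sents = ["The cat perched atop the refrigerator."]
    sys_sents = ["The cat sat on top of the fridge."]
    # One inner list per reference set, aligned with the original sentences.
    refs_sents = [
        ["The cat sat on the fridge."],
        ["The cat was on top of the fridge."],
    ]

    print(corpus_sari(orig_sents=orig_sents, sys_sents=sys_sents, refs_sents=refs_sents))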

Strong Baselines for Complex Word Identification across Multiple Languages

Apr 11, 2019
Pierre Finnimore, Elisabeth Fritzsch, Daniel King, Alison Sneyd, Aneeq Ur Rehman, Fernando Alva-Manchego, Andreas Vlachos

Complex Word Identification (CWI) is the task of identifying which words or phrases in a sentence are difficult to understand by a target audience. The latest CWI Shared Task released data for two settings: monolingual (i.e. train and test in the same language) and cross-lingual (i.e. test in a language not seen during training). The best monolingual models relied on language-dependent features, which do not generalise in the cross-lingual setting, while the best cross-lingual model used neural networks with multi-task learning. In this paper, we present monolingual and cross-lingual CWI models that perform as well as (or better than) most models submitted to the latest CWI Shared Task. We show that carefully selected features and simple learning models can achieve state-of-the-art performance, and result in strong baselines for future development in this area. Finally, we discuss how inconsistencies in the annotation of the data can explain some of the results obtained.

* NAACL 2019 
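A sketch of the kind of simple feature-based baseline described above: a handful of surface features fed to a linear classifier. The features and training words are illustrative, not the paper's feature set.

    # Sketch of a simple feature-based CWI baseline: surface features plus
    # a linear classifier. Features and training data are illustrative only.
    from sklearn.linear_model import LogisticRegression

    def features(word: str) -> list[float]:
        vowels = sum(ch in "aeiou" for ch in word.lower())
        return [len(word), vowels, float(word.lower() != word)]

    train_words = ["cat", "run", "ubiquitous", "photosynthesis", "house", "ephemeral"]
    train_labels = [0, 0, 1, 1, 0, 1]  # 1 = complex, 0 = simple

    clf = LogisticRegression().fit([features(w) for w in train_words], train_labels)
    print(clf.predict([features("serendipitous"), features("dog")]))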