Investigating Text Simplification Evaluation

Jul 28, 2021
Laura Vásquez-Rodríguez, Matthew Shardlow, Piotr Przybyła, Sophia Ananiadou

Figures 1–4 for Investigating Text Simplification Evaluation

Modern text simplification (TS) relies heavily on the availability of gold-standard data to build machine learning models. However, existing studies show that parallel TS corpora contain inaccurate simplifications and incorrect alignments. Additionally, evaluation is usually performed using metrics such as BLEU or SARI to compare system output against the gold standard. A major limitation is that these metrics do not match human judgements, and performance varies greatly across datasets and linguistic phenomena. Furthermore, our research shows that the test and training subsets of parallel datasets differ significantly. In this work, we investigate existing TS corpora, providing new insights that will motivate the improvement of existing state-of-the-art TS evaluation methods. Our contributions include an analysis of TS corpora based on the modifications used for simplification and an empirical study of TS model performance on better-distributed datasets. We demonstrate that by improving the distribution of TS datasets, we can build more robust TS models.
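To illustrate the kind of overlap-based metric the abstract critiques, here is a minimal sketch of sentence-level BLEU against a single reference, written from the standard definition (uniform n-gram weights, add-one smoothing, brevity penalty). This is an illustrative toy, not the implementation used in the paper; production evaluation would use a library such as sacrebleu or EASSE.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU-4 against one reference.

    Uses uniform weights, add-one smoothing, and the standard
    brevity penalty. Illustrative only.
    """
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(candidate, n))
        ref_counts = Counter(ngrams(reference, n))
        # clipped n-gram matches: each candidate n-gram counts at most
        # as often as it appears in the reference
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        # add-one smoothing so one empty n-gram order does not zero the score
        precisions.append((overlap + 1) / (total + 1))
    log_prec = sum(math.log(p) for p in precisions) / max_n
    # brevity penalty: punish candidates shorter than the reference
    bp = min(1.0, math.exp(1 - len(reference) / max(len(candidate), 1)))
    return bp * math.exp(log_prec)

system_output = "the cat sat on the mat".split()
gold_standard = "the cat sat on the mat".split()
print(round(bleu(system_output, gold_standard), 3))  # → 1.0
```

Note how the score depends only on surface n-gram overlap with the gold standard: a simplification that paraphrases well but shares few n-grams scores poorly, which is one reason such metrics can diverge from human judgements.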

* Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 876-882  
* 7 pages, 3 figures, 1 table 