Shruti Rijhwani

XTREME-UP: A User-Centric Scarce-Data Benchmark for Under-Represented Languages

May 24, 2023
Sebastian Ruder, Jonathan H. Clark, Alexander Gutkin, Mihir Kale, Min Ma, Massimo Nicosia, Shruti Rijhwani, Parker Riley, Jean-Michel A. Sarr, Xinyi Wang, John Wieting, Nitish Gupta, Anna Katanova, Christo Kirov, Dana L. Dickinson, Brian Roark, Bidisha Samanta, Connie Tao, David I. Adelani, Vera Axelrod, Isaac Caswell, Colin Cherry, Dan Garrette, Reeve Ingle, Melvin Johnson, Dmitry Panteleev, Partha Talukdar

Data scarcity is a crucial issue for the development of highly multilingual NLP systems. Yet for many under-represented languages (ULs) -- languages for which NLP research is particularly far behind in meeting user needs -- it is feasible to annotate small amounts of data. Motivated by this, we propose XTREME-UP, a benchmark defined by: its focus on the scarce-data scenario rather than zero-shot; its focus on user-centric tasks -- tasks with broad adoption by speakers of high-resource languages; and its focus on under-represented languages where this scarce-data scenario tends to be most realistic. XTREME-UP evaluates the capabilities of language models across 88 under-represented languages over 9 key user-centric technologies including ASR, OCR, MT, and information access tasks that are of general utility. We create new datasets for OCR, autocomplete, semantic parsing, and transliteration, and build on and refine existing datasets for other tasks. XTREME-UP provides methodology for evaluating many modeling scenarios including text-only, multi-modal (vision, audio, and text), supervised parameter tuning, and in-context learning. We evaluate commonly used models on the benchmark. We release all code and scripts to train and evaluate models.

SEAHORSE: A Multilingual, Multifaceted Dataset for Summarization Evaluation

May 22, 2023
Elizabeth Clark, Shruti Rijhwani, Sebastian Gehrmann, Joshua Maynez, Roee Aharoni, Vitaly Nikolaev, Thibault Sellam, Aditya Siddhant, Dipanjan Das, Ankur P. Parikh

Reliable automatic evaluation of summarization systems is challenging due to the multifaceted and subjective nature of the task. This is especially the case for languages other than English, where human evaluations are scarce. In this work, we introduce SEAHORSE, a dataset for multilingual, multifaceted summarization evaluation. SEAHORSE consists of 96K summaries with human ratings along 6 quality dimensions: comprehensibility, repetition, grammar, attribution, main ideas, and conciseness, covering 6 languages, 9 systems and 4 datasets. As a result of its size and scope, SEAHORSE can serve both as a benchmark for evaluating learnt metrics and as a large-scale resource for training such metrics. We show that metrics trained with SEAHORSE achieve strong performance on the out-of-domain meta-evaluation benchmarks TRUE (Honovich et al., 2022) and mFACE (Aharoni et al., 2022). We make SEAHORSE publicly available for future research on multilingual and multifaceted summarization evaluation.

User-Centric Evaluation of OCR Systems for Kwak'wala

Feb 26, 2023
Shruti Rijhwani, Daisy Rosenblum, Michayla King, Antonios Anastasopoulos, Graham Neubig

There has been recent interest in improving optical character recognition (OCR) for endangered languages, particularly because a large number of documents and books in these languages are not in machine-readable formats. The performance of OCR systems is typically evaluated using automatic metrics such as character and word error rates. While error rates are useful for the comparison of different models and systems, they do not measure whether and how the transcriptions produced from OCR tools are useful to downstream users. In this paper, we present a human-centric evaluation of OCR systems, focusing on the Kwak'wala language as a case study. With a user study, we show that utilizing OCR reduces the time spent in the manual transcription of culturally valuable documents -- a task that is often undertaken by endangered language community members and researchers -- by over 50%. Our results demonstrate the potential benefits that OCR tools can have on downstream language documentation and revitalization efforts.

* Accepted to the Sixth Workshop on Computational Methods in the Study of Endangered Languages (ComputEL 2023) 

Lexically Aware Semi-Supervised Learning for OCR Post-Correction

Nov 04, 2021
Shruti Rijhwani, Daisy Rosenblum, Antonios Anastasopoulos, Graham Neubig

Much of the existing linguistic data in many languages of the world is locked away in non-digitized books and documents. Optical character recognition (OCR) can be used to produce digitized text, and previous work has demonstrated the utility of neural post-correction methods that improve the results of general-purpose OCR systems on recognition of less-well-resourced languages. However, these methods rely on manually curated post-correction data, which are relatively scarce compared to the non-annotated raw images that need to be digitized. In this paper, we present a semi-supervised learning method that makes it possible to utilize these raw images to improve performance, specifically through the use of self-training, a technique where a model is iteratively trained on its own outputs. In addition, to enforce consistency in the recognized vocabulary, we introduce a lexically-aware decoding method that augments the neural post-correction model with a count-based language model constructed from the recognized texts, implemented using weighted finite-state automata (WFSA) for efficient and effective decoding. Results on four endangered languages demonstrate the utility of the proposed method, with relative error reductions of 15-29%, where we find the combination of self-training and lexically-aware decoding essential for achieving consistent improvements. Data and code are available at https://shrutirij.github.io/ocr-el/.
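The lexically aware decoding idea -- combining neural post-correction scores with a count-based language model built from the recognized texts -- can be sketched as follows. This is a minimal illustration under assumed names (`build_count_lm`, `rerank`) and a simple linear interpolation; the paper implements the count LM as a weighted finite-state automaton rather than the plain counter used here.

```python
import math
from collections import Counter

def build_count_lm(recognized_texts):
    """Count word frequencies over the texts recognized so far.
    Stands in for the paper's WFSA-encoded count-based language model."""
    counts = Counter()
    for text in recognized_texts:
        counts.update(text.split())
    return counts, sum(counts.values())

def lexical_score(candidate, counts, total, alpha=1.0):
    """Add-alpha smoothed log-probability of a candidate correction."""
    vocab = len(counts) + 1  # +1 reserves mass for unseen words
    return sum(
        math.log((counts[w] + alpha) / (total + alpha * vocab))
        for w in candidate.split()
    )

def rerank(candidates, neural_scores, counts, total, lam=0.5):
    """Pick the candidate that maximizes an interpolation of the neural
    post-correction score and the lexical LM score."""
    scored = [
        (lam * ns + (1 - lam) * lexical_score(c, counts, total), c)
        for c, ns in zip(candidates, neural_scores)
    ]
    return max(scored)[1]
```

In the self-training loop, the model's own outputs on raw scanned pages would be appended to `recognized_texts` at each iteration, so the lexicon grows as more of the collection is digitized.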

* Accepted to the Transactions of the Association for Computational Linguistics (TACL) 

Dependency Induction Through the Lens of Visual Perception

Sep 20, 2021
Ruisi Su, Shruti Rijhwani, Hao Zhu, Junxian He, Xinyu Wang, Yonatan Bisk, Graham Neubig

Most previous work on grammar induction focuses on learning phrasal or dependency structure purely from text. However, because the signal provided by text alone is limited, recently introduced visually grounded syntax models make use of multimodal information, leading to improved performance in constituency grammar induction. Unlike dependency grammars, however, constituency grammars do not provide a straightforward way to incorporate visual information without enforcing language-specific heuristics. In this paper, we propose an unsupervised grammar induction model that leverages word concreteness and a structural vision-based heuristic to jointly learn constituency-structure and dependency-structure grammars. Our experiments find that concreteness is a strong indicator for learning dependency grammars, improving the direct attachment score (DAS) by over 50% compared to state-of-the-art models trained on pure text. Next, we propose an extension of our model that leverages both word concreteness and visual semantic role labels in constituency and dependency parsing. Our experiments show that the proposed extension outperforms the current state-of-the-art visually grounded models in constituency parsing even with a smaller grammar size.

* Accepted to CoNLL 2021 

Evaluating the Morphosyntactic Well-formedness of Generated Texts

Mar 30, 2021
Adithya Pratapa, Antonios Anastasopoulos, Shruti Rijhwani, Aditi Chaudhary, David R. Mortensen, Graham Neubig, Yulia Tsvetkov

Text generation systems are ubiquitous in natural language processing applications. However, evaluation of these systems remains a challenge, especially in multilingual settings. In this paper, we propose L'AMBRE -- a metric to evaluate the morphosyntactic well-formedness of text using its dependency parse and morphosyntactic rules of the language. We present a way to automatically extract various rules governing morphosyntax directly from dependency treebanks. To tackle the noisy outputs from text generation systems, we propose a simple methodology to train robust parsers. We show the effectiveness of our metric on the task of machine translation through a diachronic study of systems translating into morphologically-rich languages.
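As a toy illustration of applying rules of this kind, the sketch below checks morphological agreement along dependency arcs, assuming CoNLL-U-style token dicts. The rule format and the single aggregate score are simplified, hypothetical stand-ins, not the metric's actual implementation.

```python
def check_agreement(tokens, rules):
    """tokens: dicts with 'id', 'head', 'deprel', 'feats' (CoNLL-U-style).
    rules: map from dependency relation to the set of morphological
    features that must match between head and dependent.
    Returns the fraction of applicable arcs whose features agree."""
    by_id = {t["id"]: t for t in tokens}
    checked = satisfied = 0
    for t in tokens:
        feats_to_match = rules.get(t["deprel"])
        if not feats_to_match or t["head"] not in by_id:
            continue  # no rule for this relation, or head is the root
        head = by_id[t["head"]]
        for f in feats_to_match:
            # Only score the arc when both tokens carry the feature
            if f in t["feats"] and f in head["feats"]:
                checked += 1
                satisfied += t["feats"][f] == head["feats"][f]
    return satisfied / checked if checked else 1.0
```

In the paper's setting, the `rules` table itself would be extracted automatically from dependency treebanks rather than written by hand.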

MasakhaNER: Named Entity Recognition for African Languages

Mar 22, 2021
David Ifeoluwa Adelani, Jade Abbott, Graham Neubig, Daniel D'souza, Julia Kreutzer, Constantine Lignos, Chester Palen-Michel, Happy Buzaaba, Shruti Rijhwani, Sebastian Ruder, Stephen Mayhew, Israel Abebe Azime, Shamsuddeen Muhammad, Chris Chinenye Emezue, Joyce Nakatumba-Nabende, Perez Ogayo, Anuoluwapo Aremu, Catherine Gitau, Derguene Mbaye, Jesujoba Alabi, Seid Muhie Yimam, Tajuddeen Gwadabe, Ignatius Ezeani, Rubungo Andre Niyongabo, Jonathan Mukiibi, Verrah Otiende, Iroro Orife, Davis David, Samba Ngom, Tosin Adewumi, Paul Rayson, Mofetoluwa Adeyemi, Gerald Muriuki, Emmanuel Anebi, Chiamaka Chukwuneke, Nkiruka Odu, Eric Peter Wairagala, Samuel Oyerinde, Clemencia Siro, Tobius Saul Bateesa, Temilola Oloyede, Yvonne Wambui, Victor Akinode, Deborah Nabagereka, Maurice Katusiime, Ayodele Awokoya, Mouhamadane MBOUP, Dibora Gebreyohannes, Henok Tilaye, Kelechi Nwaike, Degaga Wolde, Abdoulaye Faye, Blessing Sibanda, Orevaoghene Ahia, Bonaventure F. P. Dossou, Kelechi Ogueji, Thierno Ibrahima DIOP, Abdoulaye Diallo, Adewale Akinfaderin, Tendai Marengereke, Salomey Osei

We take a step towards addressing the under-representation of the African continent in NLP research by creating the first large publicly available high-quality dataset for named entity recognition (NER) in ten African languages, bringing together a variety of stakeholders. We detail characteristics of the languages to help researchers understand the challenges that these languages pose for NER. We analyze our datasets and conduct an extensive empirical evaluation of state-of-the-art methods across both supervised and transfer learning settings. We release the data, code, and models in order to inspire future research on African NLP.

* Accepted at the AfricaNLP Workshop @EACL 2021 

OCR Post Correction for Endangered Language Texts

Nov 10, 2020
Shruti Rijhwani, Antonios Anastasopoulos, Graham Neubig

There is little to no data available to build natural language processing models for most endangered languages. However, textual data in these languages often exists in formats that are not machine-readable, such as paper books and scanned images. In this work, we address the task of extracting text from these resources. We create a benchmark dataset of transcriptions for scanned books in three critically endangered languages and present a systematic analysis of how general-purpose OCR tools are not robust to the data-scarce setting of endangered languages. We develop an OCR post-correction method tailored to ease training in this data-scarce setting, reducing the recognition error rate by 34% on average across the three languages.

* Accepted to EMNLP 2020 

Soft Gazetteers for Low-Resource Named Entity Recognition

May 04, 2020
Shruti Rijhwani, Shuyan Zhou, Graham Neubig, Jaime Carbonell

Traditional named entity recognition models use gazetteers (lists of entities) as features to improve performance. Although modern neural network models do not require such hand-crafted features for strong performance, recent work has demonstrated their utility for named entity recognition on English data. However, designing such features for low-resource languages is challenging, because exhaustive entity gazetteers do not exist in these languages. To address this problem, we propose a method of "soft gazetteers" that incorporates ubiquitously available information from English knowledge bases, such as Wikipedia, into neural named entity recognition models through cross-lingual entity linking. Our experiments on four low-resource languages show an average improvement of 4 points in F1 score. Code and data are available at https://github.com/neulab/soft-gazetteers.
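One way such a feature might be computed is sketched below: a span's candidate KB entities (with their entity-linking scores) are mapped to entity types, and the per-type feature takes the maximum candidate score. The function name, entity IDs, and max aggregation are illustrative assumptions; the paper explores several aggregations over the top-n linked candidates.

```python
def soft_gazetteer_features(span_candidates, entity_types, types):
    """span_candidates: (kb_entity_id, linking_score) pairs for one span.
    entity_types: map from KB entity id to its list of entity types.
    types: the fixed, ordered type inventory of the NER tag set.
    Returns one soft feature value per type (max score over candidates)."""
    feats = {t: 0.0 for t in types}
    for entity, score in span_candidates:
        for t in entity_types.get(entity, []):
            feats[t] = max(feats[t], score)
    return [feats[t] for t in types]
```

The resulting vector would then be fed into the neural NER model alongside the span's learned representation.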

* Accepted at ACL 2020 

Practical Comparable Data Collection for Low-Resource Languages via Images

Apr 28, 2020
Aman Madaan, Shruti Rijhwani, Antonios Anastasopoulos, Yiming Yang, Graham Neubig

We propose a method of curating high-quality comparable training data for low-resource languages with monolingual annotators. Our method involves using a carefully selected set of images as a pivot between the source and target languages by getting captions for such images in both languages independently. Human evaluations on the English-Hindi comparable corpora created with our method show that 81.1% of the pairs are acceptable translations, and only 2.47% of the pairs are not translations at all. We further establish the potential of the dataset collected through our approach by experimenting on two downstream tasks: machine translation and dictionary extraction. All code and data are available at https://github.com/madaan/PML4DC-Comparable-Data-Collection.

* Accepted for poster presentation at the Practical Machine Learning for Developing Countries (PML4DC) workshop, ICLR 2020 