Chunlan Ma

Glot500: Scaling Multilingual Corpora and Language Models to 500 Languages

May 26, 2023
Ayyoob Imani, Peiqin Lin, Amir Hossein Kargaran, Silvia Severini, Masoud Jalili Sabet, Nora Kassner, Chunlan Ma, Helmut Schmid, André F. T. Martins, François Yvon, Hinrich Schütze

The NLP community has mainly focused on scaling Large Language Models (LLMs) vertically, i.e., making them better for about 100 languages. We instead scale LLMs horizontally: we create, through continued pretraining, Glot500-m, an LLM that covers 511 predominantly low-resource languages. An important part of this effort is to collect and clean Glot500-c, a corpus that covers these 511 languages and allows us to train Glot500-m. We evaluate Glot500-m on five diverse tasks across these languages. We observe large improvements for both high-resource and low-resource languages compared to an XLM-R baseline. Our analysis shows that no single factor explains the quality of multilingual LLM representations. Rather, quality is determined by a combination of factors, including corpus size, script, "help" from related languages, and the total capacity of the model. Our work addresses an important goal of NLP research: we should not limit NLP to a small fraction of the world's languages and instead strive to support as many languages as possible, bringing the benefits of NLP technology to all languages and cultures. Code, data, and models are available at https://github.com/cisnlp/Glot500.

* ACL 2023 

Taxi1500: A Multilingual Dataset for Text Classification in 1500 Languages

May 15, 2023
Chunlan Ma, Ayyoob ImaniGooghari, Haotian Ye, Ehsaneddin Asgari, Hinrich Schütze

While natural language processing tools have been developed extensively for some of the world's languages, a significant portion of the world's over 7,000 languages remains neglected. One reason for this is that evaluation datasets do not yet cover a wide range of languages, including low-resource and endangered ones. We aim to address this issue by creating a text classification dataset encompassing a large number of languages, many of which currently have little to no annotated data available. We leverage parallel translations of the Bible to construct such a dataset, first developing applicable topics and then employing a crowdsourcing tool to collect annotated data. By annotating the English side of the data and projecting the labels onto other languages through aligned verses, we generate text classification datasets for more than 1,500 languages. We extensively benchmark several existing multilingual language models using our dataset. To facilitate the advancement of research in this area, we will release our dataset and code.
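The projection step described above can be illustrated with a minimal sketch: English verses are labeled once, and the labels are carried to every other language through shared verse identifiers. The function name, verse IDs, texts, and topic labels below are all invented for illustration and are not from the Taxi1500 release.

```python
def project_labels(english_labels, target_verses):
    """Project English verse-level labels onto a target language.

    english_labels: dict mapping verse ID -> topic label (annotated once, in English)
    target_verses:  dict mapping verse ID -> verse text in the target language
    Returns a list of (text, label) pairs for verses present on both sides.
    """
    return [
        (text, english_labels[verse_id])
        for verse_id, text in target_verses.items()
        if verse_id in english_labels  # keep only verses that received an English label
    ]

# Toy example: made-up verse IDs, German verse texts, and hypothetical topic labels.
english_labels = {"GEN_1_1": "creation", "JHN_3_16": "love"}
german_verses = {
    "GEN_1_1": "Am Anfang schuf Gott Himmel und Erde.",
    "JHN_3_16": "Denn also hat Gott die Welt geliebt ...",
    "PSA_23_1": "Der HERR ist mein Hirte.",  # no English label -> dropped
}

dataset = project_labels(english_labels, german_verses)
```

Because the Bible's verse structure is (largely) parallel across translations, the same English annotation effort yields a labeled classification dataset for every language with an aligned translation.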
