Alexander Fraser

Multilingual Word Embeddings for Low-Resource Languages using Anchors and a Chain of Related Languages

Nov 21, 2023
Viktor Hangya, Silvia Severini, Radoslav Ralev, Alexander Fraser, Hinrich Schütze

Very low-resource languages, with only a few million tokens of data, are not well supported by multilingual NLP approaches due to poor-quality cross-lingual word representations. Recent work showed that good cross-lingual performance can be achieved if a source language is related to the low-resource target language. However, not all language pairs are related. In this paper, we propose to build multilingual word embeddings (MWEs) via a novel language-chain-based approach that incorporates intermediate related languages to bridge the gap between the distant source and target. We build MWEs one language at a time, starting from the resource-rich source and sequentially adding each language in the chain until we reach the target. We extend a semi-joint bilingual approach to multiple languages in order to eliminate the main weakness of previous work, i.e., independently trained monolingual embeddings, by anchoring the target language in the multilingual space. We evaluate our method on bilingual lexicon induction for 4 language families, involving 4 very low-resource (<5M tokens) and 4 moderately low-resource (<50M tokens) target languages, showing improved performance in both categories. Additionally, our analysis reveals the importance of good-quality embeddings for intermediate languages as well as of leveraging anchor points from all languages in the multilingual space.
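
The anchoring idea can be illustrated with a small sketch (not the paper's code): when a new language in the chain is added, words that already have positions in the multilingual space (anchor points) are initialised from that space and kept fixed, so the new language's embeddings are trained around them rather than independently. All names, sizes and indices below are illustrative assumptions.

```python
import torch
import torch.nn as nn

vocab_size, dim = 10_000, 300
emb = nn.Embedding(vocab_size, dim)  # embeddings of the language being added

# Hypothetical anchors: word ids whose vectors already exist in the multilingual space.
anchor_ids = torch.tensor([0, 5, 42])
anchor_vecs = torch.randn(len(anchor_ids), dim)  # stand-in for existing MWE vectors
with torch.no_grad():
    emb.weight[anchor_ids] = anchor_vecs

def freeze_anchor_rows(grad):
    # Zero the gradient of the anchor rows so they stay fixed while the rest of the
    # vocabulary is trained (e.g. with a skip-gram objective) around them.
    grad = grad.clone()
    grad[anchor_ids] = 0.0
    return grad

emb.weight.register_hook(freeze_anchor_rows)
```

Repeating such a step for each language in the chain, from the source towards the target, keeps every newly added language tied to the same shared space.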

* Accepted at the MRL 2023 workshop 

Extending Multilingual Machine Translation through Imitation Learning

Nov 14, 2023
Wen Lai, Viktor Hangya, Alexander Fraser

Despite the growing variety of languages supported by existing multilingual neural machine translation (MNMT) models, most of the world's languages are still being left behind. We aim to extend large-scale MNMT models to a new language, allowing for translation between the newly added language and all of the already supported languages in a challenging scenario: using only a parallel corpus between the new language and English. Previous approaches, such as continued training on parallel data including the new language, suffer from catastrophic forgetting (i.e., performance on other languages is reduced). Our novel approach Imit-MNMT treats the task as an imitation learning process, which mimics the behavior of an expert, a technique widely used in computer vision but not well explored in NLP. More specifically, we construct a pseudo multi-parallel corpus of the new and the original languages by pivoting through English, and imitate the output distribution of the original MNMT model. Extensive experiments show that our approach significantly improves the translation performance between the new and the original languages, without severe catastrophic forgetting. We also demonstrate that our approach is capable of solving the copy and off-target problems, two common issues in current large-scale MNMT models.
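
As a rough sketch of the imitation signal only (not the paper's implementation): on a pivoted pseudo-parallel example, the extended model is trained to match the output distribution of the frozen original MNMT model, for instance with a KL term. Function and variable names below are illustrative.

```python
import torch.nn.functional as F

def imitation_loss(student_logits, expert_logits, temperature=1.0):
    # KL divergence between the expert distribution (frozen original MNMT model)
    # and the student distribution (model being extended) over the vocabulary.
    p_expert = F.softmax(expert_logits / temperature, dim=-1)
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(log_p_student, p_expert, reduction="batchmean")
```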

Exploring Anisotropy and Outliers in Multilingual Language Models for Cross-Lingual Semantic Sentence Similarity

Jun 07, 2023
Katharina Hämmerl, Alina Fastowski, Jindřich Libovický, Alexander Fraser

Previous work has shown that the representations output by contextual language models are more anisotropic than static type embeddings and typically display outlier dimensions. This seems to be true for both monolingual and multilingual models, although much less work has been done in the multilingual context. Why these outliers occur and how they affect the representations is still an active area of research. We investigate outlier dimensions and their relationship to anisotropy in multiple pre-trained multilingual language models. We focus on cross-lingual semantic similarity tasks, as these are natural tasks for evaluating multilingual representations. Specifically, we examine sentence representations. Sentence transformers that are fine-tuned on parallel resources (which are not always available) perform better on this task, and we show that their representations are more isotropic. However, we aim to improve multilingual representations in general. We investigate how much of the performance difference can be made up by only transforming the embedding space without fine-tuning, and visualise the resulting spaces. We test different operations: removing individual outlier dimensions, cluster-based isotropy enhancement, and ZCA whitening. We publish our code for reproducibility.
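
Two of the transformations named above can be sketched briefly (a minimal illustration, not the paper's released code): dropping suspected outlier dimensions and ZCA whitening of the sentence-embedding matrix. Shapes and variable names are assumptions.

```python
import numpy as np

def remove_dims(X, outlier_dims):
    # Drop suspected outlier dimensions (columns) from an (n_sentences, dim) matrix.
    keep = [i for i in range(X.shape[1]) if i not in set(outlier_dims)]
    return X[:, keep]

def zca_whiten(X, eps=1e-5):
    # Centre the embeddings, then rotate and rescale so their covariance becomes
    # (approximately) the identity, which removes anisotropy.
    Xc = X - X.mean(axis=0, keepdims=True)
    cov = Xc.T @ Xc / (len(Xc) - 1)
    U, S, _ = np.linalg.svd(cov)
    W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T
    return Xc @ W
```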

* To appear in ACL Findings 2023. Fixed a citation in this version 

On the Copying Problem of Unsupervised NMT: A Training Schedule with a Language Discriminator Loss

Jun 04, 2023
Yihong Liu, Alexandra Chronopoulou, Hinrich Schütze, Alexander Fraser

Although unsupervised neural machine translation (UNMT) has achieved success in many language pairs, the copying problem, i.e., directly copying some parts of the input sentence as the translation, is common among distant language pairs, especially when low-resource languages are involved. We find that this issue is closely related to an unexpected copying behavior during online back-translation (BT). In this work, we propose a simple but effective training schedule that incorporates a language discriminator loss. The loss imposes constraints on the intermediate translation so that the translation is in the desired language. By conducting extensive experiments on different language pairs, including similar and distant pairs and both high- and low-resource languages, we find that our method alleviates the copying problem, thus improving translation performance on low-resource languages.
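
A minimal sketch of how such a term could be combined with the back-translation objective (illustrative only; the discriminator architecture and weighting here are assumptions, not the paper's exact setup): a classifier predicts the language of the intermediate translation, and its cross-entropy against the desired target language is added to the usual NMT loss.

```python
import torch.nn.functional as F

def bt_loss_with_language_discriminator(nmt_loss, disc_logits, target_lang_ids, weight=1.0):
    # disc_logits: (batch, num_languages) predictions of a language classifier on the
    # intermediate translation; target_lang_ids: the language it should be in.
    # Copies of the source tend to be classified as the source language and get penalised.
    disc_loss = F.cross_entropy(disc_logits, target_lang_ids)
    return nmt_loss + weight * disc_loss
```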

* IWSLT 2023 

How to Solve Few-Shot Abusive Content Detection Using the Data We Actually Have

May 23, 2023
Viktor Hangya, Alexander Fraser

Due to the broad range of social media platforms and their user groups, the requirements of abusive language detection systems are varied and ever-changing. A large set of annotated corpora with different properties and label sets, e.g., for hate or misogyny detection, has already been created, but the form and targets of abusive speech are constantly changing. Since the annotation of new corpora is expensive, in this work we leverage the datasets we already have, covering a wide range of tasks related to abusive language detection, in order to build models cheaply for a new target label set and/or language, using only a few training examples of the target domain. We propose a two-step approach: first we train our model in a multitask fashion; we then carry out few-shot adaptation to the target requirements. Our experiments show that by leveraging existing datasets and only a few examples of the target task, model performance can be improved not only monolingually but across languages as well. Our analysis also shows that our models acquire a general understanding of abusive language, since they improve the prediction of labels which are present only in the target dataset. We also analyze the trade-off between specializing the existing datasets to a given target setup for best performance and the negative effects of this specialization on model adaptability.
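
The two-step recipe can be sketched as follows (an illustration under assumed names and label sets, not the authors' code): a shared encoder with one classification head per existing dataset is trained multitask-style, and a fresh head for the new label set is then fine-tuned on the few available target examples.

```python
import torch.nn as nn

class MultiTaskHeads(nn.Module):
    # Shared (pretrained multilingual) encoder output goes into one head per dataset.
    def __init__(self, hidden=768, label_sets=None):
        super().__init__()
        label_sets = label_sets or {"hate": 2, "misogyny": 2, "offense": 3}  # illustrative
        self.heads = nn.ModuleDict({task: nn.Linear(hidden, n) for task, n in label_sets.items()})

    def forward(self, pooled, task):
        return self.heads[task](pooled)

# Step 1: train encoder + heads jointly on all existing datasets (multitask training).
# Step 2: add a new head for the target label set and fine-tune on the handful of
#         target-language / target-label examples (few-shot adaptation).
```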

Mitigating Data Imbalance and Representation Degeneration in Multilingual Machine Translation

May 22, 2023
Wen Lai, Alexandra Chronopoulou, Alexander Fraser

Despite advances in multilingual neural machine translation (MNMT), we argue that there are still two major challenges in this area: data imbalance and representation degeneration. The data imbalance problem refers to the imbalance in the amount of parallel corpora across language pairs, especially for long-tail languages (i.e., very low-resource languages). The representation degeneration problem refers to the tendency of encoded tokens to occupy only a small subspace of the full space available to the MNMT model. To address these two issues, we propose Bi-ACL, a framework that uses only target-side monolingual data and a bilingual dictionary to improve the performance of the MNMT model. We define two modules, bidirectional autoencoder and bidirectional contrastive learning, which we combine with an online constrained beam search and a curriculum learning sampling strategy. Extensive experiments show that our proposed method is more effective both for long-tail languages and for high-resource languages. We also demonstrate that our approach is capable of transferring knowledge between domains and languages in zero-shot scenarios.
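
As a rough illustration of the kind of contrastive term such a module builds on (a generic InfoNCE sketch under assumed names, not the Bi-ACL implementation): paired representations, e.g. a target-side sentence and its dictionary-based counterpart, are pulled together while other items in the batch are pushed apart.

```python
import torch
import torch.nn.functional as F

def info_nce(anchors, positives, temperature=0.07):
    # anchors, positives: (batch, dim); row i of each is a positive pair.
    a = F.normalize(anchors, dim=-1)
    p = F.normalize(positives, dim=-1)
    logits = a @ p.t() / temperature                    # (batch, batch) similarities
    labels = torch.arange(len(a), device=a.device)      # i-th anchor matches i-th positive
    return F.cross_entropy(logits, labels)
```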

AdapterSoup: Weight Averaging to Improve Generalization of Pretrained Language Models

Feb 14, 2023
Alexandra Chronopoulou, Matthew E. Peters, Alexander Fraser, Jesse Dodge

Pretrained language models (PLMs) are trained on massive corpora, but often need to specialize to specific domains. A parameter-efficient adaptation method suggests training an adapter for each domain on the task of language modeling. This leads to good in-domain scores but can be impractical for domain- or resource-restricted settings. A solution is to use a related-domain adapter for the novel domain at test time. In this paper, we introduce AdapterSoup, an approach that performs weight-space averaging of adapters trained on different domains. Our approach is embarrassingly parallel: first, we train a set of domain-specific adapters; then, for each novel domain, we determine which adapters should be averaged at test time. We present extensive experiments showing that AdapterSoup consistently improves performance on new domains without extra training. We also explore weight averaging of adapters trained on the same domain with different hyper-parameters, and show that it preserves the performance of a PLM on new domains while obtaining strong in-domain results. We explore various approaches for choosing which adapters to combine, such as text clustering and semantic similarity. We find that using clustering leads to the most competitive results on novel domains.
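
The core weight-space averaging step can be sketched in a few lines (a minimal illustration; how the adapters to be averaged are selected, e.g. via text clustering, is a separate step, and the function name is an assumption):

```python
import torch

def average_adapter_weights(state_dicts):
    # Uniform weight-space average of adapters trained on different domains;
    # all state dicts must share the same keys and tensor shapes.
    keys = state_dicts[0].keys()
    return {k: torch.stack([sd[k].float() for sd in state_dicts]).mean(dim=0) for k in keys}
```

The averaged state dict would then be loaded into the adapter slots of the frozen pretrained model at test time.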

* Accepted at EACL 2023; camera-ready version 

Speaking Multiple Languages Affects the Moral Bias of Language Models

Nov 14, 2022
Katharina Hämmerl, Björn Deiseroth, Patrick Schramowski, Jindřich Libovický, Constantin A. Rothkopf, Alexander Fraser, Kristian Kersting

Pre-trained multilingual language models (PMLMs) are commonly used when dealing with data from multiple languages and cross-lingual transfer. However, PMLMs are trained on varying amounts of data for each language. In practice this means their performance is often much better on English than on many other languages. We explore to what extent this also applies to moral norms. Do the models capture moral norms from English and impose them on other languages? Do the models exhibit random and thus potentially harmful beliefs in certain languages? Both these issues could negatively impact cross-lingual transfer and potentially lead to harmful outcomes. In this paper, we (1) apply the MoralDirection framework to multilingual models, comparing results in German, Czech, Arabic, Mandarin Chinese, and English, (2) analyse model behaviour on filtered parallel subtitles corpora, and (3) apply the models to a Moral Foundations Questionnaire, comparing with human responses from different countries. Our experiments demonstrate that, indeed, PMLMs encode differing moral biases, but these do not necessarily correspond to cultural differences or commonalities in human opinions.

A Survey of Methods for Addressing Class Imbalance in Deep-Learning Based Natural Language Processing

Oct 10, 2022
Sophie Henning, William H. Beluch, Alexander Fraser, Annemarie Friedrich

Many natural language processing (NLP) tasks are naturally imbalanced, as some target categories occur much more frequently than others in the real world. In such scenarios, current NLP models still tend to perform poorly on less frequent classes. Addressing class imbalance in NLP is an active research topic, yet finding a good approach for a particular task and imbalance scenario is difficult. With this survey, the first overview of class imbalance in deep-learning based NLP, we provide guidance for NLP researchers and practitioners dealing with imbalanced data. We first discuss various types of controlled and real-world class imbalance. Our survey then covers approaches that have been explicitly proposed for class-imbalanced NLP tasks or, originating in the computer vision community, have been evaluated on them. We organize the methods by whether they are based on sampling, data augmentation, choice of loss function, staged learning, or model design. Finally, we discuss open problems such as dealing with multi-label scenarios, and propose systematic benchmarking and reporting in order to move forward on this problem as a community.
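
As a concrete instance of one surveyed family (loss-based methods; a minimal sketch with made-up class counts, not drawn from the survey's experiments): cross-entropy reweighted by inverse class frequency, so that rarer classes contribute more per example.

```python
import torch
import torch.nn as nn

class_counts = torch.tensor([9000., 900., 100.])  # illustrative, heavily imbalanced
weights = class_counts.sum() / (len(class_counts) * class_counts)
loss_fn = nn.CrossEntropyLoss(weight=weights)     # upweights the rare classes
```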

Language-Family Adapters for Multilingual Neural Machine Translation

Sep 30, 2022
Alexandra Chronopoulou, Dario Stojanovski, Alexander Fraser

Massively multilingual models pretrained on abundant corpora with self-supervision achieve state-of-the-art results in a wide range of natural language processing tasks. In machine translation, multilingual pretrained models are often fine-tuned on parallel data from one or multiple language pairs. Multilingual fine-tuning improves performance on medium- and low-resource languages but requires modifying the entire model and can be prohibitively expensive. Training a new set of adapters on each language pair or training a single set of adapters on all language pairs while keeping the pretrained model's parameters frozen has been proposed as a parameter-efficient alternative. However, the former do not permit any sharing between languages, while the latter share parameters for all languages and have to deal with negative interference. In this paper, we propose training language-family adapters on top of a pretrained multilingual model to facilitate cross-lingual transfer. Our model consistently outperforms other adapter-based approaches. We also demonstrate that language-family adapters provide an effective method to translate to languages unseen during pretraining.
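
A minimal sketch of the building blocks involved (illustrative hidden sizes and family assignments, not the paper's configuration): a standard bottleneck adapter, with one adapter instance shared per language family while the pretrained model's own parameters stay frozen.

```python
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    def __init__(self, hidden=1024, bottleneck=128):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, hidden)
        self.act = nn.ReLU()

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))  # residual bottleneck transformation

# Illustrative family assignment: languages in the same family share one adapter.
families = {"de": "germanic", "nl": "germanic", "ro": "romance", "it": "romance"}
adapters = nn.ModuleDict({fam: BottleneckAdapter() for fam in set(families.values())})

def apply_adapter(hidden_states, lang):
    return adapters[families[lang]](hidden_states)
```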
