Andy Way

Adaptive Machine Translation with Large Language Models

Jan 30, 2023
Yasmin Moslem, Rejwanul Haque, Andy Way

Figures 1-4 for Adaptive Machine Translation with Large Language Models

Consistency is a key requirement of high-quality translation. It is especially important to adhere to pre-approved terminology and corrected translations in domain-specific projects. Machine translation (MT) has achieved significant progress in the area of domain adaptation; however, real-time adaptation remains challenging. Large language models (LLMs) have recently shown interesting in-context learning capabilities, where they learn to replicate certain input-output text generation patterns without further fine-tuning. Fed a prompt consisting of a list of translation pairs, an LLM can simulate the domain and style characteristics of that data at inference time. This work investigates how in-context learning can be utilised to improve real-time adaptive MT. Our extensive experiments show promising results at translation time: for example, GPT-3.5 can adapt to a set of in-domain sentence pairs and/or terminology while translating a new sentence. We observe that the translation quality with few-shot in-context learning can surpass that of strong encoder-decoder MT systems, especially for high-resource languages. Moreover, we investigate whether combining the output of strong encoder-decoder models with fuzzy matches can further improve translation quality, especially for less supported languages. We conduct our experiments across five diverse language pairs, namely English-to-Arabic (EN-AR), English-to-Chinese (EN-ZH), English-to-French (EN-FR), English-to-Kinyarwanda (EN-RW), and English-to-Spanish (EN-ES).
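The prompting idea above (feeding an LLM a list of in-domain translation pairs, optionally with terminology, before the new source sentence) can be sketched as follows. This is an illustrative reconstruction, not the paper's exact prompt format; the function name, labels, and defaults are all assumptions:

```python
def build_adaptive_prompt(fuzzy_matches, terminology, new_source,
                          src_lang="English", tgt_lang="Spanish"):
    """Assemble a few-shot prompt from in-domain sentence pairs and terms.

    fuzzy_matches: list of (source, target) in-domain sentence pairs
    terminology:   list of (source_term, target_term) pairs, may be empty
    new_source:    the new sentence to translate
    """
    lines = []
    # Demonstration pairs let the model pick up domain and style.
    for src, tgt in fuzzy_matches:
        lines.append(f"{src_lang}: {src}")
        lines.append(f"{tgt_lang}: {tgt}")
    # Pre-approved terminology the translation should adhere to.
    if terminology:
        pairs = ", ".join(f"{s} = {t}" for s, t in terminology)
        lines.append(f"Terminology: {pairs}")
    # The new sentence; the model continues after the target-language label.
    lines.append(f"{src_lang}: {new_source}")
    lines.append(f"{tgt_lang}:")
    return "\n".join(lines)
```

The returned string would be sent to the LLM, whose completion after the final label is taken as the adapted translation.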


Translation Word-Level Auto-Completion: What can we achieve out of the box?

Oct 23, 2022
Yasmin Moslem, Rejwanul Haque, Andy Way

Figures 1-4 for Translation Word-Level Auto-Completion: What can we achieve out of the box?

Research on Machine Translation (MT) has achieved important breakthroughs in several areas. While there is much more to be done to build on this success, we believe that the language industry needs better ways to take full advantage of current achievements. Due to a combination of factors, including time, resources, and skills, businesses tend to apply pragmatism to their AI workflows. Hence, they concentrate more on outcomes, e.g. delivery, shipping, releases, and features, and adopt high-level working production solutions where possible. Among the features thought to be helpful for translators are sentence-level and word-level translation auto-suggestion and auto-completion. Suggesting alternatives can inspire translators and limit their need to refer to external resources, which hopefully boosts their productivity. This work describes our submissions to WMT's shared task on word-level auto-completion, for the Chinese-to-English, English-to-Chinese, German-to-English, and English-to-German language directions. We investigate the possibility of using pre-trained models and out-of-the-box features from available libraries. We employ random sampling to generate diverse alternatives, which yields good results. Furthermore, we introduce our open-source API, based on CTranslate2, to serve translations, auto-suggestions, and auto-completions.
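Random sampling produces diverse hypotheses from which word-level completions can then be ranked. A minimal, model-free sketch of the ranking step, assuming the sampled hypotheses are already available as strings (the actual submission uses CTranslate2 for sampling; this toy ignores context and simply ranks candidate words by frequency across hypotheses):

```python
from collections import Counter

def autocomplete(sampled_hyps, typed_prefix):
    """Rank word completions for a typed prefix using words observed
    in diverse sampled translation hypotheses."""
    counts = Counter(
        tok
        for hyp in sampled_hyps
        for tok in hyp.split()
        if tok.startswith(typed_prefix) and tok != typed_prefix
    )
    # Most frequent completion across hypotheses is suggested first.
    return [word for word, _ in counts.most_common()]
```

For example, with hypotheses `["the quick brown fox", "the quiet brown fox", "the quick red fox"]` and prefix `"qu"`, the function suggests `quick` before `quiet`.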

* In Proceedings of the Seventh Conference on Machine Translation, 2022, Abu Dhabi, UAE. Association for Computational Linguistics  
* WMT 2022 

Domain-Specific Text Generation for Machine Translation

Aug 11, 2022
Yasmin Moslem, Rejwanul Haque, John D. Kelleher, Andy Way

Figures 1-4 for Domain-Specific Text Generation for Machine Translation

Preservation of domain knowledge from source to target is crucial in any translation workflow. It is common in the translation industry to receive highly specialized projects, where there is hardly any parallel in-domain data. In such scenarios where there is insufficient in-domain data to fine-tune Machine Translation (MT) models, producing translations that are consistent with the relevant context is challenging. In this work, we propose a novel approach to domain adaptation leveraging state-of-the-art pretrained language models (LMs) for domain-specific data augmentation for MT, simulating the domain characteristics of either (a) a small bilingual dataset, or (b) the monolingual source text to be translated. Combining this idea with back-translation, we can generate huge amounts of synthetic bilingual in-domain data for both use cases. For our investigation, we use the state-of-the-art Transformer architecture. We employ mixed fine-tuning to train models that significantly improve translation of in-domain texts. More specifically, in both scenarios, our proposed methods achieve improvements of approximately 5-6 BLEU and 2-3 BLEU, respectively, on the Arabic-to-English and English-to-Arabic language pairs. Furthermore, the outcome of human evaluation corroborates the automatic evaluation results.
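Mixed fine-tuning combines authentic generic data with the (typically much smaller) synthetic in-domain data, often oversampling the latter so the model sees it frequently enough. A minimal sketch, with illustrative names and an assumed oversampling factor:

```python
import random

def mixed_finetuning_corpus(generic, synthetic_in_domain, oversample=3, seed=0):
    """Build a mixed fine-tuning corpus: generic parallel data plus the
    smaller synthetic in-domain data, oversampled to balance the mix.

    generic, synthetic_in_domain: lists of (source, target) sentence pairs
    oversample: how many times to replicate the in-domain data (assumption)
    """
    mixed = list(generic) + list(synthetic_in_domain) * oversample
    # Shuffle so in-domain pairs are interleaved throughout training.
    random.Random(seed).shuffle(mixed)
    return mixed
```

The oversampling factor is a tunable hyperparameter; the sketch only shows the data-mixing idea, not the fine-tuning itself.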

* AMTA 2022 - MT Research Track 

Dependency Graph-to-String Statistical Machine Translation

Mar 20, 2021
Liangyou Li, Andy Way, Qun Liu

Figures 1-4 for Dependency Graph-to-String Statistical Machine Translation

We present graph-based translation models which translate source graphs into target strings. Source graphs are constructed from dependency trees with extra links so that non-syntactic phrases are connected. Inspired by phrase-based models, we first introduce a translation model which segments a graph into a sequence of disjoint subgraphs and generates a translation by combining subgraph translations left-to-right using beam search. However, similar to phrase-based models, this model is weak at phrase reordering. Therefore, we further introduce a model based on a synchronous node replacement grammar which learns recursive translation rules. We provide two implementations of the model with different restrictions so that source graphs can be parsed efficiently. Experiments on Chinese-English and German-English show that our graph-based models are significantly better than corresponding sequence- and tree-based baselines.


Using Multiple Subwords to Improve English-Esperanto Automated Literary Translation Quality

Nov 28, 2020
Alberto Poncelas, Jan Buts, James Hadley, Andy Way

Figures 1-4 for Using Multiple Subwords to Improve English-Esperanto Automated Literary Translation Quality

Building Machine Translation (MT) systems for low-resource languages remains challenging. For many language pairs, parallel data are not widely available, and in such cases MT models do not achieve results comparable to those seen with high-resource languages. When data are scarce, it is of paramount importance to make optimal use of the limited material available. To that end, in this paper we propose employing the same parallel sentences multiple times, only changing the way the words are split each time. For this purpose we use several Byte Pair Encoding models, with various merge operations used in their configuration. In our experiments, we use this technique to expand the available data and improve an MT system involving a low-resource language pair, namely English-Esperanto. As an additional contribution, we have made available a set of English-Esperanto parallel data in the literary domain.
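The core idea, segmenting the same words differently by varying the number of BPE merge operations, can be illustrated with a toy BPE implementation. Real systems use libraries such as subword-nmt or SentencePiece; this simplified sketch only demonstrates that different merge counts yield different splits of the same data:

```python
from collections import Counter

def learn_bpe(words, num_merges):
    """Learn up to `num_merges` BPE merge operations from a word list."""
    vocab = Counter(tuple(w) + ("</w>",) for w in words)
    merges = []
    for _ in range(num_merges):
        # Count all adjacent symbol pairs, weighted by word frequency.
        pairs = Counter()
        for sym, freq in vocab.items():
            for a, b in zip(sym, sym[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Apply the winning merge throughout the vocabulary.
        new_vocab = Counter()
        for sym, freq in vocab.items():
            out, i = [], 0
            while i < len(sym):
                if i < len(sym) - 1 and (sym[i], sym[i + 1]) == best:
                    out.append(sym[i] + sym[i + 1])
                    i += 2
                else:
                    out.append(sym[i])
                    i += 1
            new_vocab[tuple(out)] += freq
        vocab = new_vocab
    return merges

def apply_bpe(word, merges):
    """Segment a word with a learned merge list, in learned order."""
    sym = list(word) + ["</w>"]
    for a, b in merges:
        out, i = [], 0
        while i < len(sym):
            if i < len(sym) - 1 and sym[i] == a and sym[i + 1] == b:
                out.append(a + b)
                i += 2
            else:
                out.append(sym[i])
                i += 1
        sym = out
    return sym
```

Training the same sentences once per merge setting replicates the parallel data with distinct subword views, which is the augmentation the paper exploits.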

* The 3rd Workshop on Technologies for MT of Low Resource Languages (LoResMT 2020)  

The Impact of Indirect Machine Translation on Sentiment Classification

Aug 25, 2020
Alberto Poncelas, Pintu Lohar, Andy Way, James Hadley

Figures 1-4 for The Impact of Indirect Machine Translation on Sentiment Classification

Sentiment classification has been crucial for many natural language processing (NLP) applications, such as the analysis of movie reviews, tweets, or customer feedback. A sufficiently large amount of data is required to build a robust sentiment classification system. However, such resources are not always available for all domains or for all languages. In this work, we propose employing a machine translation (MT) system to translate customer feedback into another language to investigate in which cases translated sentences can have a positive or negative impact on an automatic sentiment classifier. Furthermore, as performing a direct translation is not always possible, we explore the performance of automatic classifiers on sentences that have been translated using a pivot MT system. We conduct several experiments using the above approaches to analyse the performance of our proposed sentiment classification system and discuss the advantages and drawbacks of classifying translated sentences.
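Pivot (indirect) translation composes two MT systems when no direct system exists for a source-target pair. A minimal sketch, with the two systems abstracted as callables (illustrative only; the paper's MT systems are full translation models, not functions like these):

```python
def pivot_translate(src_sentence, src_to_pivot, pivot_to_tgt):
    """Indirect (pivot) translation: translate into an intermediate
    language first, then from that language into the target."""
    return pivot_to_tgt(src_to_pivot(src_sentence))
```

The sentiment classifier is then applied to the output of `pivot_translate`, so any error introduced by either system propagates to classification, which is exactly the effect the paper measures.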

* Proceedings of Association for Machine Translation in the Americas, AMTA (2020)  

Selecting Backtranslated Data from Multiple Sources for Improved Neural Machine Translation

May 01, 2020
Xabier Soto, Dimitar Shterionov, Alberto Poncelas, Andy Way

Figures 1-4 for Selecting Backtranslated Data from Multiple Sources for Improved Neural Machine Translation

Machine translation (MT) has benefited from using synthetic training data originating from translating monolingual corpora, a technique known as backtranslation. Combining backtranslated data from different sources has led to better results than when using such data in isolation. In this work we analyse the impact that data translated with rule-based, phrase-based statistical and neural MT systems has on new MT systems. We use a real-world low-resource use-case (Basque-to-Spanish in the clinical domain) as well as a high-resource language pair (German-to-English) to test different scenarios with backtranslation and employ data selection to optimise the synthetic corpora. We exploit different data selection strategies in order to reduce the amount of data used, while at the same time maintaining high-quality MT systems. We further tune the data selection method by taking into account the quality of the MT systems used for backtranslation and lexical diversity of the resulting corpora. Our experiments show that incorporating backtranslated data from different sources can be beneficial, and that availing of data selection can yield improved performance.

* Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL (2020)  

Facilitating Access to Multilingual COVID-19 Information via Neural Machine Translation

May 01, 2020
Andy Way, Rejwanul Haque, Guodong Xie, Federico Gaspari, Maja Popovic, Alberto Poncelas

Figures 1-4 for Facilitating Access to Multilingual COVID-19 Information via Neural Machine Translation

Every day, more people are becoming infected and dying from exposure to COVID-19. Some countries in Europe like Spain, France, the UK and Italy have suffered particularly badly from the virus. Others such as Germany appear to have coped extremely well. Both health professionals and the general public are keen to receive up-to-date information on the effects of the virus, as well as treatments that have proven to be effective. In cases where language is a barrier to access of pertinent information, machine translation (MT) may help people assimilate information published in different languages. Our MT systems trained on COVID-19 data are freely available for anyone to use to help translate information published in German, French, Italian, and Spanish into English, as well as in the reverse direction.


Multiple Segmentations of Thai Sentences for Neural Machine Translation

Apr 23, 2020
Alberto Poncelas, Wichaya Pidchamook, Chao-Hong Liu, James Hadley, Andy Way

Figures 1-4 for Multiple Segmentations of Thai Sentences for Neural Machine Translation

Thai is a low-resource language, so it is often the case that data is not available in sufficient quantities to train a Neural Machine Translation (NMT) model which performs to a high level of quality. In addition, the Thai script does not use white spaces to delimit the boundaries between words, which adds more complexity when building sequence-to-sequence models. In this work, we explore how to augment a set of English-Thai parallel data by replicating sentence pairs with different word segmentation methods on the Thai side, as training data for NMT. Using different merge operations of Byte Pair Encoding, different segmentations of Thai sentences can be obtained. The experiments show that, by combining these datasets, performance is improved for NMT models trained with a dataset that has been split using a supervised splitting tool.

* Spoken Language Technologies for Under-resourced languages and CCURL Collaboration and Computing for Under-Resourced Languages Workshop, SLTU-CCURL (2020)  

A Tool for Facilitating OCR Postediting in Historical Documents

Apr 23, 2020
Alberto Poncelas, Mohammad Aboomar, Jan Buts, James Hadley, Andy Way

Figures 1-2 for A Tool for Facilitating OCR Postediting in Historical Documents

Optical character recognition (OCR) for historical documents is a complex procedure subject to a unique set of material issues, including inconsistencies in typefaces and low-quality scanning. Consequently, even the most sophisticated OCR engines produce errors. This paper reports on a tool built for postediting the output of Tesseract, more specifically for correcting common errors in digitized historical documents. The proposed tool suggests alternatives for word forms not found in a specified vocabulary. The assumed error is replaced by a presumably correct alternative in the post-edition based on the scores of a Language Model (LM). The tool is tested on a chapter of the book An Essay Towards Regulating the Trade and Employing the Poor of this Kingdom (Cary, 1719). As demonstrated below, the tool is successful in correcting a number of common errors. Though sometimes unreliable, it is also transparent and subject to human intervention.
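The suggestion mechanism described (proposing in-vocabulary alternatives for an unknown word form and ranking them with an LM) can be sketched in the style of a noisy-channel spelling corrector. This is an illustrative reconstruction, not the tool's actual implementation; `lm_score` stands in for the Language Model and can be any callable returning a score per word:

```python
import string

def edits1(word):
    """All strings one edit away: deletion, transposition, substitution,
    insertion. Standard candidate generation for spelling correction."""
    letters = string.ascii_lowercase
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [l + r[1:] for l, r in splits if r]
    transposes = [l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1]
    replaces = [l + c + r[1:] for l, r in splits if r for c in letters]
    inserts = [l + c + r for l, r in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def suggest(word, vocabulary, lm_score):
    """Suggest in-vocabulary alternatives for a suspected OCR error,
    ranked by a language-model score (higher is better)."""
    candidates = edits1(word) & vocabulary
    return sorted(candidates, key=lm_score, reverse=True)
```

For instance, given the OCR output `tradc` and a vocabulary containing `trade`, the single-substitution candidate `trade` is recovered and ranked by the LM.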

* Workshop on Language Technologies for Historical and Ancient Languages, LT4HALA (2020)  