Vlad Niculae

Joint Dropout: Improving Generalizability in Low-Resource Neural Machine Translation through Phrase Pair Variables

Jul 24, 2023
Ali Araabi, Vlad Niculae, Christof Monz

Despite the tremendous success of Neural Machine Translation (NMT), its performance on low-resource language pairs remains subpar, partly due to a limited ability to handle previously unseen inputs, i.e., generalization. In this paper, we propose Joint Dropout, a method that addresses the challenge of low-resource neural machine translation by substituting phrases with variables, resulting in a significant enhancement of compositionality, a key aspect of generalization. We observe a substantial improvement in translation quality for language pairs with minimal resources, as seen in BLEU and Direct Assessment scores. Furthermore, we conduct an error analysis and find that Joint Dropout also enhances the generalizability of low-resource NMT in terms of robustness and adaptability across different domains.
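
As a rough illustration of the idea only (not the paper's exact procedure), the sketch below replaces an aligned source/target phrase pair with a shared variable token; the function name `substitute_phrase_pair`, the `<X1>` placeholder, and the toy spans are hypothetical.

```python
# Hypothetical sketch of phrase-pair variable substitution, assuming a
# word-aligned sentence pair and a known phrase span on each side.
from typing import List, Tuple

def substitute_phrase_pair(src: List[str], tgt: List[str],
                           src_span: Tuple[int, int], tgt_span: Tuple[int, int],
                           var_token: str = "<X1>") -> Tuple[List[str], List[str]]:
    """Replace an aligned source/target phrase pair with the same variable token."""
    new_src = src[:src_span[0]] + [var_token] + src[src_span[1]:]
    new_tgt = tgt[:tgt_span[0]] + [var_token] + tgt[tgt_span[1]:]
    return new_src, new_tgt

# Toy example: jointly drop the aligned phrase "the red car" / "das rote Auto".
src = "she bought the red car yesterday".split()
tgt = "sie kaufte gestern das rote Auto".split()
print(substitute_phrase_pair(src, tgt, (2, 5), (3, 6)))
# (['she', 'bought', '<X1>', 'yesterday'], ['sie', 'kaufte', 'gestern', '<X1>'])
```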

* Accepted at MT Summit 2023 

Two derivations of Principal Component Analysis on datasets of distributions

Jun 23, 2023
Vlad Niculae

In this brief note, we formulate Principal Component Analysis (PCA) over datasets consisting not of points but of distributions, characterized by their location and covariance. Just as standard PCA on points can be equivalently derived via a variance-maximization principle and via a minimization of reconstruction error, we derive a closed-form solution for distributional PCA from both of these perspectives.
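
For intuition only, here is a small numpy sketch of the variance-maximization view, under the assumption that the closed-form solution reduces to an eigendecomposition of the average of the between-location scatter and the per-datum covariances; the note's exact derivation may differ.

```python
# Sketch (not the note's exact formulation): PCA over a dataset of Gaussians,
# each given by a location mu_i and covariance Sigma_i. Projecting datum i onto
# a unit direction w contributes variance
# w^T ((mu_i - mu_bar)(mu_i - mu_bar)^T + Sigma_i) w,
# so the principal directions are eigenvectors of the averaged matrix.
import numpy as np

def distributional_pca(mus: np.ndarray, sigmas: np.ndarray, k: int) -> np.ndarray:
    """mus: (n, d) locations; sigmas: (n, d, d) covariances; returns (d, k) components."""
    mu_bar = mus.mean(axis=0)
    centered = mus - mu_bar
    scatter = centered.T @ centered / len(mus)   # between-location scatter
    within = sigmas.mean(axis=0)                 # average within-distribution covariance
    eigvals, eigvecs = np.linalg.eigh(scatter + within)
    return eigvecs[:, ::-1][:, :k]               # top-k eigenvectors

# Toy usage with three 2-D Gaussians.
mus = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 4.0]])
sigmas = np.stack([0.1 * np.eye(2)] * 3)
print(distributional_pca(mus, sigmas, k=1))
```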

* 4 pages, 1 figure 

Viewing Knowledge Transfer in Multilingual Machine Translation Through a Representational Lens

May 19, 2023
David Stap, Vlad Niculae, Christof Monz

We argue that translation quality alone is not a sufficient metric for measuring knowledge transfer in multilingual neural machine translation. To support this claim, we introduce Representational Transfer Potential (RTP), which measures representational similarities between languages. We show that RTP can measure both positive and negative transfer (interference), and find that RTP is strongly correlated with changes in translation quality, indicating that transfer does occur. Furthermore, we investigate data and language characteristics that are relevant for transfer, and find that multi-parallel overlap is an important yet under-explored feature. Based on this, we develop a novel training scheme, which uses an auxiliary similarity loss that encourages representations to be more invariant across languages by taking advantage of multi-parallel data. We show that our method yields increased translation quality for low- and mid-resource languages across multiple data and model setups.
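
The paper defines RTP precisely; as a loose, hypothetical stand-in, the snippet below measures cross-lingual representational similarity by comparing sentence-level encoder representations of the same multi-parallel sentences with cosine similarity. It conveys the flavor of a representational lens without reproducing RTP itself.

```python
# Hypothetical similarity probe (not the paper's RTP formula): compare encoder
# representations of translations of the same sentences in two languages.
import numpy as np

def mean_pooled_similarity(reps_a: np.ndarray, reps_b: np.ndarray) -> float:
    """reps_a, reps_b: (num_sentences, hidden_dim) sentence-level encoder states
    for the same multi-parallel sentences in languages A and B."""
    a = reps_a / np.linalg.norm(reps_a, axis=1, keepdims=True)
    b = reps_b / np.linalg.norm(reps_b, axis=1, keepdims=True)
    return float((a * b).sum(axis=1).mean())   # average pairwise cosine similarity

# Toy usage with random stand-in representations.
rng = np.random.default_rng(0)
reps_en, reps_de = rng.normal(size=(5, 8)), rng.normal(size=(5, 8))
print(mean_pooled_similarity(reps_en, reps_de))

# An auxiliary loss in the spirit of the paper's training scheme could then
# penalize (1 - similarity) on multi-parallel batches to encourage invariance.
```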

DAG Learning on the Permutahedron

Feb 10, 2023
Valentina Zantedeschi, Luca Franceschi, Jean Kaddour, Matt J. Kusner, Vlad Niculae

We propose a continuous optimization framework for discovering a latent directed acyclic graph (DAG) from observational data. Our approach optimizes over the polytope of permutation vectors, the so-called Permutahedron, to learn a topological ordering. Edges can be optimized jointly, or learned conditional on the ordering via a non-differentiable subroutine. Compared to existing continuous optimization approaches, our formulation has a number of advantages, including: (1) validity: it optimizes over exact DAGs, as opposed to other relaxations that optimize over approximate DAGs; (2) modularity: it accommodates any edge-optimization procedure, edge structural parameterization, and optimization loss; (3) end-to-end: it either alternately iterates between node ordering and edge optimization, or optimizes them jointly. We demonstrate, on real-world data problems in protein signaling and transcriptional network discovery, that our approach lies on the Pareto frontier of two key metrics, the SID and SHD.
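
To make the validity argument concrete, here is a small numpy sketch (my simplification, not the paper's parameterization) of how a topological ordering induces a mask that only allows edges from earlier-ranked to later-ranked nodes, so any masked weighted graph is acyclic by construction.

```python
# Sketch: a topological ordering constrains edges to go from earlier-ranked
# to later-ranked nodes, which guarantees an exact DAG.
import numpy as np

def ordering_mask(order: np.ndarray) -> np.ndarray:
    """order[i] = rank of node i; allow edge i -> j only if order[i] < order[j]."""
    return (order[:, None] < order[None, :]).astype(float)

rng = np.random.default_rng(0)
order = np.array([2, 0, 1])                 # node 1 first, then node 2, then node 0
weights = rng.normal(size=(3, 3))           # unconstrained edge scores
adjacency = weights * ordering_mask(order)  # strictly "upper triangular" under the ordering
print(adjacency)
```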

* The Eleventh International Conference on Learning Representations 

Discrete Latent Structure in Neural Networks

Jan 18, 2023
Vlad Niculae, Caio F. Corro, Nikita Nangia, Tsvetomila Mihaylova, André F. T. Martins

Many types of data from fields including natural language processing, computer vision, and bioinformatics are well represented by discrete, compositional structures such as trees, sequences, or matchings. Latent structure models are a powerful tool for learning to extract such representations, offering a way to incorporate structural bias, discover insight about the data, and interpret decisions. However, effective training is challenging, as neural networks are typically designed for continuous computation. This text explores three broad strategies for learning with discrete latent structure: continuous relaxation, surrogate gradients, and probabilistic estimation. Our presentation relies on consistent notation for a wide range of models. As such, we reveal many new connections between latent structure learning strategies, showing how most consist of the same small set of fundamental building blocks, but use them differently, leading to substantially different applicability and properties.
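
As one concrete instance of the surrogate-gradient strategy mentioned above (a sketch of the standard straight-through estimator, not code from the monograph), the snippet below uses a discrete argmax in the forward pass while letting gradients flow through the softmax in the backward pass.

```python
# Straight-through estimator sketch for a categorical latent variable:
# discrete one-hot forward pass, softmax gradients in the backward pass.
import torch

def straight_through_onehot(logits: torch.Tensor) -> torch.Tensor:
    probs = torch.softmax(logits, dim=-1)
    index = probs.argmax(dim=-1, keepdim=True)
    hard = torch.zeros_like(probs).scatter_(-1, index, 1.0)
    # Forward value equals `hard`; gradients flow as if the output were `probs`.
    return hard + probs - probs.detach()

logits = torch.tensor([[0.2, 1.5, -0.3]], requires_grad=True)
z = straight_through_onehot(logits)        # one-hot in the forward pass
loss = (z * torch.arange(3.0)).sum()       # any downstream differentiable loss
loss.backward()
print(z, logits.grad)
```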

How Effective is Byte Pair Encoding for Out-Of-Vocabulary Words in Neural Machine Translation?

Aug 17, 2022
Ali Araabi, Christof Monz, Vlad Niculae

Neural Machine Translation (NMT) is an open-vocabulary problem. As a result, dealing with words that do not occur during training (a.k.a. out-of-vocabulary (OOV) words) has long been a fundamental challenge for NMT systems. The predominant method to tackle this problem is Byte Pair Encoding (BPE), which splits words, including OOV words, into sub-word segments. BPE has achieved impressive results for a wide range of translation tasks in terms of automatic evaluation metrics. While it is often assumed that, by using BPE, NMT systems are capable of handling OOV words, the effectiveness of BPE in translating OOV words has not been explicitly measured. In this paper, we study to what extent BPE is successful in translating OOV words at the word level. We analyze the translation quality of OOV words based on word type, number of segments, cross-attention weights, and the frequency of segment n-grams in the training data. Our experiments show that while careful BPE settings seem fairly useful for translating OOV words across datasets, a considerable percentage of OOV words are translated incorrectly. Furthermore, we highlight the slightly higher effectiveness of BPE in translating OOV words in special cases, such as named entities and when the languages involved are linguistically close to each other.
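
For readers unfamiliar with BPE segmentation, the toy function below greedily applies a given merge list to a single word. It is a simplified sketch (the merge rules and end-of-word marker are made up), not the tokenizer used in the paper.

```python
# Minimal sketch of applying learned BPE merges to one (possibly OOV) word.
from typing import List, Tuple

def apply_bpe(word: str, merges: List[Tuple[str, str]]) -> List[str]:
    symbols = list(word) + ["</w>"]          # start from characters plus an end-of-word marker
    for left, right in merges:               # apply merges in the order they were learned
        i = 0
        while i < len(symbols) - 1:
            if symbols[i] == left and symbols[i + 1] == right:
                symbols[i:i + 2] = [left + right]
            else:
                i += 1
    return symbols

# Hypothetical merge list; an unseen word is split into known sub-word segments.
merges = [("l", "o"), ("lo", "w"), ("e", "r"), ("er", "</w>")]
print(apply_bpe("lower", merges))   # ['low', 'er</w>']
```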

* 14 pages, 6 figures, 1 table, To be published in AMTA 2022 conference 

Modeling Structure with Undirected Neural Networks

Feb 08, 2022
Tsvetomila Mihaylova, Vlad Niculae, André F. T. Martins

Neural networks are powerful function estimators, leading to their status as a paradigm of choice for modeling structured data. However, unlike other structured representations that emphasize the modularity of the problem -- e.g., factor graphs -- neural networks are usually monolithic mappings from inputs to outputs, with a fixed computation order. This limitation prevents them from capturing different directions of computation and interaction between the modeled variables. In this paper, we combine the representational strengths of factor graphs and of neural networks, proposing undirected neural networks (UNNs): a flexible framework for specifying computations that can be performed in any order. For particular choices, our proposed models subsume and extend many existing architectures: feed-forward, recurrent, self-attention networks, auto-encoders, and networks with implicit layers. We demonstrate the effectiveness of undirected neural architectures, both unstructured and structured, on a range of tasks: tree-constrained dependency parsing, convolutional image classification, and sequence completion with attention. By varying the computation order, we show how a single UNN can be used both as a classifier and as a prototype generator, and how it can fill in missing parts of an input sequence, making UNNs a promising direction for further research.
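
The order-free computation can be pictured as coordinate-wise minimization of a shared energy. The toy sketch below is my own simplification under a single quadratic factor, not the paper's parameterization: the same factor maps x to y or y to x depending on which variable is treated as unknown.

```python
# Toy sketch of order-free computation on a single quadratic factor
# E(x, y) = 0.5 * ||y - W x||^2: minimizing over y "runs it forward",
# minimizing over x "runs it backward" -- the factor itself is undirected.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 3))

def update_y(x: np.ndarray) -> np.ndarray:
    return W @ x                                      # argmin_y E(x, y)

def update_x(y: np.ndarray) -> np.ndarray:
    return np.linalg.lstsq(W, y, rcond=None)[0]       # argmin_x E(x, y)

x = rng.normal(size=3)
y = update_y(x)              # choose the "forward" computation order...
x_back = update_x(y)         # ...or invert the direction with the same factor
print(np.allclose(x, x_back))
```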

Sparse Communication via Mixed Distributions

Aug 05, 2021
António Farinhas, Wilker Aziz, Vlad Niculae, André F. T. Martins

Neural networks and other machine learning models compute continuous representations, while humans communicate mostly through discrete symbols. Reconciling these two forms of communication is desirable for generating human-readable interpretations or learning discrete latent variable models, while maintaining end-to-end differentiability. Some existing approaches (such as the Gumbel-Softmax transformation) build continuous relaxations that are discrete approximations in the zero-temperature limit, while others (such as sparsemax transformations and the Hard Concrete distribution) produce discrete/continuous hybrids. In this paper, we build rigorous theoretical foundations for these hybrids, which we call "mixed random variables." Our starting point is a new "direct sum" base measure defined on the face lattice of the probability simplex. From this measure, we introduce new entropy and Kullback-Leibler divergence functions that subsume the discrete and differential cases and have interpretations in terms of code optimality. Our framework suggests two strategies for representing and sampling mixed random variables, an extrinsic ("sample-and-project") and an intrinsic one (based on face stratification). We experiment with both approaches on an emergent communication benchmark and on modeling MNIST and Fashion-MNIST data with variational auto-encoders with mixed latent variables.
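
To illustrate the extrinsic "sample-and-project" strategy, here is a sketch under the assumption that the projection is the Euclidean projection onto the simplex (i.e., sparsemax); the paper's construction is more general. Perturbing the scores with noise and projecting can land exactly on a face of the simplex, giving a mixed discrete/continuous outcome.

```python
# Sample-and-project sketch: Gaussian perturbation followed by Euclidean
# projection onto the probability simplex (sparsemax), which can assign
# exactly-zero mass to some symbols.
import numpy as np

def project_simplex(v: np.ndarray) -> np.ndarray:
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / np.arange(1, len(v) + 1) > 0)[0][-1]
    tau = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(v - tau, 0.0)

rng = np.random.default_rng(0)
scores = np.array([2.0, 0.5, -1.0])
sample = project_simplex(scores + rng.normal(scale=0.5, size=3))
print(sample, "support size:", int((sample > 0).sum()))
```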
