
Youssef Mroueh


Tabular Transformers for Modeling Multivariate Time Series

Nov 03, 2020
Inkit Padhi, Yair Schiff, Igor Melnyk, Mattia Rigotti, Youssef Mroueh, Pierre Dognin, Jerret Ross, Ravi Nair, Erik Altman


Tabular datasets are ubiquitous in data science applications. Given their importance, it seems natural to apply state-of-the-art deep learning algorithms in order to fully unlock their potential. Here we propose neural network models for tabular time series that can optionally leverage their hierarchical structure. This results in two architectures: one for learning representations that is analogous to BERT and can be pre-trained end-to-end and used in downstream tasks, and one that is akin to GPT and can be used to generate realistic synthetic tabular sequences. We demonstrate our models on two datasets: a synthetic credit card transaction dataset, where the learned representations are used for fraud detection and synthetic data generation, and a real pollution dataset, where the learned encodings are used to predict atmospheric pollutant concentrations. Code and data are available at https://github.com/IBM/TabFormer.
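As a rough illustration of the tabular-sequence setup (not the paper's implementation; column names and bin edges below are hypothetical), each column gets its own token vocabulary, numeric fields are quantized into bins, and every row becomes a tuple of field tokens. A user's transaction history is then a sequence of such tuples, ready for a hierarchical encoder that pools field embeddings into one row embedding per time step:

```python
from collections import OrderedDict

def quantize(value, edges):
    # Bucket a numeric field (e.g. a transaction amount) so it can share
    # the categorical tokenization pipeline; `edges` are assumed bin edges.
    return sum(value >= e for e in edges)

def build_vocabs(rows, columns):
    # One token vocabulary per column: field-level tokenization.
    vocabs = {c: OrderedDict() for c in columns}
    for row in rows:
        for c in columns:
            vocabs[c].setdefault(row[c], len(vocabs[c]))
    return vocabs

def encode(rows, columns, vocabs):
    # Each row -> tuple of field-token ids; a sequence of rows is a
    # "multivariate time series of tuples" for the transformer to consume.
    return [[vocabs[c][row[c]] for c in columns] for row in rows]
```

A BERT-style objective would then mask individual field tokens and predict them from the rest of the sequence, while a GPT-style model would predict the next row's tuple.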

* Submitted to ICASSP, 2021; https://github.com/IBM/TabFormer 

Unbalanced Sobolev Descent

Sep 29, 2020
Youssef Mroueh, Mattia Rigotti


We introduce Unbalanced Sobolev Descent (USD), a particle descent algorithm for transporting a high dimensional source distribution to a target distribution that does not necessarily have the same mass. We define the Sobolev-Fisher discrepancy between distributions and show that it relates to advection-reaction transport equations and the Wasserstein-Fisher-Rao metric between distributions. USD transports particles along gradient flows of the witness function of the Sobolev-Fisher discrepancy (advection step) and reweighs the mass of particles with respect to this witness function (reaction step). The reaction step can be thought of as a birth-death process of the particles with rate of growth proportional to the witness function. When the Sobolev-Fisher witness function is estimated in a Reproducing Kernel Hilbert Space (RKHS), under mild assumptions we show that USD converges asymptotically (in the limit of infinite particles) to the target distribution in the Maximum Mean Discrepancy (MMD) sense. We then give two methods to estimate the Sobolev-Fisher witness with neural networks, resulting in two Neural USD algorithms. The first one implements the reaction step with mirror descent on the weights, while the second implements it through a birth-death process of particles. We show on synthetic examples that USD transports distributions with or without conservation of mass faster than previous particle descent algorithms, and finally demonstrate its use for molecular biology analyses where our method is naturally suited to match developmental stages of populations of differentiating cells based on their single-cell RNA sequencing profile. Code is available at https://github.com/ibm/usd .
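The advection-reaction structure can be sketched in a few lines. This toy version (an assumption-laden simplification, not the paper's algorithm) uses the witness of an MMD-style discrepancy in an RBF RKHS as a stand-in for the Sobolev-Fisher witness: particles move up the witness gradient (advection) and their mass is reweighted by the witness value (reaction):

```python
import numpy as np

def witness_and_grad(x, source, target, sigma=1.0):
    # Stand-in witness: f(x) = mean_j k(x, t_j) - mean_i k(x, s_i),
    # with an RBF kernel k; returns f(x) and its gradient at x.
    def term(pts):
        diff = pts - x
        k = np.exp(-np.sum(diff**2, axis=1) / (2 * sigma**2))
        return k.mean(), (k[:, None] * diff / sigma**2).mean(axis=0)
    ft, gt = term(target)
    fs, gs = term(source)
    return ft - fs, gt - gs

def usd_step(particles, weights, target, step=1.0, rate=0.1):
    # Advection: move each particle along the witness gradient.
    # Reaction: multiply its mass by exp(rate * f), a discrete
    # birth-death step, then renormalize the weights.
    new_p, new_w = [], []
    for x, w in zip(particles, weights):
        f, g = witness_and_grad(x, particles, target)
        new_p.append(x + step * g)
        new_w.append(w * np.exp(rate * f))
    new_w = np.array(new_w)
    return np.array(new_p), new_w / new_w.sum()
```

Iterating `usd_step` moves the weighted particle cloud toward the target: particles closer to the target both advect faster and gain mass, mirroring the unbalanced (mass-varying) transport described above.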

* NeurIPS 2020 

Active learning of deep surrogates for PDEs: Application to metasurface design

Aug 24, 2020
Raphaël Pestourie, Youssef Mroueh, Thanh V. Nguyen, Payel Das, Steven G. Johnson


Surrogate models for partial-differential equations are widely used in the design of meta-materials to rapidly evaluate the behavior of composable components. However, the training cost of accurate surrogates by machine learning can rapidly increase with the number of variables. For photonic-device models, we find that this training becomes especially challenging as design regions grow larger than the optical wavelength. We present an active learning algorithm that reduces the number of training points by more than an order of magnitude for a neural-network surrogate model of optical-surface components compared to random samples. Results show that the surrogate evaluation is over two orders of magnitude faster than a direct solve, and we demonstrate how this can be exploited to accelerate large-scale engineering optimization.
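The generic loop behind such an approach can be sketched as follows. This is a deliberately toy version: the "surrogate" is a committee of polynomial fits whose disagreement stands in for model uncertainty (the paper trains neural-network surrogates on PDE solves), and the solver, pool, and query budget are all placeholder choices:

```python
import numpy as np

def fit_committee(X, y, degrees=(1, 2, 3)):
    # Committee of polynomial surrogates; the spread of their predictions
    # is used as a cheap proxy for surrogate uncertainty.
    return [np.polyfit(X, y, d) for d in degrees]

def active_learning(solver, pool, n_init=6, n_query=10, seed=1):
    # Query the candidate where committee members disagree most, label it
    # with the expensive solver, refit, repeat -- far fewer labeled points
    # than labeling the pool at random.
    rng = np.random.default_rng(seed)
    X = list(rng.choice(pool, size=n_init, replace=False))
    y = [solver(x) for x in X]
    for _ in range(n_query):
        committee = fit_committee(np.array(X), np.array(y))
        preds = np.stack([np.polyval(c, pool) for c in committee])
        x_new = pool[int(np.argmax(preds.std(axis=0)))]  # max disagreement
        X.append(x_new)
        y.append(solver(x_new))
    return np.array(X), np.array(y)
```

Once trained, the cheap surrogate replaces the direct solve inside the outer design-optimization loop, which is where the orders-of-magnitude speedup is realized.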

* Submitted to npj 

Kernel Stein Generative Modeling

Jul 06, 2020
Wei-Cheng Chang, Chun-Liang Li, Youssef Mroueh, Yiming Yang


We are interested in gradient-based explicit generative modeling, where samples can be derived from iterative gradient updates based on an estimate of the score function of the data distribution. Recent advances in Stochastic Gradient Langevin Dynamics (SGLD) demonstrate impressive results with energy-based models on high-dimensional and complex data distributions. Stein Variational Gradient Descent (SVGD) is a deterministic sampling algorithm that iteratively transports a set of particles to approximate a given distribution, based on functional gradient descent that decreases the KL divergence. SVGD has shown promising results on several Bayesian inference applications. However, applying SVGD to high-dimensional problems is still under-explored. The goal of this work is to study high-dimensional inference with SVGD. We first identify key challenges in practical kernel SVGD inference in high dimensions. We propose noise-conditional kernel SVGD (NCK-SVGD), which works in tandem with the recently introduced Noise Conditional Score Network estimator. NCK is crucial for successful inference with SVGD in high dimensions, as it adapts the kernel to the noise level of the score estimate. As we anneal the noise, NCK-SVGD targets the real data distribution. We then extend the annealed SVGD with an entropic regularization and show that this offers flexible control between sample quality and diversity, verifying it empirically with precision and recall evaluations. NCK-SVGD produces samples comparable to GANs and annealed SGLD on computer vision benchmarks, including MNIST and CIFAR-10.
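A minimal sketch of the annealing idea, under simplifying assumptions: the exact score of a Gaussian target replaces a trained Noise Conditional Score Network, and the RBF kernel bandwidth simply tracks the current noise level, which is the noise-conditional kernel in miniature:

```python
import numpy as np

def svgd_step(x, score, eps=0.3, h=1.0):
    # One SVGD update for particles x of shape (n, d):
    # phi_i = (1/n) sum_j [ k(x_j, x_i) score(x_j) + grad_{x_j} k(x_j, x_i) ].
    diff = x[None, :, :] - x[:, None, :]              # diff[j, i] = x_i - x_j
    k = np.exp(-np.sum(diff**2, axis=-1) / (2 * h**2))
    grad_k = diff / h**2 * k[..., None]               # grad wrt x_j of k(x_j, x_i)
    phi = (k.T @ score(x) + grad_k.sum(axis=0)) / x.shape[0]
    return x + eps * phi

def nck_svgd(x, score, sigmas=(2.0, 1.0, 0.5), eps=0.3, steps=50):
    # Anneal over decreasing noise levels; the kernel bandwidth h follows
    # the current noise scale (the noise-conditional kernel idea).
    for s in sigmas:
        for _ in range(steps):
            x = svgd_step(x, score, eps=eps, h=s)
    return x
```

The first (driving) term in `phi` transports particles toward high density; the second (repulsive) term keeps them spread out, which is what distinguishes SVGD from plain gradient ascent on log-density.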


Fast Mixing of Multi-Scale Langevin Dynamics under the Manifold Hypothesis

Jun 22, 2020
Adam Block, Youssef Mroueh, Alexander Rakhlin, Jerret Ross


Recently, the task of image generation has attracted much attention. In particular, recent empirical successes of the Markov Chain Monte Carlo (MCMC) technique of Langevin Dynamics have prompted a number of theoretical advances; despite this, several outstanding problems remain. First, Langevin Dynamics is run in very high dimension on a nonconvex landscape; in the worst case, due to the NP-hardness of nonconvex optimization, it is thought that Langevin Dynamics mixes only in time exponential in the dimension. In this work, we demonstrate how the manifold hypothesis allows for a considerable reduction of mixing time, from exponential in the ambient dimension to depending only on the (much smaller) intrinsic dimension of the data. Second, the high dimension of the sampling space significantly hurts the performance of Langevin Dynamics; we leverage a multi-scale approach to ameliorate this issue and observe that the resulting multi-resolution algorithm allows for a trade-off between image quality and computational expense in generation.


Learning Implicit Text Generation via Feature Matching

May 09, 2020
Inkit Padhi, Pierre Dognin, Ke Bai, Cicero Nogueira dos Santos, Vijil Chenthamarakshan, Youssef Mroueh, Payel Das


The generative feature matching network (GFMN) is an approach to training implicit generative models for images by performing moment matching on features from pre-trained neural networks. In this paper, we present new GFMN formulations that are effective for sequential data. Our experimental results show the effectiveness of the proposed method, SeqGFMN, on three distinct generation tasks in English: unconditional text generation, class-conditional text generation, and unsupervised text style transfer. SeqGFMN is stable to train and outperforms various adversarial approaches to text generation and text style transfer.
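The core training signal behind feature matching fits in a few lines. This hedged sketch matches first and second feature moments between real and generated batches; the actual method extracts features with pre-trained networks and tracks the real moments with moving averages, details omitted here:

```python
import numpy as np

def feature_matching_loss(feat_real, feat_fake):
    # Moment matching: squared distance between the batch means and the
    # (diagonal) variances of extractor features for real vs. generated data.
    # The generator is trained to drive this loss to zero -- no discriminator.
    mean_term = np.sum((feat_real.mean(axis=0) - feat_fake.mean(axis=0))**2)
    var_term = np.sum((feat_real.var(axis=0) - feat_fake.var(axis=0))**2)
    return mean_term + var_term
```

Because the loss is a fixed (non-adversarial) objective, training avoids the generator-discriminator minimax game, which is one source of the stability noted above.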

* ACL 2020 

Generative Modeling with Denoising Auto-Encoders and Langevin Sampling

Feb 17, 2020
Adam Block, Youssef Mroueh, Alexander Rakhlin

We study convergence of a generative modeling method that first estimates the score function of the distribution using Denoising Auto-Encoders (DAE) or Denoising Score Matching (DSM) and then employs Langevin diffusion for sampling. We show that both DAE and DSM provide estimates of the score of the Gaussian smoothed population density, allowing us to apply the machinery of Empirical Processes. We overcome the challenge of relying only on $L^2$ bounds on the score estimation error and provide finite-sample bounds in the Wasserstein distance between the law of the population distribution and the law of this sampling scheme. We then apply our results to the homotopy method of arXiv:1907.05600 and provide theoretical justification for its empirical success.
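For an empirical dataset, the score of the Gaussian-smoothed density has a closed form: $\nabla \log q_\sigma(x) = (\mathbb{E}[x_0 \mid x] - x)/\sigma^2$, where the posterior mean is a softmax-weighted average of the data points — exactly what an optimal denoiser outputs. The toy sketch below (hypothetical parameter choices throughout) uses that closed form in place of a trained DAE/DSM network and then samples with Langevin diffusion:

```python
import numpy as np

def smoothed_score(x, data, sigma):
    # Score of the Gaussian-smoothed empirical density:
    # grad log q_sigma(x) = (E[x0 | x] - x) / sigma^2, where E[x0 | x] is a
    # softmax-weighted average of data points -- the optimal denoiser output,
    # illustrating the DAE/score identity discussed above.
    d2 = np.sum((data - x)**2, axis=1)
    w = np.exp(-(d2 - d2.min()) / (2 * sigma**2))   # shift for stability
    w /= w.sum()
    return (w @ data - x) / sigma**2

def langevin_sample(data, sigma=0.5, eta=0.05, steps=800, seed=0):
    # Unadjusted Langevin: x <- x + (eta/2) * score(x) + sqrt(eta) * noise.
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 3.0, data.shape[1])
    for _ in range(steps):
        x = (x + 0.5 * eta * smoothed_score(x, data, sigma)
               + np.sqrt(eta) * rng.normal(size=x.shape))
    return x
```

The chain targets the smoothed density $q_\sigma$ rather than the population density itself, which is precisely the gap the paper's Wasserstein bounds quantify.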

* 22 pages 

Improving Efficiency in Large-Scale Decentralized Distributed Training

Feb 04, 2020
Wei Zhang, Xiaodong Cui, Abdullah Kayi, Mingrui Liu, Ulrich Finkler, Brian Kingsbury, George Saon, Youssef Mroueh, Alper Buyuktosunoglu, Payel Das, David Kung, Michael Picheny


Decentralized Parallel SGD (D-PSGD) and its asynchronous variant, Asynchronous Parallel SGD (AD-PSGD), form a family of distributed learning algorithms that have been demonstrated to perform well for large-scale deep learning tasks. One drawback of (A)D-PSGD is that the spectral gap of the mixing matrix decreases as the number of learners in the system increases, which hampers convergence. In this paper, we investigate techniques to accelerate (A)D-PSGD-based training by improving the spectral gap while minimizing the communication cost. We demonstrate the effectiveness of our proposed techniques with experiments on the 2000-hour Switchboard speech recognition task and the ImageNet computer vision task. On an IBM P9 supercomputer, our system is able to train an LSTM acoustic model in 2.28 hours with 7.5% WER on the Hub5-2000 Switchboard (SWB) test set and 13.3% WER on the CallHome (CH) test set using 64 V100 GPUs, and in 1.98 hours with 7.7% WER on SWB and 13.3% WER on CH using 128 V100 GPUs, the fastest training time reported to date.
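The spectral-gap effect is easy to see numerically. For a ring topology, where each learner averages only with its two neighbors, the gap of the doubly stochastic mixing matrix shrinks as learners are added, so consensus (and hence convergence) slows. A small illustration, not the paper's implementation:

```python
import numpy as np

def ring_mixing(n):
    # Doubly stochastic mixing matrix for a ring of n learners: each
    # learner averages its model with itself and its two neighbors.
    W = np.zeros((n, n))
    for i in range(n):
        W[i, [i, (i - 1) % n, (i + 1) % n]] = 1 / 3
    return W

def spectral_gap(W):
    # 1 - (second-largest eigenvalue magnitude); a larger gap means that
    # repeated local averaging mixes information across learners faster.
    eigs = np.sort(np.abs(np.linalg.eigvals(W)))[::-1]
    return 1.0 - eigs[1]
```

Techniques that densify or randomize the communication graph raise this gap without paying the full all-to-all communication cost, which is the trade-off the paper targets.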

* 45th International Conference on Acoustics, Speech, and Signal Processing (ICASSP'2020) Oral  