Brian Karrer

Scaling Autoregressive Multi-Modal Models: Pretraining and Instruction Tuning

Sep 05, 2023
Lili Yu, Bowen Shi, Ramakanth Pasunuru, Benjamin Muller, Olga Golovneva, Tianlu Wang, Arun Babu, Binh Tang, Brian Karrer, Shelly Sheynin, Candace Ross, Adam Polyak, Russell Howes, Vasu Sharma, Puxin Xu, Hovhannes Tamoyan, Oron Ashual, Uriel Singer, Shang-Wen Li, Susan Zhang, Richard James, Gargi Ghosh, Yaniv Taigman, Maryam Fazel-Zarandi, Asli Celikyilmaz, Luke Zettlemoyer, Armen Aghajanyan

We present CM3Leon (pronounced "Chameleon"), a retrieval-augmented, token-based, decoder-only multi-modal language model capable of generating and infilling both text and images. CM3Leon uses the CM3 multi-modal architecture but additionally shows the extreme benefits of scaling up and tuning on more diverse instruction-style data. It is the first multi-modal model trained with a recipe adapted from text-only language models, including a large-scale retrieval-augmented pre-training stage and a second multi-task supervised fine-tuning (SFT) stage. It is also a general-purpose model that can do both text-to-image and image-to-text generation, allowing us to introduce self-contained contrastive decoding methods that produce high-quality outputs. Extensive experiments demonstrate that this recipe is highly effective for multi-modal models. CM3Leon achieves state-of-the-art performance in text-to-image generation with 5x less training compute than comparable methods (zero-shot MS-COCO FID of 4.88). After SFT, CM3Leon can also demonstrate unprecedented levels of controllability in tasks ranging from language-guided image editing to image-controlled generation and segmentation.
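
The abstract mentions self-contained contrastive decoding for generation. As a rough illustration of the general idea (contrasting a conditioned forward pass against a weaker or unconditional one when scoring the next token), here is a minimal sketch; the function name, cutoff rule, and hyperparameters are assumptions, not CM3Leon's exact variant.

```python
import torch

def contrastive_scores(cond_logits, uncond_logits, alpha=0.1, gamma=1.0):
    """Generic contrastive-decoding step for one position (illustrative only;
    CM3Leon's self-contained variant differs in how the weaker distribution and
    cutoff are chosen). cond_logits / uncond_logits: (vocab_size,) tensors."""
    cond_lp = torch.log_softmax(cond_logits, dim=-1)
    uncond_lp = torch.log_softmax(uncond_logits, dim=-1)
    # Plausibility mask: keep tokens within a factor alpha of the best conditional token.
    cutoff = cond_lp.max() + torch.log(torch.tensor(alpha))
    scores = cond_lp - gamma * uncond_lp  # reward tokens favored by the conditioned pass
    return scores.masked_fill(cond_lp < cutoff, float("-inf"))

# Example with random logits standing in for the two forward passes.
vocab = 8
scores = contrastive_scores(torch.randn(vocab), torch.randn(vocab))
next_token = scores.argmax()
```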

Voicebox: Text-Guided Multilingual Universal Speech Generation at Scale

Jun 23, 2023
Matthew Le, Apoorv Vyas, Bowen Shi, Brian Karrer, Leda Sari, Rashel Moritz, Mary Williamson, Vimal Manohar, Yossi Adi, Jay Mahadeokar, Wei-Ning Hsu

Large-scale generative models such as GPT and DALL-E have revolutionized natural language processing and computer vision research. These models not only generate high-fidelity text or image outputs, but are also generalists that can solve tasks they were not explicitly taught. In contrast, speech generative models are still primitive in terms of scale and task generalization. In this paper, we present Voicebox, the most versatile text-guided generative model for speech at scale. Voicebox is a non-autoregressive flow-matching model trained to infill speech given audio context and text; it is trained on over 50K hours of speech that are neither filtered nor enhanced. Similar to GPT, Voicebox can perform many different tasks through in-context learning, but is more flexible because it can also condition on future context. Voicebox can be used for monolingual or cross-lingual zero-shot text-to-speech synthesis, noise removal, content editing, style conversion, and diverse sample generation. In particular, Voicebox outperforms the state-of-the-art zero-shot TTS model VALL-E on both intelligibility (word error rate 5.9% for VALL-E vs 1.9% for Voicebox) and audio similarity (0.580 vs 0.681) while being up to 20 times faster. See voicebox.metademolab.com for a demo of the model.
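
Since the abstract hinges on the flow-matching objective, here is a minimal training-loss sketch using the standard (optimal-transport) conditional flow-matching formulation; the tensor shapes and the model's call signature are assumptions, not Voicebox's exact recipe.

```python
import torch
import torch.nn.functional as F

def flow_matching_loss(model, x1, cond, sigma_min=1e-5):
    """Generic conditional flow-matching loss, sketched from the standard formulation
    rather than Voicebox's exact recipe. x1: (B, T, D) target speech features;
    cond: conditioning (masked audio context plus aligned text), form assumed."""
    x0 = torch.randn_like(x1)                            # noise endpoint of the path
    t = torch.rand(x1.shape[0], 1, 1, device=x1.device)  # one time per example
    xt = (1.0 - (1.0 - sigma_min) * t) * x0 + t * x1     # point on the straight path
    target_v = x1 - (1.0 - sigma_min) * x0               # velocity the model should predict
    pred_v = model(xt, t.view(-1), cond)                 # hypothetical model signature
    return F.mse_loss(pred_v, target_v)
```

At inference, sampling amounts to integrating the learned velocity field from noise to data with an ODE solver, which is what makes the model non-autoregressive and fast relative to token-by-token decoding.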

Privately generating tabular data using language models

Jun 07, 2023
Alexandre Sablayrolles, Yue Wang, Brian Karrer

Privately generating synthetic data from a table is an important building block of a privacy-first world. We propose and investigate a simple approach: treat each row in a table as a sentence and train a language model with differential privacy. We show this approach obtains competitive results in modelling tabular data across multiple datasets, even at small scales that favor alternative methods based on marginal distributions.

* 9 pages, 3 figures 
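
To make the "each row is a sentence" idea concrete, here is a minimal serialization sketch; the column names and sentence template are illustrative assumptions, not the paper's exact format.

```python
def row_to_sentence(row: dict) -> str:
    """Serialize one table row as a sentence; the template is illustrative only."""
    return ", ".join(f"{col} is {val}" for col, val in row.items())

rows = [{"age": 34, "job": "teacher"}, {"age": 51, "job": "nurse"}]
sentences = [row_to_sentence(r) for r in rows]
print(sentences[0])  # "age is 34, job is teacher"
# A language model is then fine-tuned on such sentences with DP-SGD (per-example
# gradient clipping plus Gaussian noise) and sampled to generate synthetic rows,
# which are parsed back into a table.
```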

Bounding Training Data Reconstruction in Private (Deep) Learning

Jan 28, 2022
Chuan Guo, Brian Karrer, Kamalika Chaudhuri, Laurens van der Maaten

Differential privacy is widely accepted as the de facto method for preventing data leakage in ML, and conventional wisdom suggests that it offers strong protection against privacy attacks. However, existing semantic guarantees for DP focus on membership inference, which may overestimate the adversary's capabilities and is not applicable when membership status itself is non-sensitive. In this paper, we derive the first semantic guarantees for DP mechanisms against training data reconstruction attacks under a formal threat model. We show that two distinct privacy accounting methods -- Renyi differential privacy and Fisher information leakage -- both offer strong semantic protection against data reconstruction attacks.
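
For reference, the Rényi differential privacy guarantee the abstract refers to has the following standard form (this is the textbook definition, not a statement of the paper's reconstruction bounds).

```latex
% Standard Renyi differential privacy (the paper's semantic guarantees are derived
% from accounting of this form together with Fisher information leakage).
A randomized mechanism $M$ satisfies $(\alpha, \epsilon)$-R\'enyi differential privacy if,
for all datasets $D, D'$ differing in one record,
\[
  D_\alpha\!\bigl(M(D) \,\|\, M(D')\bigr)
  \;=\; \frac{1}{\alpha - 1} \,
  \log \mathbb{E}_{x \sim M(D')}\!\left[\left(\frac{\Pr[M(D)=x]}{\Pr[M(D')=x]}\right)^{\alpha}\right]
  \;\le\; \epsilon .
\]
```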

Efficient Nonmyopic Bayesian Optimization via One-Shot Multi-Step Trees

Jun 29, 2020
Shali Jiang, Daniel R. Jiang, Maximilian Balandat, Brian Karrer, Jacob R. Gardner, Roman Garnett

Bayesian optimization is a sequential decision-making framework for optimizing expensive-to-evaluate black-box functions. Computing a full lookahead policy amounts to solving a highly intractable stochastic dynamic program. Myopic approaches, such as expected improvement, are often adopted in practice, but they ignore the long-term impact of the immediate decision. Existing nonmyopic approaches are mostly heuristic and/or computationally expensive. In this paper, we provide the first efficient implementation of general multi-step lookahead Bayesian optimization, formulated as a sequence of nested optimization problems within a multi-step scenario tree. Instead of solving these problems in a nested way, we equivalently optimize all decision variables in the full tree jointly, in a "one-shot" fashion. Combining this with an efficient method for implementing multi-step Gaussian process "fantasization," we demonstrate that multi-step expected improvement is computationally tractable and exhibits performance superior to existing methods on a wide range of benchmarks.
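
The nested problem the abstract describes can be written as a dynamic program; the sketch below shows the two-step case with notation of my own (u is a one-step utility such as expected improvement, D the data so far), followed by the corresponding "one-shot" reformulation. Deeper trees repeat the same construction at every node.

```latex
% Two-step lookahead value of a candidate x (illustrative notation, not the paper's exact form).
\[
  v_2(x \mid D) \;=\; \mathbb{E}_{y \sim p(y \mid x, D)}
  \Bigl[\, u(x, y \mid D) \;+\; \max_{x'} \; v_1\bigl(x' \mid D \cup \{(x, y)\}\bigr) \Bigr],
  \qquad
  v_1(x' \mid D') \;=\; \mathbb{E}_{y' \sim p(y' \mid x', D')}\bigl[\, u(x', y' \mid D') \bigr].
\]
% One-shot reformulation: draw J fantasy outcomes y_j ~ p(y | x, D), attach one decision
% variable x'_j to each, and ascend the Monte Carlo estimate jointly over (x, x'_1, ..., x'_J).
% The y_j are reparameterized from fixed base samples so the objective stays differentiable in x.
\[
  \max_{x,\, x'_1, \dots, x'_J} \;\; \frac{1}{J} \sum_{j=1}^{J}
  \Bigl[\, u(x, y_j \mid D) \;+\; v_1\bigl(x'_j \mid D \cup \{(x, y_j)\}\bigr) \Bigr].
\]
```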

BoTorch: Programmable Bayesian Optimization in PyTorch

Oct 14, 2019
Maximilian Balandat, Brian Karrer, Daniel R. Jiang, Samuel Daulton, Benjamin Letham, Andrew Gordon Wilson, Eytan Bakshy

Bayesian optimization provides sample-efficient global optimization for a broad range of applications, including automatic machine learning, molecular chemistry, and experimental design. We introduce BoTorch, a modern programming framework for Bayesian optimization. Enabled by Monte-Carlo (MC) acquisition functions and auto-differentiation, BoTorch's modular design facilitates flexible specification and optimization of probabilistic models written in PyTorch, radically simplifying implementation of novel acquisition functions. Our MC approach is made practical by a distinctive algorithmic foundation that leverages fast predictive distributions and hardware acceleration. In experiments, we demonstrate the improved sample efficiency of BoTorch relative to other popular libraries. BoTorch is open source and available at https://github.com/pytorch/botorch.
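
A minimal usage sketch of the workflow the abstract describes is shown below; the toy objective is mine, and some function names have shifted across BoTorch releases (e.g. fit_gpytorch_mll versus the older fit_gpytorch_model), so treat this as indicative rather than version-exact.

```python
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll          # fit_gpytorch_model in older releases
from botorch.acquisition import ExpectedImprovement
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

# Toy data: maximize an unknown function on [0, 1]^2.
train_X = torch.rand(10, 2, dtype=torch.double)
train_Y = 1.0 - (train_X - 0.5).pow(2).sum(dim=-1, keepdim=True)

gp = SingleTaskGP(train_X, train_Y)
fit_gpytorch_mll(ExactMarginalLogLikelihood(gp.likelihood, gp))

acq = ExpectedImprovement(gp, best_f=train_Y.max())
bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double)
candidate, value = optimize_acqf(acq, bounds=bounds, q=1, num_restarts=5, raw_samples=64)
print(candidate)  # next point to evaluate
```

Because the acquisition function is an ordinary differentiable PyTorch module here, swapping in a custom Monte Carlo acquisition is a matter of writing a new module rather than re-deriving closed-form expressions, which is the modularity the abstract emphasizes.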

Constrained Bayesian Optimization with Noisy Experiments

Jun 26, 2018
Benjamin Letham, Brian Karrer, Guilherme Ottoni, Eytan Bakshy

Randomized experiments are the gold standard for evaluating the effects of changes to real-world systems. Data in these tests may be difficult to collect and outcomes may have high variance, resulting in potentially large measurement error. Bayesian optimization is a promising technique for efficiently optimizing multiple continuous parameters, but existing approaches degrade in performance when the noise level is high, limiting their applicability to many randomized experiments. We derive an expression for expected improvement under greedy batch optimization with noisy observations and noisy constraints, and develop a quasi-Monte Carlo approximation that allows it to be optimized efficiently. Simulations with synthetic functions show that our method outperforms existing approaches on noisy, constrained problems. We further demonstrate the effectiveness of the method with two real-world experiments conducted at Facebook: optimizing a ranking system, and optimizing server compiler flags.
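
To make the Monte Carlo flavor of the noisy, constrained expected improvement concrete, here is a minimal sketch; the shapes, feasibility convention, and handling of all-infeasible draws are assumptions of mine rather than the paper's exact estimator.

```python
import torch

def noisy_constrained_ei(obj_samples, con_samples, cand_obj, cand_con):
    """Monte Carlo estimate in the spirit of noisy, constrained expected improvement.
      obj_samples: (S, n) joint posterior draws of the objective at the n observed points
      con_samples: (S, n) matching draws of the constraint (feasible when <= 0)
      cand_obj:    (S,)   objective draws at the candidate point (same joint samples)
      cand_con:    (S,)   constraint draws at the candidate point
    For a quasi-Monte Carlo version, drive the S posterior draws with scrambled Sobol
    normal samples instead of i.i.d. normals."""
    feasible = con_samples <= 0
    best = torch.where(feasible, obj_samples,
                       torch.full_like(obj_samples, float("-inf"))).max(dim=1).values
    # Simplification: if a draw has no feasible observed point, count zero improvement.
    best = torch.where(torch.isinf(best), cand_obj, best)
    improvement = (cand_obj - best).clamp_min(0.0) * (cand_con <= 0).to(cand_obj.dtype)
    return improvement.mean()

# Tiny example with random draws standing in for GP posterior samples.
S, n = 128, 6
print(noisy_constrained_ei(torch.randn(S, n), torch.randn(S, n), torch.randn(S), torch.randn(S)))
```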

The decoupled extended Kalman filter for dynamic exponential-family factorization models

Jun 26, 2018
Carlos Alberto Gomez-Uribe, Brian Karrer

We specialize the decoupled extended Kalman filter (DEKF) for online parameter learning in factorization models, including factorization machines and matrix and tensor factorization, and illustrate the effectiveness of the approach through simulations. Learning model parameters through the DEKF makes factorization models more broadly useful by allowing more flexible observations through the entire exponential family, modeling parameter drift, and producing parameter uncertainty estimates that can enable explore/exploit and other applications. We use more general parameter dynamics than the standard DEKF, allowing parameter drift while encouraging reasonable values. We also present an alternate derivation of the regular extended Kalman filter and the DEKF that connects these methods to natural gradient methods, and suggests a similarly decoupled version of the iterated extended Kalman filter.
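
For reference, the measurement update of the standard extended Kalman filter that the DEKF specializes is given below in generic notation (not the paper's exact parameterization); decoupling amounts to maintaining a separate covariance block per parameter group (e.g. per latent factor) instead of the full joint covariance P.

```latex
% Generic EKF measurement update for observation model y_t = h(theta) + noise with covariance R_t.
\[
\begin{aligned}
  H_t &= \left.\frac{\partial h(\theta)}{\partial \theta}\right|_{\theta = \hat\theta_{t|t-1}}, &
  K_t &= P_{t|t-1} H_t^{\top} \bigl(H_t P_{t|t-1} H_t^{\top} + R_t\bigr)^{-1}, \\
  \hat\theta_{t|t} &= \hat\theta_{t|t-1} + K_t \bigl(y_t - h(\hat\theta_{t|t-1})\bigr), &
  P_{t|t} &= (I - K_t H_t)\, P_{t|t-1}.
\end{aligned}
\]
```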

End-to-end Planning of Fixed Millimeter-Wave Networks

May 20, 2017
Tim Danford, Onur Filiz, Jing Huang, Brian Karrer, Manohar Paluri, Guan Pang, Vish Ponnampalam, Nicolas Stier-Moses, Birce Tezel

This article discusses a framework to support the design and end-to-end planning of fixed millimeter-wave networks. Compared to traditional techniques, the framework allows an organization to quickly plan a deployment in a cost-effective way. We start by using LiDAR data (essentially, a 3D point cloud captured from a city) to estimate potential sites to deploy antennas and whether there is line-of-sight between them. With that data in hand, we use combinatorial optimization techniques to determine the optimal set of locations and how they should communicate with each other, to satisfy engineering (e.g., latency, polarity), design (e.g., reliability), and financial (e.g., total cost of operation) constraints. The primary goal is to connect as many people as possible to the network. Our methodology can be used for strategic planning, when an organization is deciding whether to adopt millimeter-wave technology or choosing between locations, or for operational planning, when conducting a detailed design of the actual network to be deployed in a selected location.
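
As a rough illustration of the combinatorial stage, the site-selection problem can be cast as a coverage-style integer program; the sets, variables, and constraints below are a deliberate simplification of mine (the paper's model also encodes latency, polarity, and reliability requirements).

```latex
% Simplified coverage ILP: S = candidate sites, D = demand points, w_d = people at demand
% point d, c_s = cost of opening site s, B = budget, LOS(d) = sites with line-of-sight to d.
% x_s = 1 if site s is opened, y_d = 1 if demand point d is connected.
\[
\begin{aligned}
  \max_{x,\, y}\;\; & \sum_{d \in D} w_d\, y_d \\
  \text{s.t.}\;\; & y_d \le \sum_{s \in \mathrm{LOS}(d)} x_s \quad \forall d \in D, \\
  & \sum_{s \in S} c_s\, x_s \le B, \qquad x_s,\, y_d \in \{0, 1\}.
\end{aligned}
\]
```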
