Richard G. Baraniuk

InfoCNF: An Efficient Conditional Continuous Normalizing Flow with Adaptive Solvers

Dec 09, 2019
Tan M. Nguyen, Animesh Garg, Richard G. Baraniuk, Anima Anandkumar

Continuous Normalizing Flows (CNFs) have emerged as promising deep generative models for a wide range of tasks thanks to their invertibility and exact likelihood estimation. However, conditioning CNFs on signals of interest for conditional image generation and downstream predictive tasks is inefficient due to the high-dimensional latent code generated by the model, which needs to be of the same size as the input data. In this paper, we propose InfoCNF, an efficient conditional CNF that partitions the latent space into a class-specific supervised code and an unsupervised code that is shared among all classes for efficient use of labeled information. Since the partitioning strategy (slightly) increases the number of function evaluations (NFEs), InfoCNF also employs gating networks to learn the error tolerances of its ordinary differential equation (ODE) solvers for better speed and performance. We show empirically that InfoCNF improves the test accuracy over the baseline while yielding comparable likelihood scores and reducing the NFEs on CIFAR10. Furthermore, applying the same partitioning strategy in InfoCNF on time-series data helps improve extrapolation performance.
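
To make the partitioning concrete, here is a minimal NumPy sketch that splits a flat latent code into a class-specific supervised part and a shared unsupervised part, with a toy linear head acting only on the supervised part. The function name, the 64-dimensional split, and the linear classifier are illustrative assumptions, not details taken from the paper.

import numpy as np

def partition_latent(z, sup_dim):
    """Split a flat latent code into a class-specific supervised part and an
    unsupervised part shared by all classes."""
    return z[..., :sup_dim], z[..., sup_dim:]

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Toy example: a 3072-dim latent code (a flattened 32x32x3 image), with the
# first 64 dimensions treated as the supervised code feeding a linear head.
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 3072))           # batch of latent codes from the flow
W = 0.01 * rng.normal(size=(64, 10))     # classifier weights on the supervised code
z_sup, z_unsup = partition_latent(z, sup_dim=64)
class_probs = softmax(z_sup @ W)         # the supervised loss touches only z_sup
print(class_probs.shape)                 # (8, 10); z_unsup enters only the likelihood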

* 17 pages, 14 figures, 2 tables 

The Implicit Regularization of Ordinary Least Squares Ensembles

Oct 10, 2019
Daniel LeJeune, Hamid Javadi, Richard G. Baraniuk

Ensemble methods that average over a collection of independent predictors, each limited to a subsampling of both the examples and features of the training data, command a significant presence in machine learning (the ever-popular random forest being a prime example), yet the nature of the subsampling effect, particularly on the features, is not well understood. We study the case of an ensemble of linear predictors, where each individual predictor is fit using ordinary least squares on a random submatrix of the data matrix. We show that, under standard Gaussianity assumptions, when the number of features selected for each predictor is optimally tuned, the asymptotic risk of a large ensemble equals the asymptotic ridge regression risk, which is known to be optimal among linear predictors in this setting. In addition to eliciting this implicit regularization that results from subsampling, we also connect this ensemble to the dropout technique used in training deep (neural) networks, another strategy shown to have a ridge-like regularizing effect.
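
A minimal NumPy sketch of the ensemble under study: each member is an OLS fit on a random subset of rows and columns of the data matrix, and predictions are averaged. Function and parameter names here are illustrative, not the paper's notation.

import numpy as np

def ols_ensemble_predict(X, y, X_test, n_estimators=100, n_rows=None, n_cols=None, seed=0):
    """Average the predictions of OLS fits, each computed on a random
    row/column submatrix of (X, y)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    n_rows = n_rows or n
    n_cols = n_cols or p
    preds = np.zeros(X_test.shape[0])
    for _ in range(n_estimators):
        rows = rng.choice(n, size=n_rows, replace=False)
        cols = rng.choice(p, size=n_cols, replace=False)
        beta, *_ = np.linalg.lstsq(X[np.ix_(rows, cols)], y[rows], rcond=None)
        preds += X_test[:, cols] @ beta
    return preds / n_estimators

# Toy usage with Gaussian data; choosing 40 of 100 features per member mimics
# tuning the feature-subsampling ratio whose optimum the paper characterizes.
rng = np.random.default_rng(1)
X, beta_true = rng.normal(size=(200, 100)), rng.normal(size=100)
y = X @ beta_true + rng.normal(size=200)
X_test = rng.normal(size=(10, 100))
print(ols_ensemble_predict(X, y, X_test, n_rows=150, n_cols=40))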

* 21 pages, 4 figures 

Drawing early-bird tickets: Towards more efficient training of deep networks

Sep 26, 2019
Haoran You, Chaojian Li, Pengfei Xu, Yonggan Fu, Yue Wang, Xiaohan Chen, Yingyan Lin, Zhangyang Wang, Richard G. Baraniuk

(Frankle & Carbin, 2019) shows that there exist winning tickets (small but critical subnetworks) for dense, randomly initialized networks, that can be trained alone to achieve comparable accuracies to the latter in a similar number of iterations. However, the identification of these winning tickets still requires the costly train-prune-retrain process, limiting their practical benefits. In this paper, we discover for the first time that the winning tickets can be identified at the very early training stage, which we term as early-bird (EB) tickets, via low-cost training schemes (e.g., early stopping and low-precision training) at large learning rates. Our finding of EB tickets is consistent with recently reported observations that the key connectivity patterns of neural networks emerge early. Furthermore, we propose a mask distance metric that can be used to identify EB tickets with low computational overhead, without needing to know the true winning tickets that emerge after the full training. Finally, we leverage the existence of EB tickets and the proposed mask distance to develop efficient training methods, which are achieved by first identifying EB tickets via low-cost schemes, and then continuing to train merely the EB tickets towards the target accuracy. Experiments based on various deep networks and datasets validate: 1) the existence of EB tickets, and the effectiveness of mask distance in efficiently identifying them; and 2) that the proposed efficient training via EB tickets can achieve up to 4.7x energy savings while maintaining comparable or even better accuracy, demonstrating a promising and easily adopted method for tackling cost-prohibitive deep network training.

Out-of-Distribution Detection Using Neural Rendering Generative Models

Jul 10, 2019
Yujia Huang, Sihui Dai, Tan Nguyen, Richard G. Baraniuk, Anima Anandkumar

Out-of-distribution (OoD) detection is a natural downstream task for deep generative models, due to their ability to learn the input probability distribution. There are two main classes of approaches for OoD detection using deep generative models: those based on the likelihood and those based on the reconstruction loss. However, both approaches are unable to carry out OoD detection effectively, especially when the OoD samples have smaller variance than the training samples. For instance, both flow-based and VAE models assign higher likelihood to images from SVHN when trained on CIFAR-10 images. We use a recently proposed generative model known as the neural rendering model (NRM) and derive metrics for OoD detection. We show that the NRM unifies both approaches, since it provides a likelihood estimate and also carries out reconstruction in each layer of the neural network. Among various measures, we found the joint likelihood of the latent variables to be the most effective one for OoD detection. Our results show that, when trained on CIFAR-10, the NRM assigns lower likelihood (of latent variables) to SVHN images. Additionally, we show that this metric is consistent across other OoD datasets. To the best of our knowledge, this is the first work to show consistently lower likelihood for OoD data with smaller variance using deep generative models.
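
A generic thresholding sketch of how such a likelihood score can be turned into a detector: flag test points whose latent log-likelihood falls below a low quantile of the in-distribution scores. The NRM-specific likelihood computation is not shown, and the 5% quantile is an arbitrary placeholder, not the paper's choice.

import numpy as np

def ood_flags(loglik_in, loglik_test, quantile=0.05):
    """Flag test points whose latent log-likelihood falls below a low
    quantile of the in-distribution (training/validation) scores."""
    threshold = np.quantile(loglik_in, quantile)
    return loglik_test < threshold

# Toy usage with synthetic scores standing in for NRM latent log-likelihoods.
rng = np.random.default_rng(0)
scores_in = rng.normal(loc=0.0, scale=1.0, size=1000)
scores_test = rng.normal(loc=-3.0, scale=1.0, size=10)
print(ood_flags(scores_in, scores_test))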

IdeoTrace: A Framework for Ideology Tracing with a Case Study on the 2016 U.S. Presidential Election

May 30, 2019
Indu Manickam, Andrew S. Lan, Gautam Dasarathy, Richard G. Baraniuk

The 2016 United States presidential election has been characterized as a period of extreme divisiveness that was exacerbated on social media by the influence of fake news, trolls, and social bots. However, the extent to which the public became more polarized in response to these influences over the course of the election is not well understood. In this paper, we propose IdeoTrace, a framework for (i) jointly estimating the ideology of social media users and news websites and (ii) tracing changes in user ideology over time. We apply this framework to the last two months of the election period for a group of 47,508 Twitter users and demonstrate that both liberal and conservative users became more polarized over time.

* 9 pages, 4 figures, submitted to ASONAM 2019 

Thresholding Graph Bandits with GrAPL

May 22, 2019
Daniel LeJeune, Gautam Dasarathy, Richard G. Baraniuk

In this paper, we introduce a new online decision-making paradigm that we call Thresholding Graph Bandits. The main goal is to efficiently identify a subset of arms in a multi-armed bandit problem whose means are above a specified threshold. While the arms in such problems are traditionally assumed to be independent, in our paradigm we further suppose that we have access to the similarity between the arms in the form of a graph, allowing us to gain information about the arm means from fewer samples. Such settings play a key role in a wide range of modern decision-making problems where rapid decisions must be made despite the large number of options available at each time. We present GrAPL, a novel algorithm for the thresholding graph bandit problem. We demonstrate theoretically that this algorithm is effective in taking advantage of the graph structure when available and of the reward function homophily (that strongly connected arms have similar rewards) when favorable. We confirm these theoretical findings via experiments on both synthetic and real data.
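
To illustrate how a similarity graph can be folded into the mean estimates, here is a minimal Laplacian-regularized estimator in NumPy. GrAPL's actual estimator, confidence terms, and sampling rule differ in the details, and the function names and the regularization weight are ours.

import numpy as np

def laplacian_smoothed_means(counts, reward_sums, L, lam=1.0):
    """Estimate arm means by solving a ridge-like system built from the pull
    counts and the graph Laplacian, so connected arms share information."""
    n = len(counts)
    C = np.diag(counts.astype(float))
    return np.linalg.solve(C + lam * L + 1e-9 * np.eye(n), reward_sums)

def arms_above_threshold(counts, reward_sums, L, tau, lam=1.0):
    """Indices of arms whose smoothed mean estimate exceeds the threshold tau."""
    return np.where(laplacian_smoothed_means(counts, reward_sums, L, lam) > tau)[0]

# Toy usage on a 3-arm path graph: arm 2 has never been pulled, but borrows
# information from its neighbor through the Laplacian term.
L = np.array([[ 1., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])
counts = np.array([5, 5, 0])
reward_sums = np.array([4.0, 4.5, 0.0])
print(arms_above_threshold(counts, reward_sums, L, tau=0.5))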

* 15 pages, 3 figures 

RACE: Sub-Linear Memory Sketches for Approximate Near-Neighbor Search on Streaming Data

Apr 09, 2019
Benjamin Coleman, Anshumali Shrivastava, Richard G. Baraniuk

We present the first sublinear memory sketch which can be queried to find the $v$ nearest neighbors in a dataset. Our online sketching algorithm can compress an $N$-element dataset to a sketch of size $O(N^b \log^3{N})$ in $O(N^{b+1} \log^3{N})$ time, where $b < 1$ when the query satisfies a data-dependent near-neighbor stability condition. We achieve data-dependent sublinear space by combining recent advances in locality sensitive hashing (LSH)-based estimators with compressed sensing. Our results shed new light on the memory-accuracy tradeoff for near-neighbor search. The techniques presented reveal a deep connection between the fundamental compressed sensing (or heavy hitters) recovery problem and near-neighbor search, leading to new insight for geometric search problems and implications for sketching algorithms.
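
A simplified sketch in the spirit of this data structure: an array of counters indexed by SimHash-style LSH, where the average count a query hashes to estimates how many stored points collide with it, i.e., lie nearby. This illustrates only the counting idea under assumed parameters; the paper's full algorithm, including its compressed-sensing recovery step, is not reproduced here.

import numpy as np

class LSHCountSketch:
    """Array of counters indexed by signed-random-projection (SimHash) LSH."""

    def __init__(self, dim, n_rows=32, n_bits=8, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.normal(size=(n_rows, n_bits, dim))   # one hash function per row
        self.counts = np.zeros((n_rows, 2 ** n_bits))

    def _buckets(self, x):
        bits = (np.einsum('rbd,d->rb', self.planes, x) > 0).astype(int)
        return bits @ (1 << np.arange(bits.shape[1]))          # one bucket id per row

    def add(self, x):
        self.counts[np.arange(self.counts.shape[0]), self._buckets(x)] += 1

    def query(self, q):
        return self.counts[np.arange(self.counts.shape[0]), self._buckets(q)].mean()

# Toy usage: points stored near the query raise its collision estimate.
rng = np.random.default_rng(1)
sketch = LSHCountSketch(dim=50)
cluster = rng.normal(size=50)
for _ in range(100):
    sketch.add(cluster + 0.1 * rng.normal(size=50))
print(sketch.query(cluster), sketch.query(-cluster))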

Representing Formal Languages: A Comparison Between Finite Automata and Recurrent Neural Networks

Feb 27, 2019
Joshua J. Michalenko, Ameesh Shah, Abhinav Verma, Richard G. Baraniuk, Swarat Chaudhuri, Ankit B. Patel

We investigate the internal representations that a recurrent neural network (RNN) uses while learning to recognize a regular formal language. Specifically, we train an RNN on positive and negative examples from a regular language, and ask whether there is a simple decoding function that maps states of this RNN to states of the minimal deterministic finite automaton (MDFA) for the language. Our experiments show that such a decoding function indeed exists, and that it maps states of the RNN not to MDFA states, but to states of an {\em abstraction} obtained by clustering small sets of MDFA states into "superstates". A qualitative analysis reveals that the abstraction often has a simple interpretation. Overall, the results suggest a strong structural relationship between the internal representations used by RNNs and finite automata, and explain the well-known ability of RNNs to recognize formal grammatical structure.
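
A minimal sketch of this kind of decoding experiment: fit a simple multinomial classifier from recorded RNN hidden states to (possibly clustered) MDFA states and read off its accuracy. The choice of logistic regression and the synthetic stand-in data are assumptions for illustration; the paper's exact decoding function may differ.

import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_state_decoder(rnn_states, mdfa_states):
    """Fit a simple decoder from RNN hidden states to MDFA (super)states;
    its held-out accuracy measures how recoverable the automaton structure
    is from the network's internal representation."""
    decoder = LogisticRegression(max_iter=1000)
    decoder.fit(rnn_states, mdfa_states)
    return decoder

# Synthetic stand-in data: 500 hidden states of dimension 32, each labeled with
# one of 5 automaton states. Real data would come from running the trained RNN
# and the MDFA on the same input strings and pairing states position by position.
rng = np.random.default_rng(0)
hidden = rng.normal(size=(500, 32))
labels = rng.integers(0, 5, size=500)
decoder = fit_state_decoder(hidden, labels)
print(decoder.score(hidden, labels))   # near chance here, since the data are random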

* 15 pages, 13 figures. Accepted to ICLR 2019 

Adaptive Estimation for Approximate k-Nearest-Neighbor Computations

Feb 25, 2019
Daniel LeJeune, Richard G. Baraniuk, Reinhard Heckel

Algorithms often carry out equally many computations for "easy" and "hard" problem instances. In particular, algorithms for finding nearest neighbors typically have the same running time regardless of the particular problem instance. In this paper, we consider the approximate k-nearest-neighbor problem, which is the problem of finding a subset of O(k) points in a given set of points that contains the set of k nearest neighbors of a given query point. We propose an algorithm based on adaptively estimating the distances, and show that it is essentially optimal among algorithms that are only allowed to adaptively estimate distances. We then demonstrate both theoretically and experimentally that the algorithm can achieve significant speedups relative to the naive method.
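
As a hedged sketch of adaptive distance estimation, the code below estimates squared distances from a growing random subset of coordinates and progressively discards far-away points, so easy instances finish after seeing few coordinates. The batch size, halving schedule, and slack factor are illustrative choices; the paper's algorithm instead uses carefully calibrated confidence bounds.

import numpy as np

def adaptive_knn_candidates(X, q, k, batch=32, slack=2.0, seed=0):
    """Return a candidate set of roughly slack*k points expected to contain the
    k nearest neighbors of q, using partial squared-distance estimates."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    perm = rng.permutation(d)          # order in which coordinates are revealed
    active = np.arange(n)              # surviving candidate indices
    est = np.zeros(n)                  # accumulated partial squared distances
    sampled = 0
    while sampled < d and len(active) > slack * k:
        cols = perm[sampled:sampled + batch]
        est[active] += ((X[np.ix_(active, cols)] - q[cols]) ** 2).sum(axis=1)
        sampled += len(cols)
        # keep the half of the active set with the smallest partial estimates,
        # but never shrink below the slack*k candidate budget
        keep_count = max(int(slack * k), len(active) // 2)
        active = active[np.argsort(est[active])[:keep_count]]
    return active

# Toy usage: 5000 points in 200 dimensions, candidates for the 10 nearest neighbors of q.
rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 200))
q = rng.normal(size=200)
print(adaptive_knn_candidates(X, q, k=10))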

* 11 pages, 2 figures. To appear in AISTATS 2019 