
Scott Linderman

Inferring Inference

Oct 13, 2023
Rajkumar Vasudeva Raju, Zhe Li, Scott Linderman, Xaq Pitkow

Patterns of microcircuitry suggest that the brain has an array of repeated canonical computational units. Yet neural representations are distributed, so the relevant computations may only be related indirectly to single-neuron transformations. It thus remains an open challenge how to define canonical distributed computations. We integrate normative and algorithmic theories of neural computation into a mathematical framework for inferring canonical distributed computations from large-scale neural activity patterns. At the normative level, we hypothesize that the brain creates a structured internal model of its environment, positing latent causes that explain its sensory inputs, and uses those sensory inputs to infer the latent causes. At the algorithmic level, we propose that this inference process is a nonlinear message-passing algorithm on a graph-structured model of the world. Given a time series of neural activity during a perceptual inference task, our framework finds (i) the neural representation of relevant latent variables, (ii) interactions between these variables that define the brain's internal model of the world, and (iii) message-functions specifying the inference algorithm. Due to the symmetries inherent in any canonical computation, these targeted computational properties are statistically distinguishable only up to a global transformation. As a demonstration, we simulate recordings for a model brain that implicitly implements an approximate inference algorithm on a probabilistic graphical model. Given its external inputs and noisy neural activity, we recover the latent variables, their neural representation and dynamics, and canonical message-functions. We highlight features of experimental design needed to successfully extract canonical computations from neural data. Overall, this framework provides a new tool for discovering interpretable structure in neural recordings.
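
As a rough illustration of the kind of canonical message-passing computation the framework aims to recover, here is a minimal sum-product belief propagation pass on a toy three-node binary chain; the potentials and evidence values are illustrative assumptions, not the paper's learned message-functions.

# Minimal sketch: sum-product belief propagation on a binary chain x0 - x1 - x2.
import numpy as np

J = 0.8                                                # coupling strength (assumed)
psi = np.exp(J * np.array([[1., -1.], [-1., 1.]]))     # pairwise potential
evidence = [np.array([0.9, 0.1]),                      # node 0 prefers state 0
            np.array([0.5, 0.5]),                      # node 1 uninformative
            np.array([0.2, 0.8])]                      # node 2 prefers state 1

# Forward and backward messages along the chain.
m01 = psi.T @ evidence[0]                              # message from node 0 to 1
m12 = psi.T @ (evidence[1] * m01)                      # message from node 1 to 2
m21 = psi @ evidence[2]                                # message from node 2 to 1
m10 = psi @ (evidence[1] * m21)                        # message from node 1 to 0

# Posterior marginals: local evidence times incoming messages, normalized.
b0 = evidence[0] * m10; b0 /= b0.sum()
b1 = evidence[1] * m01 * m21; b1 /= b1.sum()
b2 = evidence[2] * m12; b2 /= b2.sum()
print(b0, b1, b2)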

* 26 pages, 4 figures and 1 supplementary figure 

NAS-X: Neural Adaptive Smoothing via Twisting

Aug 28, 2023
Dieterich Lawson, Michael Li, Scott Linderman

We present Neural Adaptive Smoothing via Twisting (NAS-X), a method for learning and inference in sequential latent variable models based on reweighted wake-sleep (RWS). NAS-X works with both discrete and continuous latent variables, and leverages smoothing SMC to fit a broader range of models than traditional RWS methods. We test NAS-X on discrete and continuous tasks and find that it substantially outperforms previous variational and RWS-based methods in inference and parameter recovery.
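
For intuition, here is a minimal sketch of the reweighted-wake-sleep weighting that NAS-X builds on, using plain importance sampling as a stand-in for smoothing SMC on a toy Gaussian model; the model, proposal, and all values are assumptions for illustration only.

# Minimal RWS-style sketch: self-normalized weights drive both model and proposal updates.
import numpy as np

rng = np.random.default_rng(0)

def log_joint(z, y, theta):
    # p(z) = N(theta, 1), p(y | z) = N(z, 1)
    return -0.5 * ((z - theta) ** 2 + (y - z) ** 2)

def log_proposal(z, y, phi):
    # q(z | y) = N(phi * y, 1)
    return -0.5 * (z - phi * y) ** 2

y, theta, phi, K = 1.5, 0.0, 0.5, 64
z = phi * y + rng.standard_normal(K)           # K samples from q(z | y)
log_w = log_joint(z, y, theta) - log_proposal(z, y, phi)
w = np.exp(log_w - log_w.max()); w /= w.sum()  # self-normalized importance weights

# Wake-phase updates: model and proposal gradients reuse the same weights.
grad_theta = np.sum(w * (z - theta))           # d/dtheta of weighted log-joint
grad_phi = np.sum(w * (z - phi * y) * y)       # d/dphi of weighted log q(z | y)
print(grad_theta, grad_phi)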

SIXO: Smoothing Inference with Twisted Objectives

Jun 20, 2022
Dieterich Lawson, Allan Raventós, Andrew Warrington, Scott Linderman

Sequential Monte Carlo (SMC) is an inference algorithm for state space models that approximates the posterior by sampling from a sequence of target distributions. The target distributions are often chosen to be the filtering distributions, but these ignore information from future observations, leading to practical and theoretical limitations in inference and model learning. We introduce SIXO, a method that instead learns targets that approximate the smoothing distributions, incorporating information from all observations. The key idea is to use density ratio estimation to fit functions that warp the filtering distributions into the smoothing distributions. We then use SMC with these learned targets to define a variational objective for model and proposal learning. SIXO yields provably tighter log marginal lower bounds and offers significantly more accurate posterior inferences and parameter estimates in a variety of domains.
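
A minimal sketch of the density-ratio idea: train a logistic classifier to separate true (state, future observation) pairs from shuffled pairs; its logit then estimates the log ratio that warps filtering targets toward smoothing targets. The toy linear-Gaussian data and hand-rolled linear classifier below are purely illustrative assumptions.

# Minimal density-ratio-estimation sketch for a twist function.
import numpy as np

rng = np.random.default_rng(0)
N = 4000
x = rng.standard_normal(N)                    # latent state at time t
y_fut = x + 0.5 * rng.standard_normal(N)      # stand-in for a future observation

# Positive class: true pairs.  Negative class: x paired with shuffled y_fut.
feats = np.concatenate([np.stack([x, y_fut], 1),
                        np.stack([x, rng.permutation(y_fut)], 1)])
labels = np.concatenate([np.ones(N), np.zeros(N)])

def design(f):
    # Features (x, y, x*y, 1) let a linear classifier capture the dependence.
    return np.stack([f[:, 0], f[:, 1], f[:, 0] * f[:, 1], np.ones(len(f))], 1)

# Logistic regression fit by gradient ascent on the log-likelihood.
w = np.zeros(4)
X = design(feats)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w += 0.1 * X.T @ (labels - p) / len(labels)

# The logit is the learned log density ratio, i.e. an (unnormalized) twist value.
log_twist = design(np.array([[0.8, 1.0]])) @ w
print(log_twist)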

* v2: Updates for clarity throughout. Results unchanged 

Streaming Inference for Infinite Non-Stationary Clustering

May 02, 2022
Rylan Schaeffer, Gabrielle Kaili-May Liu, Yilun Du, Scott Linderman, Ila Rani Fiete

Learning from a continuous stream of non-stationary data in an unsupervised manner is arguably one of the most common and most challenging settings facing intelligent agents. Here, we attack learning under all three conditions (unsupervised, streaming, non-stationary) in the context of clustering, also known as mixture modeling. We introduce a novel clustering algorithm that endows mixture models with the ability to create new clusters online, as demanded by the data, in a probabilistic, time-varying, and principled manner. To achieve this, we first define a novel stochastic process called the Dynamical Chinese Restaurant Process (Dynamical CRP), which is a non-exchangeable distribution over partitions of a set; next, we show that the Dynamical CRP provides a non-stationary prior over cluster assignments and yields an efficient streaming variational inference algorithm. We conclude with experiments showing that the Dynamical CRP can be applied on diverse synthetic and real data with Gaussian and non-Gaussian likelihoods.
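
A minimal sketch of a CRP-style streaming prior with time-decayed counts, in the spirit of the Dynamical CRP; the exponential decay kernel and parameter values here are assumptions for illustration, not the paper's exact construction.

# Minimal sketch: seat each new point at an existing cluster with probability
# proportional to its time-decayed count, or open a new cluster with weight alpha.
import numpy as np

rng = np.random.default_rng(0)
alpha, decay, T = 1.0, 0.9, 20      # concentration, per-step decay, number of points
counts = np.zeros(0)                # time-weighted cluster counts
assignments = []

for t in range(T):
    counts *= decay                               # older assignments fade over time
    probs = np.append(counts, alpha)              # existing clusters + a new one
    probs /= probs.sum()
    k = rng.choice(len(probs), p=probs)
    if k == len(counts):                          # open a new cluster
        counts = np.append(counts, 0.0)
    counts[k] += 1.0
    assignments.append(k)

print(assignments)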

* Published at the Workshop on Agent Learning in Open-Endedness (ALOE) at ICLR 2022 

Bayesian recurrent state space model for rs-fMRI

Nov 14, 2020
Arunesh Mittal, Scott Linderman, John Paisley, Paul Sajda

We propose a hierarchical Bayesian recurrent state space model for modeling switching network connectivity in resting state fMRI data. Our model allows us to uncover shared network patterns across disease conditions. We evaluate our method on the ADNI2 dataset by inferring latent state patterns corresponding to altered neural circuits in individuals with Mild Cognitive Impairment (MCI). In addition to states shared across healthy individuals and those with MCI, we discover latent states that are predominantly observed in individuals with MCI. Our model outperforms the current state-of-the-art deep learning method on the ADNI2 dataset.
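
For orientation, a generic switching state-space generative sketch: a discrete Markov chain over connectivity regimes, each with its own linear dynamics and Gaussian noise. The paper's hierarchical Bayesian recurrent model is considerably richer than this toy, and all values below are illustrative assumptions.

# Minimal switching state-space generative sketch.
import numpy as np

rng = np.random.default_rng(0)
K, D, T = 2, 3, 100                              # regimes, latent dim, timesteps
P = np.array([[0.95, 0.05], [0.10, 0.90]])       # regime transition matrix
A = [0.9 * np.eye(D), 0.5 * np.eye(D)]           # per-regime dynamics (assumed)
C = rng.standard_normal((5, D))                  # emission matrix to 5 regions

z, x = 0, np.zeros(D)
states, obs = [], []
for t in range(T):
    z = rng.choice(K, p=P[z])                    # switch connectivity regime
    x = A[z] @ x + 0.1 * rng.standard_normal(D)  # latent dynamics under regime z
    y = C @ x + 0.1 * rng.standard_normal(5)     # observed BOLD-like signal
    states.append(z); obs.append(y)

print(np.bincount(states, minlength=K))          # time spent in each regime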

* Machine Learning for Health (ML4H) at NeurIPS 2020 - Extended Abstract 

Learning Latent Permutations with Gumbel-Sinkhorn Networks

Feb 23, 2018
Gonzalo Mena, David Belanger, Scott Linderman, Jasper Snoek

Permutations and matchings are core building blocks in a variety of latent variable models, as they allow us to align, canonicalize, and sort data. Learning in such models is difficult, however, because exact marginalization over these combinatorial objects is intractable. In response, this paper introduces a collection of new methods for end-to-end learning in such models that approximate discrete maximum-weight matching using the continuous Sinkhorn operator. Sinkhorn iteration is attractive because it functions as a simple, easy-to-implement analog of the softmax operator. With this, we can define the Gumbel-Sinkhorn method, an extension of the Gumbel-Softmax method (Jang et al., 2016; Maddison et al., 2016) to distributions over latent matchings. We demonstrate the effectiveness of our method by outperforming competitive baselines on a range of qualitatively different tasks: sorting numbers, solving jigsaw puzzles, and identifying neural signals in worms.
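
A minimal sketch of the Gumbel-Sinkhorn relaxation: perturb a score matrix with Gumbel noise, then repeatedly normalize rows and columns in the log domain, yielding a nearly doubly stochastic soft permutation. Parameter values below are illustrative assumptions.

# Minimal Gumbel-Sinkhorn sketch.
import numpy as np

rng = np.random.default_rng(0)

def logsumexp(a, axis):
    m = a.max(axis=axis, keepdims=True)
    return m + np.log(np.exp(a - m).sum(axis=axis, keepdims=True))

def gumbel_sinkhorn(scores, tau=0.1, n_iters=50):
    gumbel = -np.log(-np.log(rng.uniform(size=scores.shape)))  # Gumbel noise
    log_alpha = (scores + gumbel) / tau                        # temperature-scaled logits
    for _ in range(n_iters):
        log_alpha -= logsumexp(log_alpha, axis=1)              # normalize rows
        log_alpha -= logsumexp(log_alpha, axis=0)              # normalize columns
    return np.exp(log_alpha)

scores = rng.standard_normal((4, 4))       # stand-in for learned matching scores
soft_perm = gumbel_sinkhorn(scores)
print(soft_perm.round(2))                  # rows and columns each sum to ~1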

* ICLR 2018  