Ryan P. Adams

Generative Marginalization Models

Oct 19, 2023
Sulin Liu, Peter J. Ramadge, Ryan P. Adams

We introduce marginalization models (MaMs), a new family of generative models for high-dimensional discrete data. They offer scalable and flexible generative modeling with tractable likelihoods by explicitly modeling all induced marginal distributions. Marginalization models enable fast evaluation of arbitrary marginal probabilities with a single forward pass of the neural network, which overcomes a major limitation of methods with exact marginal inference, such as autoregressive models (ARMs). We propose scalable methods for learning the marginals, grounded in the concept of "marginalization self-consistency". Unlike previous methods, MaMs support scalable training of any-order generative models for high-dimensional problems in the energy-based training setting, where the goal is to match the learned distribution to a given desired probability specified by an unnormalized (log) probability function, such as an energy function or reward function. We demonstrate the effectiveness of the proposed model on a variety of discrete data distributions, including binary images, language, physical systems, and molecules, in both maximum likelihood and energy-based training settings. MaMs achieve orders-of-magnitude speedups in evaluating marginal probabilities in both settings. For energy-based training tasks, MaMs enable any-order generative modeling of high-dimensional problems beyond the capability of previous methods. Code is at https://github.com/PrincetonLIPS/MaM.
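
The key training signal is "marginalization self-consistency": the marginal probability of a partially observed vector must equal the sum of the marginals obtained by filling in any one missing position. Below is a minimal sketch of that constraint as a squared penalty; the `marginal_logp` function is a toy, randomly parameterized stand-in for the learned neural network, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8                                 # toy dimension of a binary vector
theta = rng.normal(size=(3, D))       # parameters of the toy stand-in "network"

def marginal_logp(x_partial, theta):
    """Toy stand-in for the learned marginal network.

    x_partial has entries in {0, 1} for observed positions and -1 for
    positions that are marginalized out; returns an estimate of
    log p(observed part). In the paper this is a neural network."""
    feats = np.stack([x_partial == 0, x_partial == 1, x_partial == -1])
    return float(np.sum(theta * feats))

def self_consistency_penalty(x_partial, i, theta):
    """Squared violation of marginalization self-consistency at position i:
    p(x_partial) should equal the sum over v of p(x_partial with x_i = v)."""
    assert x_partial[i] == -1, "position i must be marginalized out"
    lhs = marginal_logp(x_partial, theta)
    children = []
    for v in (0, 1):
        child = x_partial.copy()
        child[i] = v
        children.append(marginal_logp(child, theta))
    rhs = np.logaddexp(children[0], children[1])
    return (lhs - rhs) ** 2           # driven to zero during training

x = np.array([1, 0, -1, -1, 1, -1, 0, -1])
print(self_consistency_penalty(x, 2, theta))
```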

Representing and Learning Functions Invariant Under Crystallographic Groups

Jun 08, 2023
Ryan P. Adams, Peter Orbanz

Crystallographic groups describe the symmetries of crystals and other repetitive structures encountered in nature and the sciences. These groups include the wallpaper and space groups. We derive linear and nonlinear representations of functions that are (1) smooth and (2) invariant under such a group. The linear representation generalizes the Fourier basis to crystallographically invariant basis functions. We show that such a basis exists for each crystallographic group, that it is orthonormal in the relevant $L_2$ space, and that the standard Fourier basis is recovered as a special case for pure shift groups. The nonlinear representation embeds the orbit space of the group into a finite-dimensional Euclidean space. We show that such an embedding exists for every crystallographic group, and that it factors functions through a generalization of a manifold called an orbifold. We describe algorithms that, given a standardized description of the group, compute the Fourier basis and an embedding map. As examples, we construct crystallographically invariant neural networks, kernel machines, and Gaussian processes.
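
As a concrete illustration of the pure-shift special case mentioned in the abstract, the sketch below builds plane-wave features on the reciprocal lattice of a toy 2D lattice; such features are invariant under translations by lattice vectors. The lattice, truncation, and feature map are illustrative choices, not the paper's general construction for arbitrary crystallographic groups.

```python
import numpy as np

# p1 wallpaper group: pure translations by integer combinations of lattice
# vectors (the columns of A). A function invariant under these shifts is
# lattice-periodic, so it can be expanded in plane waves whose frequencies
# lie on the reciprocal lattice, i.e., the standard Fourier basis.
A = np.array([[1.0, 0.3],
              [0.0, 1.2]])              # toy lattice basis (columns)
B = 2 * np.pi * np.linalg.inv(A).T      # reciprocal lattice basis

def invariant_features(x, max_freq=2):
    """cos/sin features of k.x for k on the reciprocal lattice; these are
    unchanged when x is shifted by any lattice vector."""
    feats = []
    for m in range(-max_freq, max_freq + 1):
        for n in range(-max_freq, max_freq + 1):
            k = B @ np.array([m, n])
            feats.append(np.cos(k @ x))
            feats.append(np.sin(k @ x))
    return np.array(feats)

x = np.array([0.4, 0.7])
shifted = x + A @ np.array([2, -1])     # shift by a lattice vector
print(np.allclose(invariant_features(x), invariant_features(shifted)))  # True
```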

Neuromechanical Autoencoders: Learning to Couple Elastic and Neural Network Nonlinearity

Jan 31, 2023
Deniz Oktay, Mehran Mirramezani, Eder Medina, Ryan P. Adams

Intelligent biological systems are characterized by their embodiment in a complex environment and the intimate interplay between their nervous systems and the nonlinear mechanical properties of their bodies. This coordination, in which the dynamics of the motor system co-evolved to reduce the computational burden on the brain, is referred to as "mechanical intelligence" or "morphological computation". In this work, we seek to develop machine learning analogs of this process, in which we jointly learn the morphology of complex nonlinear elastic solids along with a deep neural network to control it. By using a specialized differentiable simulator of elastic mechanics coupled to conventional deep learning architectures -- which we refer to as neuromechanical autoencoders -- we are able to learn to perform morphological computation via gradient descent. Key to our approach is the use of mechanical metamaterials -- cellular solids, in particular -- as the morphological substrate. Just as deep neural networks provide flexible and massively-parametric function approximators for perceptual and control tasks, cellular solid metamaterials are promising as a rich and learnable space for approximating a variety of actuation tasks. In this work we take advantage of these complementary computational concepts to co-design materials and neural network controls to achieve nonintuitive mechanical behavior. We demonstrate in simulation how it is possible to achieve translation, rotation, and shape matching, as well as a "digital MNIST" task. We additionally manufacture and evaluate one of the designs to verify its real-world behavior.

* ICLR 2023 Spotlight 
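
A minimal sketch of the co-design idea under heavy simplification: a one-line "differentiable simulator" (a linear spring, displacement = actuation / stiffness) is composed with a linear "controller", and the design parameter is optimized jointly with the controller weights by gradient descent. The simulator, task, and hyperparameters are toy assumptions, not the paper's elastic-mechanics simulator or metamaterial parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)
goals = rng.uniform(0.5, 2.0, size=64)     # target displacements

w, b, k = 0.1, 0.0, 5.0                    # controller weights and design (stiffness)
lr = 0.05
for step in range(2000):
    act = w * goals + b                    # controller: goal -> actuation
    disp = act / k                         # "differentiable simulator"
    err = disp - goals
    loss = np.mean(err ** 2)
    # analytic gradients through the simulator and the controller
    dL_ddisp = 2 * err / len(goals)
    gw = np.sum(dL_ddisp * goals / k)
    gb = np.sum(dL_ddisp / k)
    gk = np.sum(dL_ddisp * (-act / k ** 2))
    w, b, k = w - lr * gw, b - lr * gb, k - lr * gk

print(f"final loss {loss:.2e}, learned stiffness k = {k:.3f}")
```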

Meta-PDE: Learning to Solve PDEs Quickly Without a Mesh

Nov 03, 2022
Tian Qin, Alex Beatson, Deniz Oktay, Nick McGreivy, Ryan P. Adams

Partial differential equations (PDEs) are often computationally challenging to solve, and in many settings many related PDEs must be solved either at every timestep or for a variety of candidate boundary conditions, parameters, or geometric domains. We present a meta-learning based method which learns to rapidly solve problems from a distribution of related PDEs. We use meta-learning (MAML and LEAP) to identify initializations for a neural network representation of the PDE solution such that a residual of the PDE can be quickly minimized on a novel task. We apply our meta-solving approach to a nonlinear Poisson's equation, 1D Burgers' equation, and hyperelasticity equations with varying parameters, geometries, and boundary conditions. The resulting Meta-PDE method finds qualitatively accurate solutions to most problems within a few gradient steps; for the nonlinear Poisson and hyperelasticity equations this yields an intermediate-accuracy approximation up to an order of magnitude faster than a baseline finite element analysis (FEA) solver with equivalent accuracy. In comparison to other learned solvers and surrogate models, this meta-learning approach can be trained without supervision from expensive ground-truth data, does not require a mesh, and can even be used when the geometry and topology vary between tasks.
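
A first-order sketch of the recipe under toy assumptions: the PDE is 1D Poisson with homogeneous Dirichlet boundary conditions, the solution is a sine series rather than a neural network, and the outer loop is a Reptile-style update standing in for MAML/LEAP. It illustrates the central point: a learned initialization lets a few gradient steps on the mesh-free collocation residual reach a much lower residual than the same steps from scratch.

```python
import numpy as np

rng = np.random.default_rng(0)
n = np.arange(1, 9)                          # sine modes
xs = np.linspace(0.0, 1.0, 65)[1:-1]         # interior collocation points (no mesh)
S = np.sin(np.outer(xs, n * np.pi))          # S[i, j] = sin(j*pi*x_i)

def source(c):
    return c * np.sin(np.pi * xs)            # task family: f(x) = c*sin(pi*x)

def residual_loss(w, f):
    # u(x) = sum_j -w_j/(j*pi)^2 sin(j*pi*x) satisfies the BCs by construction,
    # so u''(x) = sum_j w_j sin(j*pi*x) and the residual is u'' - f.
    return np.mean((S @ w - f) ** 2)

def adapt(w0, f, inner_steps=5, lr=0.5):
    """A few gradient steps on the collocation residual from init w0."""
    w = w0.copy()
    for _ in range(inner_steps):
        w -= lr * 2 * S.T @ (S @ w - f) / len(xs)
    return w

w_meta = np.zeros(len(n))
for _ in range(300):                         # outer (meta) loop over tasks
    f = source(rng.uniform(1.0, 5.0))        # a new source term = a new task
    w_meta += 0.1 * (adapt(w_meta, f) - w_meta)   # Reptile-style meta update

f_new = source(3.0)
print("5 steps from meta-init:", residual_loss(adapt(w_meta, f_new), f_new))
print("5 steps from zero init:", residual_loss(adapt(np.zeros(len(n)), f_new), f_new))
```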

Multi-fidelity Monte Carlo: a pseudo-marginal approach

Oct 04, 2022
Diana Cai, Ryan P. Adams

Markov chain Monte Carlo (MCMC) is an established approach for uncertainty quantification and propagation in scientific applications. A key challenge in applying MCMC to scientific domains is computation: the target density of interest is often a function of expensive computations, such as a high-fidelity physical simulation, an intractable integral, or a slowly-converging iterative algorithm. Thus, using an MCMC algorithm with an expensive target density becomes impractical, as these expensive computations need to be evaluated at each iteration of the algorithm. In practice, these computations are often approximated via a cheaper, low-fidelity computation, leading to bias in the resulting target density. Multi-fidelity MCMC algorithms combine models of varying fidelities in order to obtain an approximate target density with lower computational cost. In this paper, we describe a class of asymptotically exact multi-fidelity MCMC algorithms for the setting where a sequence of models of increasing fidelity can be computed that approximates the expensive target density of interest. We take a pseudo-marginal MCMC approach for multi-fidelity inference that utilizes a cheaper, randomized-fidelity unbiased estimator of the target fidelity constructed via random truncation of a telescoping series of the low-fidelity sequence of models. Finally, we discuss and evaluate the proposed multi-fidelity MCMC approach on several applications, including log-Gaussian Cox process modeling, Bayesian ODE system identification, PDE-constrained optimization, and Gaussian process regression parameter inference.

* 22 pages, 7 figures 
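
A small sketch of the pseudo-marginal mechanism with a randomly truncated telescoping series. The fidelity sequence below is a toy Gaussian family whose variance approaches the target's from below (chosen so the telescoping corrections stay positive and decay geometrically); the target, truncation distribution, and proposal are illustrative, not the paper's applications.

```python
import numpy as np

rng = np.random.default_rng(0)

def pi_k(x, k):
    """Unnormalized fidelity-k approximation of the target exp(-x^2/2)."""
    v = 1.0 - 0.5 ** (k + 1)        # variance -> 1 as fidelity k -> infinity
    return np.exp(-x ** 2 / (2.0 * v))

def unbiased_estimate(x, p=0.3):
    """Russian-roulette estimate pi_0 + sum_{k<=K} (pi_k - pi_{k-1}) / P(K>=k),
    with K ~ Geometric(p); unbiased for the infinite-fidelity target."""
    K = rng.geometric(p)
    est = pi_k(x, 0)
    for k in range(1, K + 1):
        est += (pi_k(x, k) - pi_k(x, k - 1)) / (1.0 - p) ** (k - 1)
    return est

# Pseudo-marginal Metropolis-Hastings: reuse the stored estimate for the
# current state and only re-estimate at the proposed state.
x, est_x = 0.0, unbiased_estimate(0.0)
samples = []
for _ in range(20000):
    x_prop = x + rng.normal(scale=1.0)
    est_prop = unbiased_estimate(x_prop)
    if rng.uniform() < est_prop / est_x:
        x, est_x = x_prop, est_prop
    samples.append(x)

print("sample mean/var:", np.mean(samples), np.var(samples))  # approx 0 and 1
```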

ProBF: Learning Probabilistic Safety Certificates with Barrier Functions

Dec 24, 2021
Athindran Ramesh Kumar, Sulin Liu, Jaime F. Fisac, Ryan P. Adams, Peter J. Ramadge

Safety-critical applications require controllers/policies that can guarantee safety with high confidence. The control barrier function is a useful tool for guaranteeing safety if we have access to the ground-truth system dynamics. In practice, we have inaccurate knowledge of the system dynamics, which can lead to unsafe behaviors due to unmodeled residual dynamics. Learning the residual dynamics with deterministic machine learning models can prevent unsafe behavior, but can fail when the predictions are imperfect. In this situation, a probabilistic learning method that reasons about the uncertainty of its predictions can help provide robust safety margins. In this work, we use a Gaussian process to model the projection of the residual dynamics onto a control barrier function. We propose a novel optimization procedure to generate safe controls that can guarantee safety with high probability, giving the safety filter the ability to reason about the uncertainty of the GP predictions. We show the efficacy of this method through experiments on Segway and quadrotor simulations. Our proposed probabilistic approach significantly reduces the number of safety violations compared to the deterministic approach with a neural network.

* Presented at NeurIPS 2021 workshop - Safe and Robust Control of Uncertain Systems 
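
An illustrative sketch of the uncertainty-aware filtering step under toy assumptions: a GP is fit to the projection of the residual dynamics onto the barrier gradient for a 1D system, and the nominal control is minimally modified so the barrier condition holds with a margin of beta times the posterior standard deviation. The system, kernel, and closed-form 1D "filter" are stand-ins for the paper's controllers and optimization procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy system: xdot = u + d(x), with barrier h(x) = 1 - x^2 (safe set |x| <= 1).
d_true = lambda x: 0.5 * np.sin(3 * x)          # unknown residual dynamics
grad_h = lambda x: -2.0 * x

# Training data: noisy observations of the projection grad_h(x) * d(x).
X = rng.uniform(-1.0, 1.0, size=20)
y = grad_h(X) * d_true(X) + 0.01 * rng.normal(size=X.shape)

def kernel(a, b, ell=0.3):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

K_inv = np.linalg.inv(kernel(X, X) + 1e-4 * np.eye(len(X)))
alpha_w = K_inv @ y

def gp_posterior(xq):
    ks = kernel(np.atleast_1d(float(xq)), X)     # (1, 20)
    mu = float(ks @ alpha_w)
    var = 1.0 - float(ks @ K_inv @ ks.T)
    return mu, np.sqrt(max(var, 1e-12))

def safe_control(x, u_nom, alpha=1.0, beta=2.0):
    """Minimally modify u_nom so that, with a beta-sigma margin on the GP,
    grad_h(x) * u + mu(x) - beta*sigma(x) >= -alpha * h(x)."""
    mu, sigma = gp_posterior(x)
    a = grad_h(x)                                # coefficient of u in hdot
    slack = mu - beta * sigma + alpha * (1.0 - x ** 2)
    if a * u_nom + slack >= 0.0 or a == 0.0:
        return u_nom                             # nominal control already safe
    return -slack / a                            # project onto the constraint

print(safe_control(x=0.9, u_nom=1.0))            # pulls the control back toward safety
```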

Vitruvion: A Generative Model of Parametric CAD Sketches

Sep 29, 2021
Ari Seff, Wenda Zhou, Nick Richardson, Ryan P. Adams

Parametric computer-aided design (CAD) tools are the predominant way that engineers specify physical structures, from bicycle pedals to airplanes to printed circuit boards. The key characteristic of parametric CAD is that design intent is encoded not only via geometric primitives, but also by parameterized constraints between the elements. This relational specification can be viewed as the construction of a constraint program, allowing edits to coherently propagate to other parts of the design. Machine learning offers the intriguing possibility of accelerating the design process via generative modeling of these structures, enabling new tools such as autocompletion, constraint inference, and conditional synthesis. In this work, we present such an approach to generative modeling of parametric CAD sketches, which constitute the basic computational building blocks of modern mechanical design. Our model, trained on real-world designs from the SketchGraphs dataset, autoregressively synthesizes sketches as sequences of primitives (with initial coordinates) and constraints that reference back to the sampled primitives. As samples from the model match the constraint graph representation used in standard CAD software, they may be directly imported, solved, and edited according to downstream design tasks. In addition, we condition the model on various contexts, including partial sketches (primers) and images of hand-drawn sketches. Evaluation of the proposed approach demonstrates its ability to synthesize realistic CAD sketches and its potential to aid the mechanical design workflow.
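
To make the representation concrete, the sketch below shows a toy primitives-then-constraints data structure and a simplified flattening into a single token sequence of the kind an autoregressive model can be trained on. The token vocabulary and quantization here are illustrative, not Vitruvion's actual scheme.

```python
# A CAD sketch: parameterized primitives, then constraints that refer back
# to earlier primitives by index.
sketch = {
    "primitives": [
        {"type": "line", "start": (0.0, 0.0), "end": (1.0, 0.0)},   # index 0
        {"type": "line", "start": (1.0, 0.0), "end": (1.0, 1.0)},   # index 1
        {"type": "circle", "center": (0.5, 0.5), "radius": 0.2},    # index 2
    ],
    "constraints": [
        {"type": "coincident",    "refs": [0, 1]},   # shared endpoint
        {"type": "perpendicular", "refs": [0, 1]},
        {"type": "horizontal",    "refs": [0]},
    ],
}

def flatten(sketch, grid=64):
    """Flatten a sketch into one token sequence: primitives (with quantized
    coordinates) first, then constraints referencing primitive indices."""
    tokens = []
    for p in sketch["primitives"]:
        tokens.append(("PRIM", p["type"]))
        for key, val in p.items():
            if key == "type":
                continue
            coords = val if isinstance(val, tuple) else (val,)
            tokens += [("COORD", int(round(c * (grid - 1)))) for c in coords]
    for c in sketch["constraints"]:
        tokens.append(("CONSTRAINT", c["type"]))
        tokens += [("REF", r) for r in c["refs"]]
    return tokens

print(flatten(sketch)[:12])
```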

Why Generalization in RL is Difficult: Epistemic POMDPs and Implicit Partial Observability

Jul 13, 2021
Dibya Ghosh, Jad Rahme, Aviral Kumar, Amy Zhang, Ryan P. Adams, Sergey Levine

Generalization is a central challenge for the deployment of reinforcement learning (RL) systems in the real world. In this paper, we show that the sequential structure of the RL problem necessitates new approaches to generalization beyond the well-studied techniques used in supervised learning. While supervised learning methods can generalize effectively without explicitly accounting for epistemic uncertainty, we show that, perhaps surprisingly, this is not the case in RL. We show that generalization to unseen test conditions from a limited number of training conditions induces implicit partial observability, effectively turning even fully-observed MDPs into POMDPs. Informed by this observation, we recast the problem of generalization in RL as solving the induced partially observed Markov decision process, which we call the epistemic POMDP. We demonstrate the failure modes of algorithms that do not appropriately handle this partial observability, and suggest a simple ensemble-based technique for approximately solving the partially observed problem. Empirically, we demonstrate that our simple algorithm derived from the epistemic POMDP achieves significant gains in generalization over current methods on the Procgen benchmark suite.

* First two authors contributed equally 
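
A tiny numerical illustration of the core observation: when the test environment is only known up to a posterior, the best action under that posterior can differ from the optimal action of every individual MDP, which is exactly the sense in which generalization induces partial observability. The two candidate environments, the hedging action, and the 50/50 posterior are toy assumptions.

```python
import numpy as np

# Two environments that are both consistent with limited training data,
# plus a third "hedging" action that is mediocre in both.
rewards = {
    "env_A": np.array([1.0, 0.0, 0.7]),   # per-action reward in environment A
    "env_B": np.array([0.0, 1.0, 0.7]),   # per-action reward in environment B
}
posterior = {"env_A": 0.5, "env_B": 0.5}  # epistemic uncertainty after training

for env, r in rewards.items():
    print(f"optimal action in {env}: {int(np.argmax(r))}")

expected = sum(posterior[env] * rewards[env] for env in rewards)
print("expected rewards under the posterior:", expected)
print("Bayes-optimal action:", int(np.argmax(expected)))   # the hedging action
```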

Amortized Synthesis of Constrained Configurations Using a Differentiable Surrogate

Jun 16, 2021
Xingyuan Sun, Tianju Xue, Szymon M. Rusinkiewicz, Ryan P. Adams

In design, fabrication, and control problems, we are often faced with the task of synthesis, in which we must generate an object or configuration that satisfies a set of constraints while maximizing one or more objective functions. The synthesis problem is typically characterized by a physical process in which many different realizations may achieve the goal. This many-to-one map presents challenges to the supervised learning of feed-forward synthesis, as the set of viable designs may have a complex structure. In addition, the non-differentiable nature of many physical simulations prevents direct optimization. We address both of these problems with a two-stage neural network architecture that we may consider to be an autoencoder. We first learn the decoder: a differentiable surrogate that approximates the many-to-one physical realization process. We then learn the encoder, which maps from goal to design, while using the fixed decoder to evaluate the quality of the realization. We evaluate the approach on two case studies: extruder path planning in additive manufacturing and constrained soft robot inverse kinematics. We compare our approach to direct optimization of design using the learned surrogate, and to supervised learning of the synthesis problem. We find that our approach produces higher quality solutions than supervised learning, while being competitive in quality with direct optimization, at a greatly reduced computational cost.

* 16 pages, 9 figures 
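
A compact sketch of the two-stage recipe on a toy many-to-one process (realization = sin(design)): first fit a differentiable surrogate "decoder" by regression on sampled designs, then train an "encoder" from goal to design with the decoder frozen. The feature maps, ranges, and training loop are illustrative stand-ins for the paper's extruder-path and soft-robot problems.

```python
import numpy as np

rng = np.random.default_rng(0)

def physical(d):                      # ground-truth realization (many-to-one)
    return np.sin(d)

centers = np.linspace(-np.pi, np.pi, 25)
ell = 0.4
feat = lambda d: np.exp(-0.5 * (np.atleast_1d(d)[:, None] - centers) ** 2 / ell ** 2)
dfeat = lambda d: -(np.atleast_1d(d)[:, None] - centers) / ell ** 2 * feat(d)

# Stage 1: fit the decoder surrogate by least squares on sampled designs.
D = rng.uniform(-np.pi, np.pi, 500)
w_dec, *_ = np.linalg.lstsq(feat(D), physical(D), rcond=None)
decoder = lambda d: feat(d) @ w_dec
ddecoder = lambda d: dfeat(d) @ w_dec          # derivative w.r.t. the design

# Stage 2: train the encoder (linear in goal features) with the decoder fixed.
g_centers = np.linspace(-1.0, 1.0, 15)
g_feat = lambda g: np.exp(-0.5 * (np.atleast_1d(g)[:, None] - g_centers) ** 2 / 0.2 ** 2)
v_enc = np.zeros(len(g_centers))
lr = 0.05
for _ in range(3000):
    g = rng.uniform(-0.9, 0.9, 32)
    d = g_feat(g) @ v_enc                      # encoder: goal -> design
    err = decoder(d) - g                       # quality of the realization
    grad = (2 * err * ddecoder(d))[:, None] * g_feat(g)
    v_enc -= lr * grad.mean(axis=0)

g_test = np.array([0.5])
d_test = g_feat(g_test) @ v_enc
print("goal 0.5 -> design", d_test, "-> realized", physical(d_test))
```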

Active multi-fidelity Bayesian online changepoint detection

Mar 26, 2021
Gregory W. Gundersen, Diana Cai, Chuteng Zhou, Barbara E. Engelhardt, Ryan P. Adams

Online algorithms for detecting changepoints, or abrupt shifts in the behavior of a time series, are often deployed with limited resources, e.g., to edge computing settings such as mobile phones or industrial sensors. In these scenarios it may be beneficial to trade the cost of collecting an environmental measurement against the quality or "fidelity" of this measurement and how the measurement affects changepoint estimation. For instance, one might decide between inertial measurements or GPS to determine changepoints for motion. A Bayesian approach to changepoint detection is particularly appealing because we can represent our posterior uncertainty about changepoints and make active, cost-sensitive decisions about data fidelity to reduce this posterior uncertainty. Moreover, the total cost could be dramatically lowered through active fidelity switching, while remaining robust to changes in data distribution. We propose a multi-fidelity approach that makes cost-sensitive decisions about which data fidelity to collect based on maximizing information gain with respect to changepoints. We evaluate this framework on synthetic, video, and audio data and show that this information-based approach results in accurate predictions while reducing total cost.
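
A toy version of the cost-sensitive fidelity decision: given the current posterior probability that a change has occurred, compute the mutual information between the changepoint indicator and each fidelity's (binary) observation, and trade it off against that fidelity's cost. The sensor models and cost weight below are illustrative, not the paper's Bayesian online changepoint detection machinery.

```python
import numpy as np

def entropy(p):
    p = np.clip(np.asarray(p, dtype=float), 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def info_gain(p_change, tpr, fpr):
    """Mutual information (bits) between the changepoint indicator and a
    binary sensor reading with hit rate tpr and false-alarm rate fpr."""
    p_obs = p_change * tpr + (1 - p_change) * fpr
    h_obs_given_c = p_change * entropy(tpr) + (1 - p_change) * entropy(fpr)
    return entropy(p_obs) - h_obs_given_c

fidelities = {
    "cheap (e.g. inertial)": {"tpr": 0.70, "fpr": 0.30, "cost": 1.0},
    "costly (e.g. GPS)":     {"tpr": 0.95, "fpr": 0.05, "cost": 5.0},
}
p_change, lam = 0.3, 0.05          # current changepoint posterior and cost weight
for name, f in fidelities.items():
    gain = info_gain(p_change, f["tpr"], f["fpr"])
    utility = gain - lam * f["cost"]
    print(f"{name:22s}  info gain {gain:.3f} bits, cost-adjusted utility {utility:.3f}")
```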
