Radu Balan

Coupled Multiwavelet Neural Operator Learning for Coupled Partial Differential Equations

Mar 04, 2023
Xiongye Xiao, Defu Cao, Ruochen Yang, Gaurav Gupta, Gengshuo Liu, Chenzhong Yin, Radu Balan, Paul Bogdan

Coupled partial differential equations (PDEs) are central to modeling the complex dynamics of many physical processes. Recently, neural operators have shown the ability to solve PDEs by learning the integral kernel directly in Fourier/wavelet space; for coupled PDEs, the difficulty therefore lies in handling the coupled mappings between the solution functions. To this end, we propose a \textit{coupled multiwavelet neural operator} (CMWNO) learning scheme that decouples the coupled integral kernels during the multiwavelet decomposition and reconstruction procedures in the wavelet space. The proposed model achieves significantly higher accuracy than previous learning-based solvers on coupled PDEs, including the Gray-Scott (GS) equations and a non-local mean field game (MFG) problem. In our experiments, the proposed model achieves a $2\times$ to $4\times$ improvement in relative $L^2$ error over the best results from state-of-the-art models.

* International Conference on Learning Representations (ICLR 2023)
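
The decompose/operate/reconstruct pattern behind the method can be illustrated with a toy sketch (not the authors' architecture): a one-level Haar transform stands in for the multiwavelet decomposition, and random matrices with cross terms stand in for the learned coupled kernels acting between the two fields.

```python
import numpy as np

def haar_decompose(u):
    """One level of the Haar transform: averages (coarse) and differences (detail)."""
    s = (u[0::2] + u[1::2]) / np.sqrt(2)
    d = (u[0::2] - u[1::2]) / np.sqrt(2)
    return s, d

def haar_reconstruct(s, d):
    """Exact inverse of haar_decompose."""
    u = np.empty(2 * s.size)
    u[0::2] = (s + d) / np.sqrt(2)
    u[1::2] = (s - d) / np.sqrt(2)
    return u

rng = np.random.default_rng(0)
n = 64
u = np.sin(np.linspace(0, 2 * np.pi, n))   # two coupled fields
v = np.cos(np.linspace(0, 2 * np.pi, n))

# Decompose both fields into the wavelet coefficient space.
su, du = haar_decompose(u)
sv, dv = haar_decompose(v)

# Stand-in "kernels": random matrices acting on the coarse coefficients,
# with cross terms coupling u's and v's coefficients (these would be learned).
Kuu, Kuv = rng.normal(size=(n // 2, n // 2)), rng.normal(size=(n // 2, n // 2))
Kvu, Kvv = rng.normal(size=(n // 2, n // 2)), rng.normal(size=(n // 2, n // 2))
su_new = Kuu @ su + Kuv @ sv   # u's update sees v's coefficients
sv_new = Kvu @ su + Kvv @ sv   # and vice versa

u_out = haar_reconstruct(su_new, du)
v_out = haar_reconstruct(sv_new, dv)

# Sanity check: decomposition followed by reconstruction is exact.
assert np.allclose(haar_reconstruct(*haar_decompose(u)), u)
```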

VQ-Flows: Vector Quantized Local Normalizing Flows

Mar 22, 2022
Sahil Sidheekh, Chris B. Dock, Tushar Jain, Radu Balan, Maneesh K. Singh

Normalizing flows provide an elegant approach to generative modeling, allowing efficient sampling and exact density evaluation of unknown data distributions. However, current techniques are significantly limited in expressivity when the data distribution is supported on a low-dimensional manifold or has a non-trivial topology. We introduce a novel statistical framework for learning a mixture of local normalizing flows as "chart maps" over the data manifold. Our framework augments the expressivity of recent approaches while preserving the signature property of normalizing flows: exact density evaluation. We learn a suitable atlas of charts for the data manifold via a vector-quantized auto-encoder (VQ-AE) and the distributions over them using a conditional flow. We validate experimentally that our probabilistic framework enables existing approaches to better model data distributions over complex manifolds.
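
The exact-density property survives the mixture construction via the change-of-variables formula. A minimal 1-D sketch, with two hypothetical affine "chart" flows and a standard normal base distribution (all parameters invented for illustration):

```python
import numpy as np

def gauss(z):
    """Standard normal density."""
    return np.exp(-z ** 2 / 2) / np.sqrt(2 * np.pi)

# Hypothetical parameters: two affine "local flows" and their mixture weights.
mus = np.array([-2.0, 3.0])
sigmas = np.array([0.5, 1.5])
weights = np.array([0.4, 0.6])

def mixture_density(x):
    """Exact density via change of variables:
    p(x) = sum_k w_k * p_z(f_k(x)) * |df_k/dx|, with f_k(x) = (x - mu_k) / sigma_k."""
    z = (x[:, None] - mus) / sigmas   # forward pass of each flow
    jac = 1.0 / sigmas                # |df_k/dx| for an affine flow
    return np.sum(weights * gauss(z) * jac, axis=1)

# The mixture still integrates to 1 (trapezoid rule on a wide grid).
xs = np.linspace(-15.0, 15.0, 20001)
ys = mixture_density(xs)
total = np.sum((ys[1:] + ys[:-1]) / 2 * np.diff(xs))
print(round(total, 4))  # → 1.0
```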


Permutation Invariant Representations with Applications to Graph Deep Learning

Mar 14, 2022
Radu Balan, Naveed Haghani, Maneesh Singh

This paper presents two Euclidean embeddings of the quotient space generated by matrices identified modulo arbitrary row permutations. The original application is in deep learning on graphs, where the learning task is invariant to node relabeling. Two embedding schemes are introduced, one based on sorting and the other based on algebras of multivariate polynomials. While both embeddings exhibit computational complexity exponential in the problem size, the sorting-based embedding is globally bi-Lipschitz and admits a low-dimensional target space. Additionally, an almost-everywhere-injective scheme can be implemented with minimal redundancy and low computational cost. In turn, this proves that almost any classifier can be implemented with an arbitrarily small loss of performance. Numerical experiments are carried out on two data sets: a chemical compound data set (QM9) and a protein data set (PROTEINS).

* 43 pages, 13 figures, 16 tables 
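
A crude instance of a sorting-based invariant (a canonical form, not the paper's bi-Lipschitz construction): sorting the rows lexicographically yields a representative that is unchanged by any row permutation.

```python
import numpy as np

def sort_embed(X):
    """Canonical representative of X modulo row permutations:
    sort the rows lexicographically (first column is the primary key)."""
    # np.lexsort uses its LAST key as primary, so feed columns right-to-left.
    order = np.lexsort(X.T[::-1])
    return X[order]

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 3))
P = rng.permutation(5)

# The representative is invariant to relabeling the rows.
assert np.allclose(sort_embed(X), sort_embed(X[P]))
```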

Convergence Guarantees for Deep Epsilon Greedy Policy Learning

Dec 02, 2021
Michael Rawson, Radu Balan

Policy learning is a quickly growing area. As robotics and computers increasingly control day-to-day life, their error rates need to be minimized and controlled. Many policy learning methods exist, with provable error rates that accompany them. We prove an error (regret) bound and the convergence of the Deep Epsilon Greedy method, which chooses actions using a neural network's predictions. In experiments on the real-world MNIST dataset, we construct a nonlinear reinforcement learning problem and observe that, under both high and low noise, some methods converge and some do not, in agreement with our convergence proof.
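
The action-selection rule itself is simple to state; a minimal sketch with a stand-in array for the network's per-action value predictions and the common $1/t$ exploration schedule:

```python
import numpy as np

def epsilon_greedy_action(q_values, epsilon, rng):
    """With probability epsilon explore uniformly, otherwise exploit the argmax."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

rng = np.random.default_rng(2)

# Stand-in for a neural network's predicted values for three actions.
q = np.array([0.1, 0.9, 0.3])

# With epsilon = 0 the choice is purely greedy.
assert epsilon_greedy_action(q, 0.0, rng) == 1

# A decaying schedule epsilon_t = 1/t is a common choice in convergence arguments:
# exploration happens only O(log T) times, so the greedy action dominates.
counts = np.zeros(3)
for t in range(1, 2001):
    counts[epsilon_greedy_action(q, 1.0 / t, rng)] += 1
print(int(np.argmax(counts)))  # → 1
```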


An Exact Hypergraph Matching Algorithm for Nuclear Identification in Embryonic Caenorhabditis elegans

Apr 20, 2021
Andrew Lauziere, Ryan Christensen, Hari Shroff, Radu Balan

Finding an optimal correspondence between point sets is a common task in computer vision. Existing techniques assume relatively simple relationships among points and do not guarantee an optimal match. We introduce an algorithm capable of exactly solving point-set matching by modeling the task as hypergraph matching. The algorithm extends the classical branch-and-bound paradigm to select and aggregate vertices under a proposed decomposition of the multilinear objective function. The methodology is motivated by Caenorhabditis elegans, a model organism used frequently in developmental biology and neurobiology. The C. elegans embryo contains seam cells that act as fiducial markers, allowing the identification of other nuclei during embryo development. The proposed algorithm identifies seam cells more accurately than established point-set matching methods, while providing a framework for approaching other similarly complex point-set matching tasks.

* 20 pages, 11 figures 
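
The flavor of exact matching can be seen in a brute-force toy: enumerate all assignments and pick the one minimizing a pairwise distance-consistency cost (a stand-in for the paper's multilinear objective; the paper's branch and bound exists precisely to avoid this exhaustive enumeration).

```python
import numpy as np
from itertools import permutations

def exact_match(A, B):
    """Exhaustively find the assignment A[i] -> B[p[i]] that best preserves
    pairwise distances (a toy stand-in for a higher-order matching objective)."""
    dA = np.linalg.norm(A[:, None] - A[None, :], axis=-1)
    dB = np.linalg.norm(B[:, None] - B[None, :], axis=-1)
    best, best_cost = None, np.inf
    for perm in permutations(range(len(A))):
        p = list(perm)
        cost = np.sum((dA - dB[np.ix_(p, p)]) ** 2)
        if cost < best_cost:
            best, best_cost = p, cost
    return best, best_cost

rng = np.random.default_rng(3)
A = rng.normal(size=(5, 3))
perm_true = rng.permutation(5)
B = A[perm_true] + 0.01 * rng.normal(size=(5, 3))  # relabeled, noisy copy of A

p, cost = exact_match(A, B)
# The recovered p inverts the hidden relabeling: B[p[i]] is the match of A[i].
assert np.array_equal(perm_true[p], np.arange(5))
```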

On Lipschitz Bounds of General Convolutional Neural Networks

Aug 04, 2018
Dongmian Zou, Radu Balan, Maneesh Singh

Many convolutional neural networks (CNNs) have a feed-forward structure. In this paper, a linear program that estimates the Lipschitz bound of such CNNs is proposed. Several CNNs, including scattering networks, AlexNet, and GoogLeNet, are studied numerically and compared to the theoretical bounds. Next, concentration inequalities for the output distribution under a stationary random input signal are established in terms of the Lipschitz bound. The Lipschitz bound is further used to develop a nonlinear discriminant analysis designed to measure the separation between features of different classes.

* 26 pages, 20 figures 
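
A cruder estimate than the paper's linear program is the product of the layers' spectral norms, which already bounds any feed-forward network with 1-Lipschitz activations. A sketch with small stand-in dense layers (for a convolutional layer one would take the operator norm of the convolution instead):

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in fully connected layers with ReLU activations.
W1 = rng.normal(size=(32, 16)) / 4
W2 = rng.normal(size=(8, 32)) / 4
relu = lambda x: np.maximum(x, 0)
net = lambda x: W2 @ relu(W1 @ x)

# ReLU is 1-Lipschitz, so the product of spectral norms bounds the network:
# ||net(x) - net(y)|| <= ||W2||_2 * ||W1||_2 * ||x - y||.
bound = np.linalg.norm(W1, 2) * np.linalg.norm(W2, 2)

# Empirically, every pair of inputs respects the bound.
for _ in range(100):
    x, y = rng.normal(size=16), rng.normal(size=16)
    ratio = np.linalg.norm(net(x) - net(y)) / np.linalg.norm(x - y)
    assert ratio <= bound + 1e-9
```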

Learning flexible representations of stochastic processes on graphs

Mar 13, 2018
Addison Bohannon, Brian Sadler, Radu Balan

Graph convolutional networks adapt the architecture of convolutional neural networks to learn rich representations of data supported on arbitrary graphs, by replacing the convolution operations with graph-dependent linear operations. However, these operations are designed for scalar functions supported on undirected graphs. We propose a class of linear operations for stochastic (time-varying) processes on directed (or undirected) graphs to be used in graph convolutional networks. We parameterize such linear operations using functional calculus to achieve arbitrarily low learning complexity. The proposed approach is shown to model richer behaviors and to display greater flexibility in learning representations than product-graph methods.
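
The functional-calculus parameterization can be sketched as a matrix polynomial in a (possibly non-symmetric) graph shift operator, applied to each time slice of the process; only the polynomial coefficients would be learned, so the parameter count is independent of the graph size.

```python
import numpy as np

rng = np.random.default_rng(5)
n, T = 6, 10  # graph size, number of time steps

# A directed graph's (possibly non-symmetric) adjacency as shift operator,
# normalized for numerical stability.
A = (rng.random((n, n)) < 0.3).astype(float)
A /= max(1.0, np.linalg.norm(A, 2))

def graph_filter(X, h, A):
    """Apply the polynomial p(A) = sum_k h[k] * A^k to each column of X (n x T).
    Only len(h) coefficients are needed, independent of n."""
    out = np.zeros_like(X)
    Ak_X = X.copy()
    for hk in h:
        out += hk * Ak_X
        Ak_X = A @ Ak_X
    return out

X = rng.normal(size=(n, T))     # a time-varying process: one signal per time step
h = np.array([0.5, 0.3, 0.2])   # filter taps (would be learned)
Y = graph_filter(X, h, A)

# Shift-invariance: a polynomial in A commutes with A.
assert np.allclose(A @ graph_filter(X, h, A), graph_filter(A @ X, h, A))
```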


Lipschitz Properties for Deep Convolutional Networks

Jan 18, 2017
Radu Balan, Maneesh Singh, Dongmian Zou

In this paper we discuss the stability properties of convolutional neural networks (CNNs), which are widely used in machine learning, mainly as feature extractors in classification. Ideally, we expect similar features when the inputs come from the same class; that is, a deformation of the input signal should produce only a small change in the feature vector. This can be established mathematically, and the key step is deriving the Lipschitz properties of the network. Further, we establish that these stability results extend to more general networks. We give a formula for computing the Lipschitz bound and compare it with other methods to show that it is closer to the optimal value.

* 25 pages, 10 figures 
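
The stability claim can be checked empirically on a toy feature extractor (an illustrative network, not one studied in the paper): small input deformations move the features by at most the Lipschitz bound times the deformation size.

```python
import numpy as np

rng = np.random.default_rng(6)

# A toy two-layer "feature extractor" with 1-Lipschitz ReLU nonlinearities.
W1 = rng.normal(size=(20, 10)) / 5
W2 = rng.normal(size=(20, 20)) / 5
features = lambda x: np.maximum(W2 @ np.maximum(W1 @ x, 0), 0)

# Product of spectral norms: a valid (if loose) Lipschitz bound.
lip_bound = np.linalg.norm(W1, 2) * np.linalg.norm(W2, 2)

# Stability: measure how far small deformations move the feature vector.
x = rng.normal(size=10)
worst = 0.0
for _ in range(200):
    delta = 1e-3 * rng.normal(size=10)
    worst = max(worst, np.linalg.norm(features(x + delta) - features(x))
                       / np.linalg.norm(delta))
print(worst <= lip_bound)  # → True
```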

Phase Retrieval using Lipschitz Continuous Maps

Mar 10, 2014
Radu Balan, Dongmian Zou

In this note we prove that reconstruction from magnitudes of frame coefficients (the so-called "phase retrieval problem") can be performed using Lipschitz continuous maps. Specifically, we show that when the nonlinear analysis map $\alpha:{\mathcal H}\rightarrow\mathbb{R}^m$ is injective, with $(\alpha(x))_k=|\langle x,f_k\rangle|^2$, where $\{f_1,\ldots,f_m\}$ is a frame for the Hilbert space ${\mathcal H}$, then there exists a left inverse map $\omega:\mathbb{R}^m\rightarrow {\mathcal H}$ that is Lipschitz continuous. Additionally, we obtain the Lipschitz constant of this inverse map in terms of the lower Lipschitz constant of $\alpha$. Surprisingly, the increase in Lipschitz constant is independent of the space dimension and of the frame redundancy.

* 12 pages, 1 figure 
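
The analysis map $\alpha$ is easy to write down, and its invariance under a global phase shows why reconstruction is only possible up to phase (frame and input below are random, for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
n, m = 4, 10  # space dimension and frame cardinality (m > n: redundant)

# A random complex frame, one frame vector per row.
F = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))

def alpha(x):
    """Nonlinear analysis map: squared magnitudes of the frame coefficients."""
    return np.abs(F @ x) ** 2

x = rng.normal(size=n) + 1j * rng.normal(size=n)

# alpha discards the global phase, so any left inverse recovers x only up to
# a unimodular factor e^{i theta}.
theta = 1.23
assert np.allclose(alpha(x), alpha(np.exp(1j * theta) * x))
```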

Stability of Phase Retrievable Frames

Aug 25, 2013
Radu Balan

In this paper we study the property of phase retrievability by redundant systems of vectors under perturbations of the frame set. Specifically, we show that if a set $\mathcal{F}$ of $m$ vectors in a complex Hilbert space of dimension $n$ allows for vector reconstruction from the magnitudes of its coefficients, then there is a perturbation bound $\rho$ so that any frame set within $\rho$ of $\mathcal{F}$ has the same property. In particular, this proves that the recent construction in \cite{BH13} is stable under perturbations. By the same token, we reduce the critical cardinality conjectured in \cite{BCMN13a} to proving a stability result for non-phase-retrievable frames.

* 13 pages, presented at SPIE 2013 conference 