Nima Dehmamy

Latent Space Symmetry Discovery

Sep 29, 2023
Jianke Yang, Nima Dehmamy, Robin Walters, Rose Yu

Equivariant neural networks require explicit knowledge of the symmetry group. Automatic symmetry discovery methods aim to relax this constraint and learn invariance and equivariance from data. However, existing symmetry discovery methods are limited to linear symmetries in their search space and cannot handle the complexity of symmetries in real-world, often high-dimensional data. We propose a novel generative model, Latent LieGAN (LaLiGAN), which can discover nonlinear symmetries from data. It learns a mapping from data to a latent space where the symmetries become linear and simultaneously discovers symmetries in the latent space. Theoretically, we show that our method can express any nonlinear symmetry under certain conditions. Experimentally, our method can capture the intrinsic symmetry in high-dimensional observations, which results in a well-structured latent space that is useful for other downstream tasks. We demonstrate the use cases for LaLiGAN in improving equation discovery and long-term forecasting for various dynamical systems.
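
A minimal numerical sketch of the core idea, with illustrative names only (the real encoder is a learned neural network, not this toy map): a nonlinear data-space symmetry can be written as decode, rotate, encode, so that the symmetry acts linearly in latent space.

```python
import numpy as np

def phi(x):
    # Toy invertible "encoder" (componentwise cube root); LaLiGAN would
    # learn this mapping from data.
    return np.cbrt(x)

def phi_inv(z):
    return z ** 3

# A linear latent symmetry of the kind discovered in latent space:
# a 90-degree rotation.
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])

def nonlinear_symmetry(x):
    # Induced data-space transformation: nonlinear in x, but linear
    # after mapping into the latent space.
    return phi_inv(R @ phi(x))

x = np.array([8.0, 1.0])
x_t = nonlinear_symmetry(x)
# phi(x_t) coincides with R @ phi(x): the symmetry is linear in latent space.
```

The toy encoder is invertible by construction; the point is only that a simple linear latent action induces a genuinely nonlinear transformation of the observations.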


Generative Adversarial Symmetry Discovery

Feb 08, 2023
Jianke Yang, Robin Walters, Nima Dehmamy, Rose Yu

Despite the success of equivariant neural networks in scientific applications, they require knowing the symmetry group a priori. In practice, however, it may be difficult to know the right symmetry to use as an inductive bias, and enforcing the wrong symmetry could hurt performance. In this paper, we propose a framework, LieGAN, to automatically discover equivariances from a dataset using a paradigm akin to generative adversarial training. Specifically, a generator learns a group of transformations applied to the data which preserves the original distribution and fools the discriminator. LieGAN represents symmetry as an interpretable Lie algebra basis and can discover various symmetries, such as the rotation group $\mathrm{SO}(n)$ and the restricted Lorentz group $\mathrm{SO}(1,3)^+$, in trajectory prediction and top quark tagging tasks. The learned symmetry can also be readily used in several existing equivariant neural networks to improve accuracy and generalization in prediction.
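
As a hedged sketch of the generator's mechanism (not the paper's implementation): given a learned Lie algebra basis, a group transformation is obtained by sampling coefficients and exponentiating. Here we hard-code the single so(2) generator that corresponds to 2D rotations, and use the closed-form exponential for that generator.

```python
import numpy as np

# A learned Lie algebra basis; here one so(2) generator, the kind of basis
# a symmetry-discovery GAN would recover when the data symmetry is rotation.
basis = [np.array([[0.0, -1.0],
                   [1.0,  0.0]])]

def sample_group_element(basis, rng):
    # Draw an algebra coefficient and exponentiate it into a group element.
    # For the so(2) generator L the matrix exponential has the closed form
    # expm(w * L) = cos(w) * I + sin(w) * L.
    w = rng.normal()
    L = basis[0]
    return np.cos(w) * np.eye(2) + np.sin(w) * L

rng = np.random.default_rng(0)
g = sample_group_element(basis, rng)
# g is a rotation: orthogonal with determinant 1.
```

In the general setting the generator exponentiates a weighted sum of several basis elements; the single-generator case keeps the sketch self-contained.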


Symmetries, flat minima, and the conserved quantities of gradient flow

Oct 31, 2022
Bo Zhao, Iordan Ganev, Robin Walters, Rose Yu, Nima Dehmamy

Empirical studies of the loss landscape of deep networks have revealed that many local minima are connected through low-loss valleys. Ensemble models sampling different parts of a low-loss valley have reached SOTA performance. Yet, little is known about the theoretical origin of such valleys. We present a general framework for finding continuous symmetries in the parameter space, which carve out low-loss valleys. Importantly, we introduce a novel set of nonlinear, data-dependent symmetries for neural networks. These symmetries can transform a trained model such that it performs similarly on new samples. We then show that conserved quantities associated with linear symmetries can be used to define coordinates along low-loss valleys. The conserved quantities reveal that, with common initialization methods, gradient flow explores only a small part of the global minimum. By relating conserved quantities to the convergence rate and sharpness of the minimum, we provide insights on how initialization impacts convergence and generalizability. We also find the nonlinear action to be viable for ensemble building to improve robustness under certain adversarial attacks.
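
A minimal illustration of a linear parameter-space symmetry of the kind discussed above (a sketch, not the paper's construction): for a two-layer linear network, any invertible matrix acting between the layers leaves the network function, and hence the loss, unchanged, so the orbit of a minimum is a flat valley.

```python
import numpy as np

# Two-layer linear network f(x) = W2 @ W1 @ x. Any invertible g gives a
# loss-preserving transformation (W1, W2) -> (g @ W1, W2 @ inv(g)).
rng = np.random.default_rng(1)
W1 = rng.normal(size=(3, 4))
W2 = rng.normal(size=(2, 3))
x = rng.normal(size=(4, 5))

g = rng.normal(size=(3, 3)) + 3.0 * np.eye(3)   # generically invertible
W1_t = g @ W1
W2_t = W2 @ np.linalg.inv(g)
# The network output, and therefore any loss computed from it, is unchanged.
```

Nonlinearities break this full GL symmetry, which is why the data-dependent nonlinear symmetries in the paper require a more careful construction.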

* Preliminary version; comments welcome 

Faster Optimization on Sparse Graphs via Neural Reparametrization

May 26, 2022
Nima Dehmamy, Csaba Both, Jianzhi Long, Rose Yu

In mathematical optimization, second-order methods such as Newton's method generally converge faster than first-order methods, but they require the inverse of the Hessian and are hence computationally expensive. However, we discover that on sparse graphs, graph neural networks (GNNs) can implement an efficient quasi-Newton method that can speed up optimization by a factor of 10-100x. Our method, neural reparametrization, expresses the optimization parameters as the output of a GNN to reshape the optimization landscape. Using a precomputed Hessian as the propagation rule, the GNN can effectively utilize second-order information, reaching an effect similar to adaptive gradient methods. As our method solves optimization through architecture design, it can be used in conjunction with any optimizer, such as Adam or RMSProp. We show the application of our method on scientifically relevant problems including heat diffusion, synchronization, and persistent homology.
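
To see why reparametrization can act like a quasi-Newton method, note that gradient descent on theta with x = h(theta) moves x by roughly -lr * (J @ J.T) @ grad_f(x), where J = dh/dtheta, so the network's Jacobian acts as a preconditioner. A toy sketch with the ideal preconditioner (the exact inverse Hessian, which the GNN only approximates):

```python
import numpy as np

# Ill-conditioned quadratic f(x) = 0.5 * x.T @ H @ x, gradient H @ x.
H = np.diag([10.0, 0.1])
P = np.linalg.inv(H)        # stand-in for J @ J.T ≈ H^{-1}

x = np.array([1.0, 1.0])
for _ in range(3):
    x = x - P @ (H @ x)     # preconditioned gradient step, lr = 1
# With the exact inverse Hessian the quadratic is solved in one step.
```

Plain gradient descent on this problem needs a step size below 0.2 and many iterations; the preconditioned step removes the conditioning problem entirely, which is the effect the GNN reparametrization is designed to approximate cheaply on sparse graphs.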


Symmetry Teleportation for Accelerated Optimization

May 21, 2022
Bo Zhao, Nima Dehmamy, Robin Walters, Rose Yu

Existing gradient-based optimization methods update the parameters locally, in a direction that minimizes the loss function. We study a different approach, symmetry teleportation, that allows the parameters to travel a large distance on the loss level set, in order to improve the convergence speed in subsequent steps. Teleportation exploits parameter-space symmetries of the optimization problem and transforms parameters while keeping the loss invariant. We derive the loss-invariant group actions for test functions and multi-layer neural networks, and prove a necessary condition for when teleportation improves the convergence rate. We also show that our algorithm is closely related to second-order methods. Experimentally, we show that teleportation improves the convergence speed of gradient descent and AdaGrad for several optimization problems including test functions, multi-layer regressions, and MNIST classification.
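
A two-parameter toy example of the mechanism (illustrative only): the loss below is invariant under a rescaling group action, so teleporting along the level set changes the gradient magnitude, and hence how far the next gradient step travels, without changing the loss.

```python
import numpy as np

# Loss L(u, v) = (u*v - 1)^2 is invariant under (u, v) -> (a*u, v/a).
def loss(u, v):
    return (u * v - 1.0) ** 2

def grad_norm(u, v):
    r = 2.0 * (u * v - 1.0)
    return np.hypot(r * v, r * u)   # |(dL/du, dL/dv)|

u, v, a = 2.0, 0.25, 4.0
u_t, v_t = a * u, v / a             # "teleported" parameters
# Same loss, different gradient norm, so subsequent descent steps differ.
```

Choosing the group element that maximizes the gradient norm before each descent step is the heart of the teleportation algorithm; this sketch only verifies the invariance that makes such a move free.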


Automatic Symmetry Discovery with Lie Algebra Convolutional Network

Sep 15, 2021
Nima Dehmamy, Robin Walters, Yanchen Liu, Dashun Wang, Rose Yu

Existing equivariant neural networks for continuous groups require discretization or group representations. All these approaches require detailed knowledge of the group parametrization and cannot learn entirely new symmetries. We propose to work with the Lie algebra (infinitesimal generators) instead of the Lie group. Our model, the Lie algebra convolutional network (L-conv), can learn potential symmetries and does not require discretization of the group. We show that L-conv can serve as a building block to construct any group-equivariant architecture. We discuss how CNNs and graph convolutional networks are related to, and can be expressed as, L-conv with appropriate groups. We also derive the MSE loss for a single L-conv layer and find a deep relation to Lagrangians used in physics, with some of the physics aiding in defining generalization and symmetries in the loss landscape. Conversely, L-conv could be used to propose more general equivariant ansätze for scientific machine learning.
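
A hedged sketch of the infinitesimal-generator idea underlying this approach (not the L-conv layer itself): acting with a generator to first order, g·x ≈ (I + eps * L) x, approximates the exact group action for small eps, which is what lets a model work with the Lie algebra without discretizing the group.

```python
import numpy as np

# so(2) generator versus the exact rotation by eps radians.
L = np.array([[0.0, -1.0],
              [1.0,  0.0]])
eps = 1e-2

def first_order_action(x):
    # First-order approximation of the group action: (I + eps * L) @ x.
    return x + eps * (L @ x)

R = np.array([[np.cos(eps), -np.sin(eps)],
              [np.sin(eps),  np.cos(eps)]])
x = np.array([1.0, 0.0])
# first_order_action(x) agrees with the exact rotation R @ x up to O(eps^2).
```

Because the approximation error is O(eps^2), stacking many small generator actions recovers finite group transformations, with the generators themselves left learnable.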


3D Topology Transformation with Generative Adversarial Networks

Jul 07, 2020
Luca Stornaiuolo, Nima Dehmamy, Albert-László Barabási, Mauro Martino

Generation and transformation of images and videos using artificial intelligence have flourished over the past few years. Yet, there are only a few works aiming to produce creative 3D shapes, such as sculptures. Here we show a novel 3D-to-3D topology transformation method using generative adversarial networks (GANs). We use a modified pix2pix GAN, which we call Vox2Vox, to transform the volumetric style of a 3D object while retaining the original object shape. In particular, we show how to transform 3D models into two new volumetric topologies: the 3D Network and the Ghirigoro. We describe how to use our approach to construct customized 3D representations. We believe that the generated 3D shapes are novel and inspirational. Finally, we compare the results between our approach and a baseline algorithm that directly converts the 3D shapes without using our GAN.


Finding Patient Zero: Learning Contagion Source with Graph Neural Networks

Jun 27, 2020
Chintan Shah, Nima Dehmamy, Nicola Perra, Matteo Chinazzi, Albert-László Barabási, Alessandro Vespignani, Rose Yu

Locating the source of an epidemic, or patient zero (P0), can provide critical insights into the infection's transmission course and allow efficient resource allocation. Existing methods use graph-theoretic centrality measures and expensive message-passing algorithms, requiring knowledge of the underlying dynamics and its parameters. In this paper, we revisit this problem using graph neural networks (GNNs) to learn P0. We establish a theoretical limit for the identification of P0 in a class of epidemic models. We evaluate our method against different epidemic models on both synthetic networks and a real-world contact network, considering a disease with the history and characteristics of COVID-19. We observe that GNNs can identify P0 close to the theoretical bound on accuracy, without explicit input of the dynamics or its parameters. In addition, GNN inference is over 100 times faster than classic methods on arbitrary graph topologies. Our theoretical bound also shows that the epidemic is like a ticking clock, emphasizing the importance of early contact tracing. We find a maximum time after which accurate recovery of the source becomes impossible, regardless of the algorithm used.
