
Venkatasubramanian Viswanathan

Differentiable Turbulence II

Jul 25, 2023
Varun Shankar, Romit Maulik, Venkatasubramanian Viswanathan

Differentiable fluid simulators are increasingly demonstrating value as useful tools for developing data-driven models in computational fluid dynamics (CFD). Differentiable turbulence, or the end-to-end training of machine learning (ML) models embedded in CFD solution algorithms, captures both the generalization power and limited upfront cost of physics-based simulations and the flexibility and automated training of deep learning methods. We develop a framework for integrating deep learning models into a generic finite element numerical scheme for solving the Navier-Stokes equations, applying the technique to learn a sub-grid scale closure using a multi-scale graph neural network. We demonstrate the method on several realizations of flow over a backward-facing step, testing on both unseen Reynolds numbers and a new geometry. We show that the learned closure achieves accuracy comparable to that of traditional large eddy simulation on a finer grid, amounting to an equivalent speedup of 10x. As the desire and need for cheaper CFD simulations grows, we see hybrid physics-ML methods as a path forward to be exploited in the near future.

Differentiable Turbulence

Jul 07, 2023
Varun Shankar, Romit Maulik, Venkatasubramanian Viswanathan

Deep learning is increasingly becoming a promising pathway to improving the accuracy of sub-grid scale (SGS) turbulence closure models for large eddy simulations (LES). We leverage the concept of differentiable turbulence, whereby an end-to-end differentiable solver is used in combination with physics-inspired choices of deep learning architectures to learn highly effective and versatile SGS models for two-dimensional turbulent flow. We perform an in-depth analysis of the inductive biases in the chosen architectures, finding that the inclusion of small-scale non-local features is most critical to effective SGS modeling, while large-scale features can improve pointwise accuracy of the a-posteriori solution field. The filtered velocity gradient tensor can be mapped directly to the SGS stress via decomposition of the inputs and outputs into isotropic, deviatoric, and anti-symmetric components. We see that the model can generalize to a variety of flow configurations, including higher and lower Reynolds numbers and different forcing conditions. We show that the differentiable physics paradigm is more successful than offline, a-priori learning, and that hybrid solver-in-the-loop approaches to deep learning offer an ideal balance between computational efficiency, accuracy, and generalization. Our experiments provide physics-based recommendations for deep-learning based SGS modeling for generalizable closure modeling of turbulence.
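The decomposition of the filtered velocity gradient tensor into isotropic, deviatoric, and anti-symmetric components can be sketched directly. The NumPy snippet below (the function name and example tensor are illustrative, not from the paper) performs the split and verifies that the parts sum back to the original tensor.

```python
import numpy as np

def decompose(G):
    """Split a velocity gradient tensor G (d x d) into isotropic,
    deviatoric, and anti-symmetric components."""
    d = G.shape[0]
    S = 0.5 * (G + G.T)                   # symmetric part (rate of strain)
    A = 0.5 * (G - G.T)                   # anti-symmetric part (rotation)
    iso = (np.trace(S) / d) * np.eye(d)   # isotropic part
    dev = S - iso                         # trace-free deviatoric part
    return iso, dev, A

# Example: an arbitrary 2D velocity gradient tensor
G = np.array([[0.3, -1.2],
              [0.7,  0.1]])
iso, dev, A = decompose(G)
assert np.allclose(iso + dev + A, G)      # components sum back to G
assert abs(np.trace(dev)) < 1e-12         # deviatoric part is trace-free
```

Mapping inputs and outputs componentwise in this basis is what lets the model respect the tensor structure of the SGS stress.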

Chemellia: An Ecosystem for Atomistic Scientific Machine Learning

May 19, 2023
Anant Thazhemadam, Dhairya Gandhi, Venkatasubramanian Viswanathan, Rachel C. Kurchin

Chemellia is an open-source framework for atomistic machine learning in the Julia programming language. The framework takes advantage of Julia's high speed as well as the ability to share and reuse code and interfaces through the paradigm of multiple dispatch. Chemellia is designed to make use of existing interfaces and avoid "reinventing the wheel" wherever possible. A key aspect of the Chemellia ecosystem is the ChemistryFeaturization interface for defining and encoding features -- it is designed to maximize interoperability between featurization schemes and elements thereof, to maintain provenance of encoded features, and to ensure easy decodability and reconfigurability to enable feature engineering experiments. This embodies the overall design principles of the Chemellia ecosystem: separation of concerns, interoperability, and transparency. We illustrate these principles by discussing the implementation of crystal graph convolutional neural networks for material property prediction.

Multiscale Graph Neural Network Autoencoders for Interpretable Scientific Machine Learning

Feb 17, 2023
Shivam Barwey, Varun Shankar, Venkatasubramanian Viswanathan, Romit Maulik

The goal of this work is to address two limitations in autoencoder-based models: latent space interpretability and compatibility with unstructured meshes. This is accomplished here with the development of a novel graph neural network (GNN) autoencoding architecture with demonstrations on complex fluid flow applications. To address the first goal of interpretability, the GNN autoencoder achieves a reduction in the number of nodes in the encoding stage through an adaptive graph reduction procedure. This reduction procedure essentially amounts to flowfield-conditioned node sampling and sensor identification, and produces interpretable latent graph representations tailored to the flowfield reconstruction task in the form of so-called masked fields. These masked fields allow the user to (a) visualize where in physical space a given latent graph is active, and (b) interpret the time-evolution of the latent graph connectivity in accordance with the time-evolution of unsteady flow features (e.g. recirculation zones, shear layers) in the domain. To address the goal of unstructured mesh compatibility, the autoencoding architecture utilizes a series of multi-scale message passing (MMP) layers, each of which models information exchange among node neighborhoods at various lengthscales. The MMP layer, which augments standard single-scale message passing with learnable coarsening operations, allows the decoder to more efficiently reconstruct the flowfield from the identified regions in the masked fields. Analysis of latent graphs produced by the autoencoder for various model settings is conducted using unstructured snapshot data sourced from large-eddy simulations in a backward-facing step (BFS) flow configuration with an OpenFOAM-based flow solver at high Reynolds numbers.
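The two building blocks described above, message passing over node neighborhoods and flowfield-conditioned node sampling, can be illustrated in miniature. In the sketch below, the weights, the mean-aggregation choice, and the top-k scoring rule are illustrative stand-ins, not the paper's MMP layer or adaptive reduction procedure.

```python
import numpy as np

def message_pass(X, A, W_self, W_nbr):
    """One single-scale message-passing layer: each node combines its own
    features with the mean of its neighbors' features."""
    deg = A.sum(axis=1, keepdims=True)
    nbr_mean = (A @ X) / np.maximum(deg, 1)   # mean over graph neighbors
    return np.tanh(X @ W_self + nbr_mean @ W_nbr)

def top_k_mask(scores, k):
    """Keep the k highest-scoring nodes: a crude stand-in for the
    node sampling that produces the paper's 'masked fields'."""
    keep = np.argsort(scores)[-k:]
    mask = np.zeros_like(scores, dtype=bool)
    mask[keep] = True
    return mask

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],                   # adjacency of a 4-node graph
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 3))                   # 3 features per node
H = message_pass(X, A, rng.normal(size=(3, 3)), rng.normal(size=(3, 3)))
mask = top_k_mask(np.linalg.norm(H, axis=1), k=2)   # retain 2 of 4 nodes
```

The boolean mask is the analogue of a masked field: it records which nodes of the original mesh the latent graph retains.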

* 30 pages, 17 figures. Correction: Fixed authorship 

Differentiable physics-enabled closure modeling for Burgers' turbulence

Sep 23, 2022
Varun Shankar, Vedant Puri, Ramesh Balakrishnan, Romit Maulik, Venkatasubramanian Viswanathan

Data-driven turbulence modeling is experiencing a surge in interest following algorithmic and hardware developments in the data sciences. We discuss an approach using the differentiable physics paradigm that combines known physics with machine learning to develop closure models for Burgers' turbulence. We consider the 1D Burgers system as a prototypical test problem for modeling the unresolved terms in advection-dominated turbulence problems. We train a series of models that incorporate varying degrees of physical assumptions on an a posteriori loss function to test the efficacy of models across a range of system parameters, including viscosity, time, and grid resolution. We find that constraining models with inductive biases in the form of partial differential equations that contain known physics or existing closure approaches produces highly data-efficient, accurate, and generalizable models, outperforming state-of-the-art baselines. Addition of structure in the form of physics information also brings a level of interpretability to the models, potentially offering a stepping stone to the future of closure modeling.
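The a posteriori training idea, rolling a solver forward with an embedded closure and penalizing the resulting trajectory, can be sketched for 1D Burgers. In the snippet below, the Smagorinsky-like closure parameterization, the reference field, and all numerical settings are illustrative assumptions, and a finite-difference gradient stands in for the automatic differentiation a differentiable solver would provide through the unrolled time loop.

```python
import numpy as np

def burgers_step(u, dt, dx, nu, c_s):
    """One explicit step of viscous Burgers on a periodic grid, with a
    simple eddy-viscosity closure nu_t = (c_s*dx)^2 * |du/dx|."""
    dudx = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
    d2udx2 = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    nu_t = (c_s * dx)**2 * np.abs(dudx)       # parameterized closure term
    return u + dt * (-u * dudx + (nu + nu_t) * d2udx2)

def a_posteriori_loss(c_s, u0, u_ref, steps, dt, dx, nu):
    """Roll the solver forward and compare the solved field with a
    reference: the a posteriori loss the abstract trains against."""
    u = u0.copy()
    for _ in range(steps):
        u = burgers_step(u, dt, dx, nu, c_s)
    return np.mean((u - u_ref)**2)

n = 64
dx = 2 * np.pi / n
x = dx * np.arange(n)
u0, nu, dt, steps = np.sin(x), 0.01, 1e-3, 50
u_ref = 0.9 * u0                              # placeholder reference field
eps = 1e-4                                    # finite-difference gradient of
g = (a_posteriori_loss(0.2 + eps, u0, u_ref, steps, dt, dx, nu)   # the loss
     - a_posteriori_loss(0.2 - eps, u0, u_ref, steps, dt, dx, nu)) / (2 * eps)
```

In the differentiable-physics setting, this gradient through the time loop is what lets the closure parameters be trained directly on solved trajectories rather than on instantaneous SGS targets.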

Score-Based Generative Models for Molecule Generation

Mar 07, 2022
Dwaraknath Gnaneshwar, Bharath Ramsundar, Dhairya Gandhi, Rachel Kurchin, Venkatasubramanian Viswanathan

Recent advances in generative models have made exploring design spaces easier for de novo molecule generation. However, popular generative models like GANs and normalizing flows face challenges such as training instabilities due to adversarial training and architectural constraints, respectively. Score-based generative models sidestep these challenges by modeling the gradient of the log probability density using a score function approximation, as opposed to modeling the density function directly, and sampling from it using annealed Langevin dynamics. We believe that score-based generative models could open up new opportunities in molecule generation due to their architectural flexibility, such as replacing the score function with an SE(3) equivariant model. In this work, we lay the foundations by testing the efficacy of score-based models for molecule generation. We train a Transformer-based score function on Self-Referencing Embedded Strings (SELFIES) representations of 1.5 million samples from the ZINC dataset and use the Moses benchmarking framework to evaluate the generated samples on a suite of metrics.
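Annealed Langevin dynamics itself is easy to demonstrate on a toy target where the score is known in closed form. Here the target is a 1D standard normal (in the paper, the score function is a learned Transformer over SELFIES strings), and the noise schedule and step sizes are illustrative choices.

```python
import numpy as np

def annealed_langevin(score, sigmas, n_steps=100, eps0=2e-3, n=5000, seed=0):
    """Annealed Langevin dynamics: run Langevin updates at a decreasing
    sequence of noise scales, using the score at each scale."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n) * sigmas[0]        # initialize at the coarsest scale
    for sigma in sigmas:
        step = eps0 * (sigma / sigmas[-1])**2  # shrink step size with sigma
        for _ in range(n_steps):
            x = (x + 0.5 * step * score(x, sigma)
                 + np.sqrt(step) * rng.normal(size=n))
    return x

# Toy target N(0,1); the sigma-perturbed density is N(0, 1 + sigma^2),
# whose score is -x / (1 + sigma^2) exactly.
score = lambda x, sigma: -x / (1.0 + sigma**2)
sigmas = np.geomspace(3.0, 0.1, 10)           # decreasing noise levels
samples = annealed_langevin(score, sigmas)    # ~5000 draws close to N(0,1)
```

Replacing the analytic `score` with a neural approximation trained by score matching recovers the sampling procedure the abstract describes.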

Autonomous optimization of nonaqueous battery electrolytes via robotic experimentation and machine learning

Nov 23, 2021
Adarsh Dave, Jared Mitchell, Sven Burke, Hongyi Lin, Jay Whitacre, Venkatasubramanian Viswanathan

In this work, we introduce a novel workflow that couples robotics to machine learning for efficient optimization of a non-aqueous battery electrolyte. A custom-built automated experiment named "Clio" is coupled to Dragonfly, a Bayesian-optimization-based experiment planner. Clio autonomously optimizes electrolyte conductivity over a single-salt, ternary-solvent design space. Using this workflow, we identify 6 fast-charging electrolytes in 2 work-days and 42 experiments (compared with 60 days using exhaustive search of the 1000 possible candidates, or 6 days assuming only 10% of candidates are evaluated). Our method finds the highest reported conductivity electrolyte in a design space heavily explored by previous literature, converging on a high-conductivity mixture that demonstrates subtle electrolyte chemical physics.
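The closed-loop plan-run-update structure can be sketched without Dragonfly's actual API. In the toy loop below, the kernel-regression surrogate, the distance-based uncertainty, and the synthetic "conductivity" surface are all illustrative stand-ins for the Gaussian-process machinery and the robotic measurement.

```python
import numpy as np

rng = np.random.default_rng(1)

def run_experiment(x):
    """Stand-in for a Clio measurement: a synthetic conductivity surface
    over a 2D mixture-fraction space (purely illustrative)."""
    return float(np.exp(-8 * ((x[0] - 0.3)**2 + (x[1] - 0.6)**2)))

def ucb(candidates, X, y, beta=2.0, length=0.2):
    """Upper-confidence-bound acquisition over a candidate grid, using a
    kernel-smoothed mean and distance-to-data as a crude uncertainty."""
    d = np.linalg.norm(candidates[:, None, :] - X[None, :, :], axis=-1)
    w = np.exp(-(d / length)**2)
    mu = (w @ y) / (w.sum(axis=1) + 1e-9)     # smoothed prediction
    sigma = d.min(axis=1)                     # far from data => uncertain
    return mu + beta * sigma

candidates = rng.uniform(size=(500, 2))       # discrete design space
X = candidates[rng.choice(500, 5, replace=False)]   # initial experiments
y = np.array([run_experiment(x) for x in X])
for _ in range(20):                           # closed loop: plan, run, update
    x_next = candidates[np.argmax(ucb(candidates, X, y))]
    X = np.vstack([X, x_next])
    y = np.append(y, run_experiment(x_next))
best = X[np.argmax(y)]                        # best mixture found so far
```

The real workflow replaces `run_experiment` with a robot and `ucb` with Dragonfly's planner; the plan-measure-update loop is the same.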

* 26 pages, 5 Figures, 7 Extended Data Figures 

Differentiable Physics: A Position Piece

Sep 14, 2021
Bharath Ramsundar, Dilip Krishnamurthy, Venkatasubramanian Viswanathan

Differentiable physics provides a new approach for modeling and understanding physical systems by pairing the new technology of differentiable programming with classical numerical methods for physical simulation. We survey the rapidly growing literature of differentiable physics techniques and highlight methods for parameter estimation, learning representations, solving differential equations, and developing what we call scientific foundation models using data and inductive priors. We argue that differentiable physics offers a new paradigm for modeling physical phenomena by combining classical analytic solutions with numerical methodology using the bridge of differentiable programming.
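Parameter estimation, the first technique listed, is the easiest to sketch: differentiate a loss through an unrolled classical integrator and descend. The decay model, learning rate, and finite-difference gradient below are illustrative; a differentiable-programming framework would supply the gradient automatically.

```python
import numpy as np

def simulate(k, y0=1.0, dt=0.01, steps=100):
    """Explicit-Euler integration of dy/dt = -k*y: the kind of classical
    numerical method differentiable programming makes trainable."""
    y = y0
    for _ in range(steps):
        y = y - dt * k * y
    return y

def grad_fd(k, target, eps=1e-6):
    """Finite differences stand in for automatic differentiation through
    the unrolled solver."""
    loss = lambda kk: (simulate(kk) - target)**2
    return (loss(k + eps) - loss(k - eps)) / (2 * eps)

target = simulate(2.0)            # synthetic observation from k_true = 2.0
k = 0.5                           # initial guess for the decay rate
for _ in range(200):              # gradient descent on the simulation loss
    k -= 2.0 * grad_fd(k, target)
# k has converged close to the true value of 2.0
```

The same pattern, with autodiff replacing `grad_fd` and a PDE solver replacing `simulate`, underlies the solver-in-the-loop closure models in the papers above.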

* 12 pages, 1 figure 

ACED: Accelerated Computational Electrochemical systems Discovery

Nov 10, 2020
Rachel C. Kurchin, Eric Muckley, Lance Kavalsky, Vinay Hegde, Dhairya Gandhi, Xiaoyu Sun, Matthew Johnson, Alan Edelman, James Saal, Christopher Vincent Rackauckas, Bryce Meredig, Viral Shah, Venkatasubramanian Viswanathan

Large-scale electrification is vital to addressing the climate crisis, but many engineering challenges remain to fully electrifying both the chemical industry and transportation. In both of these areas, new electrochemical materials and systems will be critical, but developing these systems currently relies heavily on computationally expensive first-principles simulations as well as human-time-intensive experimental trial and error. We propose to develop an automated workflow that accelerates these computational steps by introducing both automated error handling in generating the first-principles training data as well as physics-informed machine learning surrogates to further reduce computational cost. It will also have the capacity to include automated experiments "in the loop" in order to dramatically accelerate the overall materials discovery pipeline.

* 4 pages, 1 figure, accepted to NeurIPS Climate Change and AI Workshop 2020. Updated because one author's email was missing 

Closed-Loop Design of Proton Donors for Lithium-Mediated Ammonia Synthesis with Interpretable Models and Molecular Machine Learning

Aug 19, 2020
Dilip Krishnamurthy, Nikifar Lazouski, Michal L. Gala, Karthish Manthiram, Venkatasubramanian Viswanathan

In this work, we experimentally determined the efficacy of several classes of proton donors for lithium-mediated electrochemical nitrogen reduction in a tetrahydrofuran-based electrolyte, an attractive alternative method for producing ammonia. We then built an interpretable data-driven classification model which identified solvatochromic Kamlet-Taft parameters as important for distinguishing between active and inactive proton donors. After curating a dataset for the Kamlet-Taft parameters, we trained a deep learning model to predict the Kamlet-Taft parameters. The combination of classification model and deep learning model provides a predictive mapping from a given proton donor to the ability to produce ammonia. We demonstrate that this combination of classification model with deep learning is superior to a purely mechanistic or data-driven approach in accuracy and experimental data efficiency.

* 27 pages, 6 figures, 30 pages of Supporting Information 