Marta D'Elia

GNN-based physics solver for time-independent PDEs

Mar 28, 2023
Rini Jasmine Gladstone, Helia Rahmani, Vishvas Suryakumar, Hadi Meidani, Marta D'Elia, Ahmad Zareei

Physics-based deep learning frameworks have been shown to model the dynamics of complex physical systems accurately, with generalization across problem inputs. However, time-independent problems pose the challenge of requiring long-range exchange of information across the computational domain to obtain accurate predictions. In the context of graph neural networks (GNNs), this calls for deeper networks, which, in turn, may compromise or slow down the training process. In this work, we present two GNN architectures to overcome this challenge: the Edge Augmented GNN and the Multi-GNN. We show that both networks perform significantly better (by a factor of 1.5 to 2) than baseline methods when applied to time-independent solid mechanics problems. Furthermore, the proposed architectures generalize well to unseen domains, boundary conditions, and materials. Here, the treatment of variable domains is facilitated by a novel coordinate transformation that enables rotation and translation invariance. By broadening the range of problems that neural operators based on graph neural networks can tackle, this paper lays the groundwork for their application to complex scientific and industrial settings.
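The coordinate transformation itself is not spelled out in the abstract. As a minimal sketch of how rotation and translation invariance can be obtained in a graph representation, the following (hypothetical) `edge_features` helper encodes each edge by its length only, which is unchanged under rigid-body motions of the node coordinates:

```python
import numpy as np

def edge_features(pos, edges):
    """Distance-based edge features, invariant to rigid-body motions.

    pos   : (N, 2) array of node coordinates
    edges : list of (i, j) index pairs
    Returns an (E, 1) array of pairwise distances.
    """
    feats = []
    for i, j in edges:
        feats.append([np.linalg.norm(pos[j] - pos[i])])
    return np.array(feats)

# Invariance check: rotate and translate the point cloud.
rng = np.random.default_rng(0)
pos = rng.random((5, 2))
edges = [(0, 1), (1, 2), (3, 4)]
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
pos2 = pos @ R.T + np.array([2.0, -1.0])
assert np.allclose(edge_features(pos, edges), edge_features(pos2, edges))
```

The paper's actual transformation may carry richer geometric information (e.g. angles in a local frame); the point here is only that invariant edge features let one trained model transfer across rotated and translated domains.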

* 12 pages, 2 figures 

Towards a unified nonlocal, peridynamics framework for the coarse-graining of molecular dynamics data with fractures

Jan 11, 2023
Huaiqian You, Xiao Xu, Yue Yu, Stewart Silling, Marta D'Elia, John Foster

Molecular dynamics (MD) has served as a powerful tool for designing materials with reduced reliance on laboratory testing. However, using MD directly to treat the deformation and failure of materials at the mesoscale is still largely beyond reach. Herein, we propose a learning framework to extract a peridynamic model as a mesoscale continuum surrogate from MD-simulated material fracture datasets. First, we develop a novel coarse-graining method to automatically handle material fracture and the corresponding discontinuities in the MD displacement dataset. Inspired by the Weighted Essentially Non-Oscillatory (WENO) scheme, the key idea lies in an adaptive procedure that automatically chooses the locally smoothest stencil and then reconstructs the coarse-grained material displacement field as piecewise-smooth solutions containing discontinuities. Then, based on the coarse-grained MD data, a two-phase optimization-based learning approach is proposed to infer the optimal peridynamics model with a damage criterion. In the first phase, we identify the optimal nonlocal kernel function from datasets without material damage, to capture the material stiffness properties. In the second phase, the material damage criterion is learnt as a smoothed step function from the data with fractures. As a result, a peridynamics surrogate is obtained. Our peridynamics surrogate model can be employed in further prediction tasks with grid resolutions different from those used in training, and hence allows for substantial reductions in computational cost compared with MD. We illustrate the efficacy of the proposed approach with several numerical tests for single-layer graphene. Our tests show that the proposed data-driven model is robust and generalizable: it is capable of modeling the initiation and growth of fractures under discretization and loading settings that differ from those used during training.
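The stencil-selection step can be sketched in a few lines. The indicator below is a simplified WENO-type smoothness measure (sum of squared first differences), not the paper's exact formula, and `smoothest_stencil` is a hypothetical helper name:

```python
import numpy as np

def smoothest_stencil(u, i, width=3):
    """Pick the locally smoothest stencil of `width` points containing index i.

    Smoothness is measured by the sum of squared first differences
    (a simplified WENO-type indicator). Returns the start index of the
    chosen stencil.
    """
    candidates = []
    for s in range(i - width + 1, i + 1):
        if s < 0 or s + width > len(u):
            continue
        window = u[s:s + width]
        beta = np.sum(np.diff(window) ** 2)   # smoothness indicator
        candidates.append((beta, s))
    return min(candidates)[1]

# Piecewise-constant signal with a jump between indices 4 and 5:
u = np.array([0., 0., 0., 0., 0., 1., 1., 1., 1., 1.])
# Near the jump, the chosen stencil stays on one side of the discontinuity.
assert smoothest_stencil(u, 4) == 2   # stencil [2, 3, 4], left of the jump
assert smoothest_stencil(u, 5) == 5   # stencil [5, 6, 7], right of the jump
```

Reconstructing the displacement field from the selected one-sided stencils is what keeps the coarse-grained solution piecewise smooth across a crack instead of smearing it.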

Probabilistic partition of unity networks for high-dimensional regression problems

Oct 06, 2022
Tiffany Fan, Nathaniel Trask, Marta D'Elia, Eric Darve

We explore the probabilistic partition of unity network (PPOU-Net) model in the context of high-dimensional regression problems. With the PPOU-Nets, the target function for any given input is approximated by a mixture of experts model, where each cluster is associated with a fixed-degree polynomial. The weights of the clusters are determined by a DNN that defines a partition of unity. The weighted average of the polynomials approximates the target function and produces uncertainty quantification naturally. Our training strategy leverages automatic differentiation and the expectation maximization (EM) algorithm. During the training, we (i) apply gradient descent to update the DNN coefficients; (ii) update the polynomial coefficients using weighted least-squares solves; and (iii) compute the variance of each cluster according to a closed-form formula derived from the EM algorithm. The PPOU-Nets consistently outperform the baseline fully-connected neural networks of comparable sizes in numerical experiments of various data dimensions. We also explore the proposed model in applications of quantum computing, where the PPOU-Nets act as surrogate models for cost landscapes associated with variational quantum circuits.
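Step (ii) of the training loop, the weighted least-squares update of the polynomial coefficients, can be illustrated as follows with fixed (rather than DNN-produced) partition weights; the function names are illustrative, not the paper's API:

```python
import numpy as np

def update_polynomials(x, y, W, degree=1):
    """Weighted least-squares update of per-cluster polynomial coefficients.

    x : (N,) inputs; y : (N,) targets
    W : (N, K) nonnegative partition-of-unity weights (rows sum to 1)
    Returns a (K, degree+1) coefficient array (highest power first).
    """
    V = np.vander(x, degree + 1)          # polynomial design matrix
    coeffs = []
    for k in range(W.shape[1]):
        sw = np.sqrt(W[:, k])             # weight each residual by sqrt(w_k)
        c, *_ = np.linalg.lstsq(V * sw[:, None], y * sw, rcond=None)
        coeffs.append(c)
    return np.array(coeffs)

def predict(x, W, coeffs):
    """Partition-of-unity weighted average of the cluster polynomials."""
    V = np.vander(x, coeffs.shape[1])
    return np.sum(W * (V @ coeffs.T), axis=1)

# Toy example: two clusters, hard partition at x = 0.5, target |x - 0.5|.
x = np.linspace(0, 1, 101)
y = np.abs(x - 0.5)
W = np.stack([(x < 0.5).astype(float), (x >= 0.5).astype(float)], axis=1)
coeffs = update_polynomials(x, y, W)
err = np.max(np.abs(predict(x, W, coeffs) - y))
assert err < 1e-8     # the piecewise-linear target is fit exactly
```

In the full PPOU-Net, these least-squares solves alternate with gradient-descent updates of the DNN that produces `W`, and closed-form EM updates of the per-cluster variances supply the uncertainty estimates.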

Machine Learning in Heterogeneous Porous Materials

Feb 04, 2022
Marta D'Elia, Hang Deng, Cedric Fraces, Krishna Garikipati, Lori Graham-Brady, Amanda Howard, George Karniadakis, Vahid Keshavarzzadeh, Robert M. Kirby, Nathan Kutz, Chunhui Li, Xing Liu, Hannah Lu, Pania Newell, Daniel O'Malley, Masa Prodanovic, Gowri Srinivasan, Alexandre Tartakovsky, Daniel M. Tartakovsky, Hamdi Tchelepi, Bozo Vazic, Hari Viswanathan, Hongkyu Yoon, Piotr Zarzycki

The "Workshop on Machine learning in heterogeneous porous materials" brought together international scientific communities of applied mathematics, porous media, and materials science, with experts in the areas of heterogeneous materials, machine learning (ML), and applied mathematics, to identify how ML can advance materials research. Within the scope of ML and materials research, the goal of the workshop was to discuss the state of the art in each community, promote crosstalk, accelerate multi-disciplinary collaborative research, and identify challenges and opportunities. As the end result, four topic areas were identified: (1) ML in predicting materials properties and in the discovery and design of novel materials; (2) ML in porous and fractured media and in time-dependent phenomena; (3) multi-scale modeling of heterogeneous porous materials via ML; and (4) discovery of materials constitutive laws and new governing equations. This workshop was part of the AmeriMech Symposium series sponsored by the National Academies of Sciences, Engineering, and Medicine and the U.S. National Committee on Theoretical and Applied Mechanics.

* The workshop link is: https://amerimech.mech.utah.edu 

Nonlocal Kernel Network (NKN): a Stable and Resolution-Independent Deep Neural Network

Jan 06, 2022
Huaiqian You, Yue Yu, Marta D'Elia, Tian Gao, Stewart Silling

Neural operators have recently become popular tools for designing solution maps between function spaces in the form of neural networks. Unlike classical scientific machine learning approaches, which learn the parameters of a known partial differential equation (PDE) for a single instance of the input parameters at a fixed resolution, neural operators approximate the solution map of a family of PDEs. Despite their success, the use of neural operators has so far been restricted to relatively shallow networks and confined to learning hidden governing laws. In this work, we propose a novel nonlocal neural operator, which we refer to as the nonlocal kernel network (NKN), that is resolution independent, characterized by deep neural networks, and capable of handling a variety of tasks such as learning governing equations and classifying images. Our NKN stems from the interpretation of the neural network as a discrete nonlocal diffusion-reaction equation that, in the limit of infinite layers, is equivalent to a parabolic nonlocal equation whose stability is analyzed via nonlocal vector calculus. The resemblance to integral forms of neural operators allows NKNs to capture long-range dependencies in the feature space, while the continuous treatment of node-to-node interactions makes NKNs resolution independent. The resemblance to neural ODEs, reinterpreted in a nonlocal sense, together with the stable network dynamics between layers, allows NKN's optimal parameters to generalize from shallow to deep networks. This enables the use of shallow-to-deep initialization techniques. Our tests show that NKNs outperform baseline methods in both learning governing equations and image classification tasks and generalize well to different resolutions and depths.
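The diffusion interpretation can be sketched as a single discrete layer. The kernel here is a fixed Gaussian rather than a learned one, the reaction term is omitted, and `nkn_layer` is an illustrative name, not the paper's API; the point is that a nonnegative symmetric kernel with a small step size yields a stable (norm non-increasing) update, mirroring the parabolic limit:

```python
import numpy as np

def nkn_layer(u, x, kernel, tau=0.1):
    """One discrete nonlocal diffusion step on a uniform 1D grid:

        u_i <- u_i + tau * h * sum_j K(x_i, x_j) * (u_j - u_i)

    For symmetric nonnegative K this is (I + tau*h*(K - D)) u with D the
    diagonal of row sums, a negative-semidefinite graph-Laplacian structure,
    so small tau gives a contractive update."""
    h = x[1] - x[0]                       # uniform grid spacing
    K = kernel(x[:, None], x[None, :])    # (N, N) kernel matrix
    return u + tau * (K @ u - K.sum(axis=1) * u) * h

x = np.linspace(0, 1, 50)
u = np.sin(2 * np.pi * x)
gauss = lambda a, b: np.exp(-((a - b) ** 2) / 0.02)
u1 = nkn_layer(u, x, gauss)
assert np.linalg.norm(u1) <= np.linalg.norm(u)   # diffusion damps the state
```

Because the layer is defined through a kernel over continuous coordinates rather than through grid-indexed weights, the same kernel can be evaluated on a finer or coarser grid, which is the mechanism behind resolution independence.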

A data-driven peridynamic continuum model for upscaling molecular dynamics

Aug 04, 2021
Huaiqian You, Yue Yu, Stewart Silling, Marta D'Elia

Nonlocal models, including peridynamics, often use integral operators that embed length scales in their definition. However, the integrands in these operators are difficult to define from the data that are typically available for a given physical system, such as laboratory mechanical property tests. In contrast, molecular dynamics (MD) does not require these integrands, but it suffers from computational limitations in the length and time scales it can address. To combine the strengths of both methods and to obtain a coarse-grained, homogenized continuum model that efficiently and accurately captures materials' behavior, we propose a learning framework to extract, from MD data, an optimal Linear Peridynamic Solid (LPS) model as a surrogate for MD displacements. To maximize the accuracy of the learnt model, we allow the peridynamic influence function to be partially negative, while preserving the well-posedness of the resulting model. To achieve this, we provide sufficient well-posedness conditions for discretized LPS models with sign-changing influence functions and develop a constrained optimization algorithm that minimizes the equation residual while enforcing such solvability conditions. This framework guarantees that the resulting model is mathematically well-posed and physically consistent, and that it generalizes well to settings different from those used during training. We illustrate the efficacy of the proposed approach with several numerical tests for single-layer graphene. Our two-dimensional tests show the robustness of the proposed algorithm on validation data sets that include thermal noise, different domain shapes and external loadings, and discretizations substantially different from those used for training.
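A simplified version of the solvability check can be sketched for a scalar nonlocal diffusion operator; the full LPS model is vector-valued and its conditions are more involved, so this is only a stand-in for the idea that well-posedness of the discretized operator can be verified, and enforced, during the fit:

```python
import numpy as np

def is_well_posed(omega, x, tol=1e-10):
    """Check positive semidefiniteness of a discretized scalar nonlocal
    diffusion operator built from influence function `omega`:

        (A u)_i = sum_j omega(|x_i - x_j|) * (u_i - u_j) * h

    Positive semidefiniteness is a sufficient condition for solvability
    (up to boundary conditions / the constant null space)."""
    h = x[1] - x[0]
    d = np.abs(x[:, None] - x[None, :])
    W = omega(d) * h
    np.fill_diagonal(W, 0.0)
    A = np.diag(W.sum(axis=1)) - W        # graph-Laplacian structure
    return np.linalg.eigvalsh(A).min() >= -tol

x = np.linspace(0, 1, 40)
pos = lambda r: np.exp(-r / 0.1)          # nonnegative influence function
neg = lambda r: -np.exp(-r / 0.1)         # everywhere-negative influence
assert is_well_posed(pos, x)              # Laplacian structure => PSD
assert not is_well_posed(neg, x)          # loses positive semidefiniteness
```

A constrained optimizer in the spirit of the paper would reject (or project away from) candidate influence functions for which a check like this fails, while still permitting partially negative values.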

Data-driven learning of nonlocal models: from high-fidelity simulations to constitutive laws

Dec 08, 2020
Huaiqian You, Yue Yu, Stewart Silling, Marta D'Elia

We show that machine learning can improve the accuracy of simulations of stress waves in one-dimensional composite materials. We propose a data-driven technique to learn nonlocal constitutive laws for stress wave propagation models. The method is an optimization-based technique in which the nonlocal kernel function is approximated via Bernstein polynomials. The kernel, including both its functional form and parameters, is derived so that when used in a nonlocal solver, it generates solutions that closely match high-fidelity data. The optimal kernel therefore acts as a homogenized nonlocal continuum model that accurately reproduces wave motion in a smaller-scale, more detailed model that can include multiple materials. We apply this technique to wave propagation within a heterogeneous bar with a periodic microstructure. Several one-dimensional numerical tests illustrate the accuracy of our algorithm. The optimal kernel is demonstrated to reproduce high-fidelity data for a composite material in applications that are substantially different from the problems used as training data.
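The kernel parametrization can be sketched as a least-squares fit in the Bernstein basis; the exponential target below is a made-up stand-in for a homogenized kernel, not data from the paper, and the helper names are illustrative:

```python
import numpy as np
from math import comb

def bernstein_basis(t, n):
    """Bernstein basis polynomials B_{k,n}(t) on [0, 1], shape (len(t), n+1)."""
    t = np.asarray(t)
    return np.stack([comb(n, k) * t**k * (1 - t)**(n - k)
                     for k in range(n + 1)], axis=1)

def fit_kernel(t, target, n=6):
    """Least-squares Bernstein coefficients approximating `target` on [0, 1]."""
    B = bernstein_basis(t, n)
    c, *_ = np.linalg.lstsq(B, target, rcond=None)
    return c

t = np.linspace(0, 1, 200)
target = np.exp(-3 * t)                # stand-in for a homogenized kernel
c = fit_kernel(t, target)
approx = bernstein_basis(t, 6) @ c
assert np.max(np.abs(approx - target)) < 1e-2
```

In the paper the coefficients are not fit to a known kernel; they are optimized so that the nonlocal solver driven by the parametrized kernel reproduces high-fidelity wave data. The Bernstein basis is convenient because its coefficients directly bound the kernel's values on [0, 1].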

Data-driven learning of robust nonlocal physics from high-fidelity synthetic data

May 17, 2020
Huaiqian You, Yue Yu, Nathaniel Trask, Mamikon Gulian, Marta D'Elia

A key challenge for nonlocal models is the analytical complexity of deriving them from first principles; frequently, their use is justified a posteriori. In this work we extract nonlocal models from data, circumventing these challenges and providing data-driven justification for the resulting model form. Extracting provably robust data-driven surrogates is a major challenge for machine learning (ML) approaches, due to nonlinearities and lack of convexity. Our scheme allows the extraction of provably invertible nonlocal models whose kernels may be partially negative. To achieve this, building on established nonlocal theory, we embed in our algorithm sufficient conditions on the non-positive part of the kernel that guarantee well-posedness of the learnt operator. These conditions are imposed as inequality constraints and ensure that the models are robust, even in small-data regimes. We demonstrate this workflow on a range of applications, including reproduction of manufactured nonlocal kernels; numerical homogenization of Darcy flow associated with a heterogeneous periodic microstructure; nonlocal approximation of high-order local transport phenomena; and approximation of globally supported fractional diffusion operators by truncated kernels.
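One simple way to impose a constraint of this flavor, bounding the mass of the kernel's negative part by a fraction of its positive mass, can be sketched as a projection step; the specific bound and the rescaling strategy are illustrative, not the paper's exact conditions:

```python
import numpy as np

def enforce_negativity_bound(kvals, h, theta=0.5):
    """Rescale the negative part of a discretized kernel so that its mass is
    at most `theta` times the positive mass. This is a simple stand-in for
    inequality constraints on the non-positive part of a learnt kernel;
    inside an optimization loop it would act as a projection step."""
    pos = np.clip(kvals, 0, None)
    negp = np.clip(-kvals, 0, None)
    pos_mass = pos.sum() * h
    neg_mass = negp.sum() * h
    if neg_mass > theta * pos_mass:
        negp *= theta * pos_mass / neg_mass
    return pos - negp

r = np.linspace(0, 1, 100)
h = r[1] - r[0]
k = np.cos(4 * r)                      # sign-changing candidate kernel
k2 = enforce_negativity_bound(k, h, theta=0.3)
pm = np.clip(k2, 0, None).sum() * h
nm = np.clip(-k2, 0, None).sum() * h
assert nm <= 0.3 * pm + 1e-12          # constraint now satisfied
```

The paper instead imposes its conditions as inequality constraints inside the optimizer, so the fitted kernel satisfies them by construction rather than by post-hoc rescaling.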

* 32 pages, 10 figures, 3 tables 

nPINNs: nonlocal Physics-Informed Neural Networks for a parametrized nonlocal universal Laplacian operator. Algorithms and Applications

Apr 08, 2020
Guofei Pang, Marta D'Elia, Michael Parks, George E. Karniadakis

Physics-informed neural networks (PINNs) are effective in solving inverse problems based on differential and integral equations with sparse, noisy, unstructured, and multi-fidelity data. PINNs incorporate all available information into a loss function, thus recasting the original problem as an optimization problem. In this paper, we extend PINNs to parameter and function inference for integral equations such as nonlocal Poisson and nonlocal turbulence models, and we refer to them as nonlocal PINNs (nPINNs). The contribution of the paper is three-fold. First, we propose a unified nonlocal operator, which converges to the classical Laplacian as one of the operator parameters, the nonlocal interaction radius $\delta$, goes to zero, and to the fractional Laplacian as $\delta$ goes to infinity. This universal operator forms a super-set of the classical Laplacian and fractional Laplacian operators and thus has the potential to fit a broad spectrum of data sets. We provide theoretical convergence rates with respect to $\delta$ and verify them via numerical experiments. Second, we use nPINNs to estimate the two operator parameters, $\delta$ and $\alpha$. The strong non-convexity of the loss function, which yields multiple (good) local minima, reveals the occurrence of an operator-mimicking phenomenon: different pairs of estimated parameters can produce multiple solutions of comparable accuracy. Third, we propose another nonlocal operator with spatially variable order $\alpha(y)$, which is more suitable for modeling turbulent Couette flow. Our results show that nPINNs can jointly infer this function as well as $\delta$. Moreover, these parameters exhibit a universal behavior with respect to the Reynolds number, a finding that contributes to our understanding of nonlocal interactions in wall-bounded turbulence.
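The classical limit can be checked numerically for the simplest member of such a family: a 1D nonlocal Laplacian with a constant kernel on the interaction ball, normalized by 3/delta^3 so its second moment matches the local operator. This constant-kernel operator is a simplified stand-in for the paper's parametrized kernel:

```python
import numpy as np

def nonlocal_laplacian(u, x0, delta, n=2000):
    """Midpoint-rule approximation of a 1D nonlocal Laplacian with a
    constant kernel on (x0 - delta, x0 + delta):

        L_d u(x0) = (3 / delta^3) * integral_{|z| < delta} (u(x0+z) - u(x0)) dz

    The 3/delta^3 normalization makes L_d u -> u''(x0) as delta -> 0."""
    z = (np.arange(n) + 0.5) / n * 2 * delta - delta   # midpoints in (-d, d)
    dz = 2 * delta / n
    return 3.0 / delta**3 * np.sum(u(x0 + z) - u(x0)) * dz

u = lambda x: x**2          # u'' = 2 everywhere
val = nonlocal_laplacian(u, x0=0.3, delta=0.1)
assert abs(val - 2.0) < 1e-6
```

For a quadratic, the odd part of the integrand cancels and the even part integrates to exactly (2/3) delta^3, so the normalized operator returns u'' up to quadrature error, consistent with the stated delta -> 0 limit.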

* 31 pages, 20 figures 