Daniel O'Malley

Progressive reduced order modeling: empowering data-driven modeling with selective knowledge transfer

Oct 04, 2023
Teeratorn Kadeethum, Daniel O'Malley, Youngsoo Choi, Hari S. Viswanathan, Hongkyu Yoon

Data-driven modeling suffers from a persistent demand for data; because information is costly and scarce, this demand reduces accuracy and can make such models impractical for engineering applications. To address this challenge, we propose a progressive reduced order modeling framework that minimizes data requirements and enhances the practicality of data-driven modeling. Our approach selectively transfers knowledge from previously trained models through gates, much as humans selectively draw on valuable knowledge while ignoring irrelevant information. By filtering relevant information from previous models, we can create a surrogate model with minimal turnaround time and a smaller training set that still achieves high accuracy. We have tested our framework on several cases, including transport in porous media, gravity-driven flow, and finite deformation in hyperelastic materials. Our results illustrate that retaining information from previous models and utilizing a valuable portion of that knowledge can significantly improve the accuracy of the current model. We have demonstrated the importance of progressive knowledge transfer and its impact on model accuracy with reduced training samples. For instance, our framework with four parent models outperforms its no-parent counterpart trained on nine times as much data. Our research unlocks the potential of data-driven modeling for practical engineering applications by mitigating data scarcity. The proposed framework is a significant step toward more efficient and cost-effective data-driven modeling, fostering advancements across various fields.
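
To make the gating mechanism concrete, below is a minimal sketch of selective knowledge transfer, assuming a PyTorch setting with frozen parent networks and one learnable scalar gate per parent; the layer types, gate parameterization, and names here are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class GatedTransferLayer(nn.Module):
    """Hypothetical layer: blends the current model's features with gated
    contributions from frozen layers of previously trained parent models."""

    def __init__(self, in_dim, out_dim, parent_layers):
        super().__init__()
        self.current = nn.Linear(in_dim, out_dim)
        self.parents = nn.ModuleList(parent_layers)
        for parent in self.parents:
            for param in parent.parameters():
                param.requires_grad = False  # parents stay frozen
        # One learnable gate per parent controls how much knowledge flows in.
        self.gates = nn.Parameter(torch.zeros(len(parent_layers)))

    def forward(self, x):
        out = self.current(x)
        for gate, parent in zip(torch.sigmoid(self.gates), self.parents):
            out = out + gate * parent(x)  # filter useful parent knowledge
        return torch.relu(out)

# Usage with two hypothetical parent layers of matching shape:
parents = [nn.Linear(32, 64), nn.Linear(32, 64)]
layer = GatedTransferLayer(32, 64, parents)
y = layer(torch.randn(8, 32))  # -> (8, 64)
```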

Physics-informed machine learning with differentiable programming for heterogeneous underground reservoir pressure management

Jun 21, 2022
Aleksandra Pachalieva, Daniel O'Malley, Dylan Robert Harp, Hari Viswanathan

Avoiding over-pressurization in subsurface reservoirs is critical for applications like CO$_2$ sequestration and wastewater injection. Managing pressures by controlling injection/extraction is challenging because of complex heterogeneity in the subsurface. This heterogeneity typically requires high-fidelity physics-based models to make predictions about the fate of the CO$_2$. Furthermore, characterizing the heterogeneity accurately is fraught with parametric uncertainty. Accounting for both heterogeneity and uncertainty makes this a computationally intensive problem that challenges current reservoir simulators. To tackle this, we use differentiable programming with a full-physics model and machine learning to determine the fluid extraction rates that prevent over-pressurization at critical reservoir locations. We use the DPFEHM framework, which has trustworthy physics based on the standard two-point flux finite volume discretization and is automatically differentiable like machine learning models. Our physics-informed machine learning framework uses convolutional neural networks to learn an appropriate extraction rate based on the permeability field. We also perform a hyperparameter search to improve the model's accuracy. Training and testing scenarios are executed to evaluate the feasibility of using physics-informed machine learning to manage reservoir pressures. We constructed and tested a sufficiently accurate surrogate that is 400,000 times faster than the underlying physics-based simulator, allowing for near real-time analysis and robust uncertainty quantification.
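
The training loop below sketches the physics-in-the-loop idea, assuming PyTorch and substituting a toy differentiable linear solve for DPFEHM (which is a Julia package); the network size, well placement, and pressure threshold are illustrative placeholders.

```python
import torch
import torch.nn as nn

N_WELLS, P_MAX = 4, 1.0  # illustrative: number of wells, pressure threshold

def pressure_solver(perm, rates):
    # Toy stand-in for the differentiable DPFEHM solver: one linear solve
    # whose (diagonal) system depends on the permeability field.
    k = perm.flatten(1)                         # (batch, cells)
    A = torch.diag_embed(k + 1.0)               # SPD and autodiff-friendly
    q = torch.zeros_like(k)
    q[:, :N_WELLS] = rates                      # source terms at well cells
    return torch.linalg.solve(A, q.unsqueeze(-1)).squeeze(-1)

cnn = nn.Sequential(                            # permeability map -> rates
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(8 * 8 * 8, N_WELLS),
)
opt = torch.optim.Adam(cnn.parameters(), lr=1e-3)

for _ in range(100):
    perm = torch.rand(16, 1, 8, 8)              # synthetic permeability fields
    rates = cnn(perm)
    p = pressure_solver(perm, rates)
    # Penalize pressure above the threshold; gradients flow through the
    # physics solve back into the CNN weights.
    loss = torch.relu(p - P_MAX).square().mean()
    opt.zero_grad(); loss.backward(); opt.step()
```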

* 12 pages, 5 figures 

Reduced order modeling with Barlow Twins self-supervised learning: Navigating the space between linear and nonlinear solution manifolds

Feb 11, 2022
Teeratorn Kadeethum, Francesco Ballarin, Daniel O'Malley, Youngsoo Choi, Nikolaos Bouklas, Hongkyu Yoon

We propose a unified data-driven reduced order model (ROM) that bridges the performance gap between linear and nonlinear manifold approaches. Deep learning ROMs (DL-ROMs) built on deep convolutional autoencoders (DC-AEs) have been shown to capture nonlinear solution manifolds but fail to perform adequately when linear subspace approaches such as proper orthogonal decomposition (POD) would be optimal. Moreover, most DL-ROM models rely on convolutional layers, which may limit their application to structured meshes. The proposed framework combines an autoencoder (AE) with Barlow Twins (BT) self-supervised learning, where BT maximizes the information content of the embedding in the latent space through a joint embedding architecture. Across a series of benchmark problems of natural convection in porous media, BT-AE performs better than the previous DL-ROM framework: it matches POD-based approaches on problems whose solutions lie within a linear subspace and matches DL-ROM autoencoder-based techniques on problems whose solutions lie on a nonlinear manifold, thereby bridging the gap between linear and nonlinear reduced manifolds. Furthermore, the BT-AE framework can operate on unstructured meshes, which provides flexibility in its application to standard numerical solvers, on-site measurements, experimental data, or a combination of these sources.
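
For reference, below is a minimal sketch of the Barlow Twins objective used to shape the latent space, assuming PyTorch; the trade-off weight and the normalization follow the original Barlow Twins formulation and are not necessarily the BT-AE defaults.

```python
import torch

def barlow_twins_loss(z1, z2, lam=5e-3):
    # z1, z2: (batch, dim) embeddings of two views of the same snapshot.
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-8)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-8)
    c = (z1.T @ z2) / z1.shape[0]            # cross-correlation matrix
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    # Match the views (diagonal -> 1) while decorrelating the latent
    # dimensions (off-diagonal -> 0), maximizing information content.
    return on_diag + lam * off_diag
```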

* arXiv admin note: text overlap with arXiv:2107.11460 

Machine Learning in Heterogeneous Porous Materials

Feb 04, 2022
Marta D'Elia, Hang Deng, Cedric Fraces, Krishna Garikipati, Lori Graham-Brady, Amanda Howard, George Karniadakis, Vahid Keshavarzzadeh, Robert M. Kirby, Nathan Kutz, Chunhui Li, Xing Liu, Hannah Lu, Pania Newell, Daniel O'Malley, Masa Prodanovic, Gowri Srinivasan, Alexandre Tartakovsky, Daniel M. Tartakovsky, Hamdi Tchelepi, Bozo Vazic, Hari Viswanathan, Hongkyu Yoon, Piotr Zarzycki

The "Workshop on Machine learning in heterogeneous porous materials" brought together international scientific communities of applied mathematics, porous media, and material sciences with experts in the areas of heterogeneous materials, machine learning (ML) and applied mathematics to identify how ML can advance materials research. Within the scope of ML and materials research, the goal of the workshop was to discuss the state-of-the-art in each community, promote crosstalk and accelerate multi-disciplinary collaborative research, and identify challenges and opportunities. As the end result, four topic areas were identified: ML in predicting materials properties, and discovery and design of novel materials, ML in porous and fractured media and time-dependent phenomena, Multi-scale modeling in heterogeneous porous materials via ML, and Discovery of materials constitutive laws and new governing equations. This workshop was part of the AmeriMech Symposium series sponsored by the National Academies of Sciences, Engineering and Medicine and the U.S. National Committee on Theoretical and Applied Mechanics.

* The workshop link is: https://amerimech.mech.utah.edu 

A framework for data-driven solution and parameter estimation of PDEs using conditional generative adversarial networks

May 27, 2021
Teeratorn Kadeethum, Daniel O'Malley, Jan Niklas Fuhg, Youngsoo Choi, Jonghyun Lee, Hari S. Viswanathan, Nikolaos Bouklas

This work is the first to employ and adapt the image-to-image translation concept based on conditional generative adversarial networks (cGANs) to learn forward and inverse solution operators of partial differential equations (PDEs). Although the proposed framework could serve as a surrogate model for the solution of any PDE, here we focus on steady-state solutions of coupled hydro-mechanical processes in heterogeneous porous media. Strongly heterogeneous material properties, which translate into heterogeneous PDE coefficients and discontinuous features in the solutions, require specialized techniques for the forward and inverse solution of these problems. Additionally, parametrizing the spatially heterogeneous coefficients is exceedingly difficult with standard reduced order modeling techniques. In this work, we overcome these challenges by employing the image-to-image translation concept to learn the forward and inverse solution operators, using a U-Net generator and a patch-based discriminator. Our results show that the proposed data-driven reduced order model offers competitive predictive performance in accuracy, computational efficiency, and training time compared with state-of-the-art data-driven methods for both forward and inverse problems.
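
The sketch below illustrates the ingredients named above, assuming PyTorch: a drastically shrunken U-Net generator with a skip connection, a patch-based discriminator that scores local patches rather than whole images, and a pix2pix-style adversarial-plus-L1 generator loss; the sizes and loss weight are illustrative, not the paper's settings.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Drastically shrunken U-Net generator: one down/up level plus a skip
    connection from the input (illustrative, far smaller than the paper's)."""

    def __init__(self, ch_in=1, ch_out=1):
        super().__init__()
        self.down = nn.Sequential(nn.Conv2d(ch_in, 16, 4, 2, 1), nn.ReLU())
        self.up = nn.Sequential(nn.ConvTranspose2d(16, 16, 4, 2, 1), nn.ReLU())
        self.out = nn.Conv2d(16 + ch_in, ch_out, 3, padding=1)

    def forward(self, x):
        h = self.up(self.down(x))
        return self.out(torch.cat([h, x], dim=1))  # skip connection

# Patch-based discriminator: one real/fake logit per local patch, so it
# judges local texture rather than the whole image at once.
disc = nn.Sequential(
    nn.Conv2d(2, 16, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(16, 1, 4, 2, 1),
)

gen = TinyUNet()
# Forward operator: condition on the coefficient field, predict the solution.
coeff, solution = torch.rand(8, 1, 32, 32), torch.rand(8, 1, 32, 32)
fake = gen(coeff)
scores = disc(torch.cat([coeff, fake], dim=1))    # (8, 1, 8, 8) patch logits
bce = nn.BCEWithLogitsLoss()
g_loss = bce(scores, torch.ones_like(scores)) \
    + 100 * nn.functional.l1_loss(fake, solution)  # pix2pix-style L1 weight
```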

Uncertainty Bounds for Multivariate Machine Learning Predictions on High-Strain Brittle Fracture

Dec 23, 2020
Cristina Garcia-Cardona, M. Giselle Fernández-Godino, Daniel O'Malley, Tanmoy Bhattacharya

Simulating the evolution of crack networks in high-strain-rate impact experiments on brittle materials is very compute-intensive. The cost increases further when multiple simulations are needed to account for the randomness in crack length, location, and orientation inherent in real-world materials. Constructing a machine learning emulator can make the process faster by orders of magnitude. There has been little work, however, on assessing the error associated with emulator predictions. Estimating these errors is imperative for meaningful overall uncertainty quantification. In this work, we extend heteroscedastic uncertainty estimates to bound the predictions of a multiple-output machine learning emulator. We find that the response prediction is robust, with a somewhat conservative estimate of uncertainty.
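
A minimal sketch of a heteroscedastic multiple-output emulator, assuming PyTorch: the network predicts a mean and an input-dependent log-variance per output and is trained with a Gaussian negative log-likelihood, so the predicted variance widens the uncertainty bound where the emulator is less reliable. The paper's actual emulator and calibration procedure are more involved.

```python
import torch
import torch.nn as nn

class HeteroscedasticNet(nn.Module):
    """Hypothetical emulator head: predicts a mean and an input-dependent
    log-variance for each of the multiple outputs."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.mean = nn.Linear(64, out_dim)
        self.log_var = nn.Linear(64, out_dim)

    def forward(self, x):
        h = self.body(x)
        return self.mean(h), self.log_var(h)

def gaussian_nll(mean, log_var, y):
    # Larger predicted variance widens the bound where the emulator is
    # less reliable, at the cost of a log-variance penalty.
    return 0.5 * (log_var + (y - mean).pow(2) / log_var.exp()).mean()

net = HeteroscedasticNet(in_dim=5, out_dim=3)
x, y = torch.randn(64, 5), torch.randn(64, 3)
mean, log_var = net(x)
loss = gaussian_nll(mean, log_var, y)
upper = mean + 2 * (0.5 * log_var).exp()  # a rough 95% upper bound per output
```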

Reverse Annealing for Nonnegative/Binary Matrix Factorization

Jul 10, 2020
John Golden, Daniel O'Malley

It was recently shown that quantum annealing can be used as an effective, fast subroutine in certain types of matrix factorization algorithms. The quantum annealing algorithm performed best for quick, approximate answers, but its performance rapidly plateaued. In this paper, we utilize reverse annealing instead of forward annealing in the quantum annealing subroutine for nonnegative/binary matrix factorization problems. After an initial global search with forward annealing, reverse annealing performs a series of local searches that refine existing solutions. The combination of forward and reverse annealing significantly improves performance over forward annealing alone for all but the shortest run times.
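
The loop below sketches this forward-then-reverse refinement pattern, assuming access to D-Wave's Ocean SDK and a QPU that supports reverse annealing; the `anneal_schedule`, `initial_state`, and `reinitialize_state` arguments follow Ocean's documented reverse-annealing interface, but verify them against your solver, and the QUBO, schedule breakpoints, and read counts are toy placeholders.

```python
from dwave.system import DWaveSampler, EmbeddingComposite

sampler = EmbeddingComposite(DWaveSampler())
Q = {(0, 0): -1.0, (1, 1): -1.0, (0, 1): 2.0}  # toy stand-in QUBO

# 1) Global search: a standard forward anneal.
best = sampler.sample_qubo(Q, num_reads=100).first

# 2) Local refinement: anneal backward from the best known solution to an
#    intermediate point s, pause, then anneal forward again.
reverse_schedule = [[0.0, 1.0], [5.0, 0.45], [15.0, 0.45], [20.0, 1.0]]
for _ in range(10):
    result = sampler.sample_qubo(
        Q,
        num_reads=100,
        anneal_schedule=reverse_schedule,
        initial_state=dict(best.sample),
        reinitialize_state=True,  # restart every read from the same state
    )
    if result.first.energy < best.energy:
        best = result.first  # keep the refined solution
```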

* 9 pages, 5 figures 

Learning to regularize with a variational autoencoder for hydrologic inverse analysis

Jun 06, 2019
Daniel O'Malley, John K. Golden, Velimir V. Vesselinov

Inverse problems often involve matching observational data using a physical model that takes a large number of parameters as input. These problems tend to be under-constrained and require regularization to impose additional structure on the solution in parameter space. A central difficulty in regularization is turning a complex conceptual model of this additional structure into a functional mathematical form to be used in the inverse analysis. In this work we propose a method of regularization involving a machine learning technique known as a variational autoencoder (VAE). The VAE is trained to map a low-dimensional set of latent variables with a simple structure to the high-dimensional parameter space that has a complex structure. We train a VAE on unconditioned realizations of the parameters for a hydrological inverse problem. These unconditioned realizations neither rely on the observational data used to perform the inverse analysis nor require any forward runs of the physical model, thus making the computational cost of generating the training data minimal. The central benefit of this approach is that regularization is then performed on the latent variables from the VAE, which can be regularized simply. A second benefit is that the VAE reduces the number of variables in the optimization problem, making gradient-based optimization more computationally efficient when adjoint methods are unavailable. After performing regularization and optimization on the latent variables, the VAE decoder maps the result back to the original parameter space. Our approach constitutes a novel framework for regularization and optimization, readily applicable to a wide range of inverse problems. We call the approach RegAE.
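
A minimal sketch of the RegAE workflow, assuming PyTorch and substituting toy stand-ins for the trained VAE decoder and the physical forward model: optimization runs over the latent variables, a simple Gaussian penalty on the latents supplies the regularization, and the decoder maps the result back to parameter space.

```python
import torch
import torch.nn as nn

# Toy stand-ins: a trained VAE decoder (latent -> parameters) and a
# differentiable physical forward model (parameters -> observations).
decoder = nn.Sequential(nn.Linear(8, 64), nn.Tanh(), nn.Linear(64, 100))
forward_model = nn.Linear(100, 10)
obs = torch.randn(10)  # synthetic observational data

z = torch.zeros(8, requires_grad=True)  # optimize in the simple latent space
opt = torch.optim.LBFGS([z], max_iter=50)

def closure():
    opt.zero_grad()
    params = decoder(z)                 # decode to high-dimensional parameters
    misfit = (forward_model(params) - obs).pow(2).sum()
    loss = misfit + z.pow(2).sum()      # Gaussian prior on latents: the
    loss.backward()                     # regularization is a simple penalty
    return loss

opt.step(closure)
params_est = decoder(z).detach()        # regularized parameter estimate
```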

Nonnegative/binary matrix factorization with a D-Wave quantum annealer

Apr 05, 2017
Daniel O'Malley, Velimir V. Vesselinov, Boian S. Alexandrov, Ludmil B. Alexandrov

D-Wave quantum annealers represent a novel computational architecture and have attracted significant interest, but have been used for few real-world computations. Machine learning has been identified as an area where quantum annealing may be useful. Here, we show that the D-Wave 2X can be effectively used as part of an unsupervised machine learning method. This method can analyze large datasets; the D-Wave limits only the number of features that can be extracted from the dataset. We apply this method to learn features from a set of facial images.
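
In nonnegative/binary matrix factorization, A is approximated by BC with B nonnegative and C binary, and each binary column subproblem reduces to a QUBO the annealer can solve. Below is a sketch of that reduction, assuming NumPy and the full symmetric-matrix QUBO convention (minimize c^T Q c over binary c); the annealer call itself is omitted, and the brute-force check only verifies the construction on a toy instance.

```python
import numpy as np

def column_qubo(B, a):
    # ||a - B c||^2 = c^T (B^T B) c - 2 (B^T a)^T c + const, and for binary
    # c we have c_i^2 = c_i, so the linear term folds onto the diagonal.
    Q = B.T @ B
    Q[np.diag_indices_from(Q)] -= 2.0 * (B.T @ a)
    return Q  # hand Q to the annealer (or any QUBO solver) for this column

# Brute-force check on a toy instance: the QUBO minimizer is the binary
# least-squares solution for this column.
rng = np.random.default_rng(0)
B, a = rng.random((6, 3)), rng.random(6)
Q = column_qubo(B, a)
cands = [np.array([(i >> k) & 1 for k in range(3)]) for i in range(8)]
best = min(cands, key=lambda c: float(c @ Q @ c))
print(best, np.linalg.norm(a - B @ best))
```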
