Jayant Gupta

University of Minnesota

A Survey on Solving and Discovering Differential Equations Using Deep Neural Networks

Apr 26, 2023
Hyeonjung Jung, Jayant Gupta, Bharat Jayaprakash, Matthew Eagon, Harish Panneer Selvam, Carl Molnar, William Northrop, Shashi Shekhar

Ordinary and partial differential equations (DEs) are used extensively in scientific and mathematical domains to model physical systems. Current literature has focused primarily on deep neural network (DNN) based methods for solving a specific DE or a family of DEs. Research communities with a history of using DE models may view DNN-based differential equation solvers (DNN-DEs) as a faster and transferable alternative to current numerical methods. However, there is a lack of systematic surveys detailing the use of DNN-DE methods across physical application domains and of a generalized taxonomy to guide future research. This paper surveys and classifies previous works and provides an educational tutorial for senior practitioners, professionals, and graduate students in engineering and computer science. First, we propose a taxonomy to navigate domains of DE systems studied under the umbrella of DNN-DE. Second, we examine the theory and performance of the Physics Informed Neural Network (PINN) to demonstrate how this influential DNN-DE architecture mathematically solves a system of equations. Third, to reinforce the key ideas of solving and discovering DEs with DNNs, we provide a tutorial using DeepXDE, a Python package for developing PINNs, to build DNN-DEs that solve and discover a classic DE, the linear transport equation.

* Under review for ACM Computing Surveys journal. 29 pages 
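
To make the tutorial's key idea concrete, below is a minimal DeepXDE sketch of a PINN for the 1D linear transport (advection) equation u_t + c u_x = 0. The choice of c = 1, the sinusoidal initial condition, the periodic boundary condition, and the network and training settings are illustrative assumptions, not necessarily the configuration used in the paper's tutorial.

```python
# Minimal PINN sketch with DeepXDE for the 1D linear transport equation
#   u_t + c * u_x = 0,  x in [0, 1], t in [0, 1]
# Assumptions (not from the paper): c = 1, u(x, 0) = sin(2*pi*x), periodic BC.
import numpy as np
import deepxde as dde

c = 1.0  # advection speed (assumed)

def transport_pde(x, u):
    # x has columns [x, t]; return the residual of u_t + c * u_x
    du_x = dde.grad.jacobian(u, x, i=0, j=0)
    du_t = dde.grad.jacobian(u, x, i=0, j=1)
    return du_t + c * du_x

geom = dde.geometry.Interval(0, 1)
timedomain = dde.geometry.TimeDomain(0, 1)
geomtime = dde.geometry.GeometryXTime(geom, timedomain)

ic = dde.icbc.IC(
    geomtime,
    lambda x: np.sin(2 * np.pi * x[:, 0:1]),  # assumed initial profile
    lambda _, on_initial: on_initial,
)
bc = dde.icbc.PeriodicBC(geomtime, 0, lambda _, on_boundary: on_boundary)

data = dde.data.TimePDE(
    geomtime, transport_pde, [ic, bc],
    num_domain=2000, num_boundary=100, num_initial=200,
)
net = dde.nn.FNN([2] + [32] * 3 + [1], "tanh", "Glorot normal")
model = dde.Model(data, net)
model.compile("adam", lr=1e-3)
model.train(iterations=10000)  # exact solution: u(x, t) = sin(2*pi*(x - c*t))
```

With these assumptions, the trained network can be checked against the known exact solution u(x, t) = sin(2π(x − ct)) to gauge the PINN's accuracy.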

Towards Comparative Physical Interpretation of Spatial Variability Aware Neural Networks: A Summary of Results

Oct 29, 2021
Jayant Gupta, Carl Molnar, Gaoxiang Luo, Joe Knight, Shashi Shekhar

Given Spatial Variability Aware Neural Networks (SVANNs), the goal is to investigate mathematical (or computational) models for comparative physical interpretation towards their transparency (e.g., simulatability, decomposability, and algorithmic transparency). This problem is important due to use-cases such as reusability, debugging, and explainability to a jury in a court of law. Challenges include a large number of model parameters, vacuous bounds on the generalization performance of neural networks, risk of overfitting, sensitivity to noise, etc., all of which detract from the ability to interpret the models. Related work on either model-specific or model-agnostic post-hoc interpretation is limited due to a lack of consideration of physical constraints (e.g., mass balance) and properties (e.g., the second law of geography). This work investigates physical interpretation of SVANNs using novel comparative approaches based on geographically heterogeneous features. The proposed feature-based physical interpretation is evaluated using a case study on wetland mapping. The proposed physical interpretation improves the transparency of SVANN models, and the analytical results highlight the trade-off between model transparency and model performance (e.g., F1-score). We also describe an interpretation based on geographically heterogeneous processes modeled as partial differential equations (PDEs).

* Submitted to the SIGSPATIAL 2021 peer-review process. 12 pages (including a 2-page appendix)
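
As a rough illustration of comparing models built on geographically heterogeneous features (a generic stand-in, not the paper's actual interpretation method), the sketch below computes permutation feature importance for two zone-specific models and compares their rankings. The feature names, data, and models are hypothetical placeholders.

```python
# Hypothetical sketch: compare two zone-specific models via permutation
# feature importance (a generic stand-in for comparative interpretation).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

def rank_features(model, X, y, feature_names, n_repeats=20, seed=0):
    """Return feature names ordered from most to least important."""
    result = permutation_importance(model, X, y, n_repeats=n_repeats, random_state=seed)
    order = np.argsort(result.importances_mean)[::-1]
    return [feature_names[i] for i in order]

# Placeholder data: two geographic zones sharing one feature schema (assumed names).
rng = np.random.default_rng(0)
feature_names = ["ndvi", "elevation", "slope", "distance_to_water"]
X_a, y_a = rng.normal(size=(500, 4)), rng.integers(0, 2, 500)
X_b, y_b = rng.normal(size=(500, 4)), rng.integers(0, 2, 500)

model_a = RandomForestClassifier(random_state=0).fit(X_a, y_a)  # zone A model
model_b = RandomForestClassifier(random_state=0).fit(X_b, y_b)  # zone B model

# Diverging rankings suggest the zone-specific models rely on different
# physical features, which is the kind of contrast a comparative
# interpretation seeks to surface.
print("Zone A ranking:", rank_features(model_a, X_a, y_a, feature_names))
print("Zone B ranking:", rank_features(model_b, X_b, y_b, feature_names))
```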

Towards Spatial Variability Aware Deep Neural Networks (SVANN): A Summary of Results

Nov 17, 2020
Jayant Gupta, Yiqun Xie, Shashi Shekhar

Spatial variability has been observed in many geo-phenomena including climatic zones, USDA plant hardiness zones, and terrestrial habitat types (e.g., forests, grasslands, wetlands, and deserts). However, current deep learning methods follow a spatial one-size-fits-all (OSFA) approach, training a single deep neural network model that does not account for spatial variability. In this work, we propose and investigate a spatial-variability aware deep neural network (SVANN) approach, where a distinct deep neural network model is built for each geographic area. We evaluate this approach using aerial imagery from two geographic areas for the task of mapping urban gardens. The experimental results show that SVANN provides better performance than OSFA in terms of precision, recall, and F1-score for identifying urban gardens.

* Accepted at the 1st ACM SIGKDD Workshop on Deep Learning for Spatiotemporal Data, Applications, and Systems (DeepSpatial 2020), San Diego, CA, August 24, 2020
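
To illustrate the SVANN idea of one model per geographic area (versus a single OSFA model), here is a minimal sketch. The zone partitioning, model factory, and data are hypothetical placeholders rather than the paper's implementation, which uses deep neural networks on aerial imagery.

```python
# Minimal sketch of the SVANN idea: one model per geographic zone,
# with predictions routed by the zone of each input. All names below
# (make_model, the zone datasets) are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

def make_model():
    # Stand-in for the aerial-imagery classifier used in the paper.
    return LogisticRegression(max_iter=1000)

# Placeholder training data keyed by geographic zone.
rng = np.random.default_rng(0)
zone_data = {
    "zone_a": (rng.normal(size=(300, 8)), rng.integers(0, 2, 300)),
    "zone_b": (rng.normal(size=(300, 8)), rng.integers(0, 2, 300)),
}

# OSFA baseline: one model trained on all zones pooled together.
X_all = np.vstack([X for X, _ in zone_data.values()])
y_all = np.concatenate([y for _, y in zone_data.values()])
osfa_model = make_model().fit(X_all, y_all)

# SVANN: a distinct model per zone.
svann_models = {zone: make_model().fit(X, y) for zone, (X, y) in zone_data.items()}

def svann_predict(zone, X):
    """Route inference to the model trained for the input's zone."""
    return svann_models[zone].predict(X)

print(svann_predict("zone_a", zone_data["zone_a"][0][:5]))
```

Comparing precision, recall, and F1-score of the zone-wise predictions against the pooled OSFA baseline mirrors the evaluation described in the abstract.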