Abstract: Physics-Informed Neural Networks (PINNs) are a novel computational approach for solving partial differential equations (PDEs) with noisy and sparse initial and boundary data. However, efficient quantification of epistemic and aleatoric uncertainties in large multi-scale problems remains challenging. We propose \$PINN, a novel method for computing global uncertainty in PDEs within a Bayesian framework, obtained by combining local Bayesian Physics-Informed Neural Networks (BPINNs) with domain decomposition. Solution continuity across subdomains is obtained by imposing flux continuity across the interfaces of neighboring subdomains. To demonstrate the effectiveness of \$PINN, we conduct a series of computational experiments on PDEs in 1D and 2D spatial domains. Although we adopt conservative PINNs (cPINNs), the method can be seamlessly extended to other domain decomposition techniques. The results indicate that the proposed method recovers the global uncertainty by computing the local uncertainty in each subdomain exactly, and does so more efficiently because the uncertainty in each subdomain can be computed concurrently. The robustness of \$PINN is verified by adding up to 15% uncorrelated random noise to the training data and by testing different domain sizes.
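To make the interface coupling concrete, the following is a minimal deterministic sketch of cPINN-style domain decomposition for a 1D Poisson problem, with one network per subdomain and solution plus flux continuity enforced at the interface. It is an illustration under our own assumptions (PyTorch, toy problem on [0, 1] split at x = 0.5); the Bayesian inference over subdomain networks described in the abstract is deliberately omitted.

```python
# Minimal sketch: cPINN-style interface coupling for u'' = f on [0, 1], split at x = 0.5.
# Assumptions: PyTorch, toy MSE training; the Bayesian (BPINN) layer of $PINN is omitted.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_net():
    return nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                         nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))

net_a, net_b = make_net(), make_net()   # one network per subdomain

def grad(u, x):
    return torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]

f = lambda x: -(torch.pi ** 2) * torch.sin(torch.pi * x)   # source for exact u = sin(pi x)

def residual(net, x):
    x = x.requires_grad_(True)
    u = net(x)
    u_xx = grad(grad(u, x), x)
    return u_xx - f(x)

x_a = torch.rand(64, 1) * 0.5            # collocation points in [0, 0.5]
x_b = 0.5 + torch.rand(64, 1) * 0.5      # collocation points in [0.5, 1]
x_if = torch.tensor([[0.5]])             # interface point
x_bc = torch.tensor([[0.0], [1.0]])      # global boundary, u = 0

opt = torch.optim.Adam(list(net_a.parameters()) + list(net_b.parameters()), lr=1e-3)
for step in range(5000):
    opt.zero_grad()
    # PDE residuals inside each subdomain
    loss = residual(net_a, x_a).pow(2).mean() + residual(net_b, x_b).pow(2).mean()
    # boundary conditions (left boundary handled by net_a, right by net_b)
    loss = loss + net_a(x_bc[:1]).pow(2).mean() + net_b(x_bc[1:]).pow(2).mean()
    # interface coupling: solution continuity and flux (derivative) continuity
    xi = x_if.clone().requires_grad_(True)
    ua, ub = net_a(xi), net_b(xi)
    loss = loss + (ua - ub).pow(2).mean() + (grad(ua, xi) - grad(ub, xi)).pow(2).mean()
    loss.backward()
    opt.step()
```

In the Bayesian setting described above, each subdomain's network would instead be sampled (e.g. by HMC), and the interface terms would enter the likelihood, allowing the per-subdomain posteriors to be explored concurrently.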
Abstract: The degree of polymerization (DP) is one of the indicators used to estimate the aging of polymer-based insulation systems, such as the cellulose insulation in power components. The main degradation mechanisms in polymers are hydrolysis, pyrolysis, and oxidation. Combined, these mechanisms cause a reduction of the DP. However, data availability for these types of problems is usually scarce. This study analyzes insulation aging using cellulose degradation data from power transformers. The aging of cellulose immersed in mineral oil inside power transformers is modeled with ordinary differential equations (ODEs). We recover the governing equations of the degradation system using Physics-Informed Neural Networks (PINNs) and symbolic regression. We apply PINNs to discover the unknown parameters of the Arrhenius equation in the Ekenstam ODE, which describes the cellulose contamination content and the temperature-dependent material aging process, for both synthetic data and real DP values. A modification of the Ekenstam ODE is given by Emsley's system of ODEs, in which the rate constant expressed by the Arrhenius equation decreases over time. We use PINNs and symbolic regression to recover the functional form of one of the ODEs of the system and to identify an unknown parameter.
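For context, the governing relations referenced above can be sketched as follows; these are the standard forms from the cellulose-aging literature, and the notation (A, E_a, k_10, k_2) is illustrative rather than taken from the paper.

```latex
% Standard forms from the cellulose-aging literature; notation is illustrative.
\begin{align}
  \frac{1}{DP(t)} - \frac{1}{DP_0} &= k\,t,
  \qquad k = A \exp\!\left(-\frac{E_a}{R\,T}\right)
  && \text{(Ekenstam with Arrhenius rate constant)} \\
  k(t) &= k_{10}\, e^{-k_2 t}
  \;\Longrightarrow\;
  \frac{1}{DP(t)} - \frac{1}{DP_0} = \frac{k_{10}}{k_2}\left(1 - e^{-k_2 t}\right)
  && \text{(Emsley's time-dependent modification)}
\end{align}
```

In the inverse-PINN setting, quantities such as E_a (or the rate constants) are treated as trainable parameters alongside the network weights, and symbolic regression is used to recover unknown functional forms from the fitted dynamics.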
Abstract: Power transformers are subjected to electrical currents and temperature fluctuations that, if not properly controlled, can lead to major deterioration of their insulation system. Monitoring the temperature of a power transformer is therefore fundamental to ensuring a long operational life. Models presented in the IEC 60076-7 and IEEE standards, for example, monitor the temperature by calculating the top-oil and hot-spot temperatures. However, these models are not very accurate and rely on the power transformer's properties. This paper focuses on finding an alternative method to predict the top-oil temperature given previous measurements. Given the large quantities of data available, machine learning methods for time series forecasting are analyzed and compared to the real measurements and to the corresponding prediction of the IEC standard. The methods tested are Artificial Neural Networks (ANNs), the Time-series Dense Encoder (TiDE), and Temporal Convolutional Networks (TCNs), using different combinations of historical measurements. Each of these methods outperforms the IEC 60076-7 model, and they are extended to estimate the temperature rise over ambient temperature. To enhance prediction reliability, we explore the application of quantile regression to construct prediction intervals for the expected top-oil temperature ranges. The best-performing model successfully estimates conditional quantiles that provide sufficient coverage.
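The quantile-regression step can be illustrated with a small sketch: a model with one output per quantile is trained with the pinball (quantile) loss, and the outer quantiles form the prediction interval. This is a toy illustration under our own assumptions (PyTorch, a plain MLP on synthetic lagged data instead of the TiDE/TCN models and real measurements used in the paper).

```python
# Minimal sketch of quantile regression for top-oil temperature intervals.
# Assumptions: PyTorch, toy MLP, synthetic lagged series; lags and quantiles are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)

QUANTILES = [0.05, 0.5, 0.95]            # lower bound, median, upper bound

def pinball_loss(pred, target, quantiles):
    """Average pinball (quantile) loss over all requested quantiles."""
    loss = 0.0
    for i, q in enumerate(quantiles):
        err = target - pred[:, i:i + 1]
        loss = loss + torch.mean(torch.maximum(q * err, (q - 1) * err))
    return loss / len(quantiles)

# Toy data: predict the next top-oil temperature from the last 24 measurements.
n, lags = 2000, 24
series = 40 + 10 * torch.sin(torch.linspace(0, 60, n)) + torch.randn(n)
X = torch.stack([series[i:i + lags] for i in range(n - lags)])
y = series[lags:].unsqueeze(1)

model = nn.Sequential(nn.Linear(lags, 64), nn.ReLU(), nn.Linear(64, len(QUANTILES)))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(500):
    opt.zero_grad()
    loss = pinball_loss(model(X), y, QUANTILES)
    loss.backward()
    opt.step()

# model(X)[:, 0] and model(X)[:, 2] then give an approximate 90% prediction interval.
```

Interval coverage is then assessed by checking how often the measured temperature falls between the predicted lower and upper quantiles on held-out data.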
Abstract: Physics-Informed Neural Networks (PINNs) are a powerful deep learning method capable of providing solutions and parameter estimates for physical systems. Given the complexity of their neural network structure, their convergence speed is still limited compared to numerical methods, especially in applications that model realistic systems. As in traditional neural networks, the initial weights are drawn from a random distribution, which can lead to severe convergence bottlenecks. To overcome this problem, we follow current studies that deal with optimal initial weights in traditional neural networks. In this paper, we use a convex optimization model to improve the initialization of the weights in PINNs and accelerate convergence. We investigate two optimization models as a first training step, referred to as pre-training: one involving only the boundaries and one including the physics. The optimization is focused on the first layer of the neural network part of the PINN model, while the other weights are randomly initialized. We test the methods on a practical application of the heat diffusion equation to model the temperature distribution in power transformers. The PINN model with boundary pre-training is the fastest-converging method at the current stage.
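The two-stage idea can be sketched as follows for a 1D heat equation: a pre-training phase that fits only the first layer to boundary/initial data, followed by full PINN training with the physics loss. This is a simplified stand-in under our own assumptions (PyTorch, plain gradient descent on the first layer instead of the convex optimization model described in the abstract; problem setup and names are illustrative).

```python
# Minimal sketch of boundary pre-training for a PINN on u_t = alpha * u_xx.
# Assumptions: PyTorch; gradient descent on the first layer replaces the paper's
# convex optimization step; all data and names are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)
alpha = 1.0

net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))

def pde_residual(x, t):
    xt = torch.cat([x, t], dim=1).requires_grad_(True)
    out = net(xt)
    grads = torch.autograd.grad(out, xt, torch.ones_like(out), create_graph=True)[0]
    u_x, u_t = grads[:, :1], grads[:, 1:]
    u_xx = torch.autograd.grad(u_x, xt, torch.ones_like(u_x), create_graph=True)[0][:, :1]
    return u_t - alpha * u_xx

# Boundary / initial data: u(x, 0) = sin(pi x), u(0, t) = u(1, t) = 0.
x0 = torch.rand(128, 1); t0 = torch.zeros_like(x0)
xb = torch.randint(0, 2, (128, 1)).float(); tb = torch.rand(128, 1)
data_x = torch.cat([torch.cat([x0, t0], 1), torch.cat([xb, tb], 1)])
data_u = torch.cat([torch.sin(torch.pi * x0), torch.zeros_like(xb)])

# Stage 1 (pre-training): update only the first layer on boundary/initial data.
pre_opt = torch.optim.Adam(net[0].parameters(), lr=1e-2)
for _ in range(1000):
    pre_opt.zero_grad()
    loss = (net(data_x) - data_u).pow(2).mean()
    loss.backward()
    pre_opt.step()

# Stage 2: full PINN training (all weights, boundary + physics losses).
xc, tc = torch.rand(256, 1), torch.rand(256, 1)   # interior collocation points
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(5000):
    opt.zero_grad()
    loss = (net(data_x) - data_u).pow(2).mean() + pde_residual(xc, tc).pow(2).mean()
    loss.backward()
    opt.step()
```

The "physics pre-training" variant mentioned in the abstract would additionally include a PDE residual term in Stage 1; the comparison in the paper is between these two pre-training objectives.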