Incorporating group symmetry directly into the learning process has proved to be an effective guideline for model design. By producing features that are guaranteed to transform covariantly with the group actions on the inputs, group-equivariant convolutional neural networks (G-CNNs) achieve significantly improved generalization performance in learning tasks with intrinsic symmetry. General theory and practical implementations of G-CNNs have been studied for planar images under rotation or scaling transformations, but only individually. In this paper, we present a roto-scale-translation equivariant CNN (RST-CNN) that is guaranteed to achieve equivariance jointly over these three groups via coupled group convolutions. Moreover, since symmetry transformations in reality are rarely perfect and typically subject to input deformation, we provide a stability analysis of the equivariance of the representation under input distortion, which motivates a truncated expansion of the convolutional filters in (pre-fixed) low-frequency spatial modes. The resulting model provably achieves deformation-robust RST equivariance, i.e., the RST symmetry is still "approximately" preserved when the transformation is "contaminated" by a nuisance data deformation, a property that is especially important for out-of-distribution generalization. Numerical experiments on MNIST, Fashion-MNIST, and STL-10 demonstrate that the proposed model yields remarkable gains over prior art, especially in the small-data regime where both rotation and scaling variations are present within the data.
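As a minimal illustration of the group-convolution idea behind such equivariance (a toy over the discrete C4 rotation group only, not the paper's joint roto-scale-translation construction), the following sketch lifts a planar image to a rotation-indexed feature map by correlating it with four rotated copies of one base filter; rotating the input then rotates each feature map and cyclically shifts the group channel instead of scrambling the features:

```python
import numpy as np

def conv2d_same(image, kernel):
    """Zero-padded 'same' cross-correlation."""
    k = kernel.shape[0]
    padded = np.pad(image, k // 2)
    out = np.empty_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + k, j:j + k] * kernel)
    return out

def c4_lifting_conv(image, base_filter):
    """Correlate the image with the four rotated copies of one base filter,
    yielding a feature map indexed by the C4 rotation group."""
    return np.stack([conv2d_same(image, np.rot90(base_filter, r))
                     for r in range(4)])

rng = np.random.default_rng(0)
img = rng.standard_normal((16, 16))
filt = rng.standard_normal((3, 3))

out = c4_lifting_conv(img, filt)
out_rot = c4_lifting_conv(np.rot90(img), filt)
# Equivariance: out_rot[r] equals np.rot90(out[(r - 1) % 4]) for every r.
```

The equivariance check in the last comment holds exactly here because discrete 90-degree rotations map the pixel grid (and the zero padding) onto itself.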
Deep Operator Networks~(DeepONets) are a fundamentally different class of neural networks trained to approximate nonlinear operators, including the solution operators of parametric partial differential equations (PDEs). DeepONets have shown remarkable approximation and generalization capabilities even when trained with relatively small datasets. However, the performance of DeepONets deteriorates when the training data is polluted with noise, a scenario that occurs very often in practice. To enable DeepONet training with noisy data, we propose using the Bayesian framework of replica-exchange Langevin diffusion. Such a framework uses two particles, one for exploring and another for exploiting the loss-function landscape of DeepONets. We show that the proposed framework's exploration and exploitation capabilities enable (1) improved training convergence for DeepONets in noisy scenarios and (2) attaching an uncertainty estimate to the predicted solutions of parametric PDEs. In addition, we show that replica-exchange Langevin diffusion, remarkably, also improves the DeepONet's mean prediction accuracy in noisy scenarios compared with vanilla DeepONets trained with state-of-the-art gradient-based optimization algorithms (e.g., Adam). To reduce the potentially high computational cost of replica exchange, we propose an accelerated training framework for replica-exchange Langevin diffusion that exploits the neural network architecture of DeepONets to reduce its computational cost by up to 25% without compromising the proposed framework's performance. Finally, we illustrate the effectiveness of the proposed Bayesian framework through a series of experiments on four parametric PDE problems.
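The two-particle explore/exploit mechanism can be sketched on a scalar toy loss (all step sizes, temperatures, and the double-well landscape below are illustrative choices of ours, not the paper's DeepONet setting): a low-temperature particle exploits while a high-temperature particle explores, and a Metropolis-style swap hands the exploiter any lower-energy region the explorer finds.

```python
import numpy as np

# Toy double-well "loss": the global minimum (near x = -1.03) is separated
# from a local minimum (near x = 0.96) by an energy barrier.
U = lambda x: (x**2 - 1)**2 + 0.3 * x
grad_U = lambda x: 4 * x * (x**2 - 1) + 0.3

def replica_exchange_langevin(x_low, x_high, lr=1e-2, tau_low=0.01,
                              tau_high=1.0, n_steps=2000, seed=0):
    """Two Langevin particles: a cold exploiter and a hot explorer, with
    swaps accepted by the usual replica-exchange log-probability."""
    rng = np.random.default_rng(seed)
    for _ in range(n_steps):
        x_low = (x_low - lr * grad_U(x_low)
                 + np.sqrt(2 * lr * tau_low) * rng.standard_normal())
        x_high = (x_high - lr * grad_U(x_high)
                  + np.sqrt(2 * lr * tau_high) * rng.standard_normal())
        # Swap when the hot replica sits in a lower-energy region.
        log_swap = (1 / tau_low - 1 / tau_high) * (U(x_low) - U(x_high))
        if np.log(rng.uniform()) < log_swap:
            x_low, x_high = x_high, x_low
    return x_low

# Started at the local minimum, the exploiter escapes to the global basin.
x_final = replica_exchange_langevin(x_low=0.96, x_high=0.96)
```

A single cold Langevin chain started at the local minimum would stay trapped there; the swap move is what transports it across the barrier.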
The present study develops a physics-constrained neural network (PCNN) to predict sequential patterns and motions of multiphase flows (MPFs), which involve strong interactions among the various fluid phases. To predict the order parameters, which locate the individual phases, at future times, a conditional neural process combined with long short-term memory (CNP-LSTM) is applied to quickly infer the dynamics of the phases after encoding only a few observations. After that, the multiphase consistent and conservative boundedness mapping algorithm (MCBOM) is implemented to correct the order parameters predicted by CNP-LSTM so that they strictly satisfy mass conservation, the requirement that the volume fractions of the phases sum to unity, the consistency of reduction, and the boundedness of the order parameters. Then, the density of the fluid mixture is updated from the corrected order parameters. Finally, the velocity at future times is predicted by a physics-informed CNP-LSTM (PICNP-LSTM), in which conservation of momentum is included in the loss function and the observed density and velocity are the inputs. The proposed PCNN for MPFs sequentially performs (CNP-LSTM)-(MCBOM)-(PICNP-LSTM), which avoids unphysical behaviors of the order parameters, accelerates convergence, and requires fewer data to make predictions. Numerical experiments demonstrate that the proposed PCNN predicts MPFs effectively.
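The flavor of the correction-then-density-update stage can be conveyed with a much simpler stand-in (this clip-and-renormalize projection is our illustrative assumption, not the actual MCBOM algorithm): network-predicted volume fractions are forced into [0, 1] and made to sum to one at every grid point before the mixture density is recomputed.

```python
import numpy as np

def project_volume_fractions(c):
    """Clip each predicted volume fraction to [0, 1] and renormalize so the
    phases sum to one at every grid point; points where every phase clips to
    zero fall back to a uniform mixture."""
    c = np.clip(c, 0.0, 1.0)
    total = c.sum(axis=0, keepdims=True)
    uniform = np.full_like(c, 1.0 / c.shape[0])
    return np.where(total > 0, c / np.where(total > 0, total, 1.0), uniform)

def mixture_density(c, rho_phases):
    """Mixture density as the volume-fraction-weighted sum of phase densities."""
    return np.tensordot(rho_phases, c, axes=1)

rng = np.random.default_rng(0)
raw = rng.uniform(-0.2, 1.2, size=(3, 8, 8))   # raw (unphysical) predictions
frac = project_volume_fractions(raw)
rho = mixture_density(frac, np.array([1.0, 50.0, 1000.0]))
```

After the projection, the mixture density is guaranteed to lie between the lightest and heaviest phase densities, which is exactly the kind of unphysical behavior the correction step is meant to rule out.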
Deep learning-based surrogate modeling is becoming a promising approach for learning and simulating dynamical systems. Deep-learning methods, however, find learning stiff dynamics very challenging. In this paper, we develop DAE-PINN, the first effective deep-learning framework for learning and simulating the solution trajectories of nonlinear differential-algebraic equations (DAEs), which present a form of infinite stiffness and describe, for example, the dynamics of power networks. DAE-PINN bases its effectiveness on the synergy between implicit Runge-Kutta time-stepping schemes (designed specifically for solving DAEs) and physics-informed neural networks (PINNs), i.e., deep neural networks trained to satisfy the dynamics of the underlying problem. Furthermore, our framework (i) trains the neural network to satisfy the DAEs as (approximate) hard constraints using a penalty-based method and (ii) enables simulating DAEs over long time horizons. We showcase the effectiveness and accuracy of DAE-PINN by learning and simulating the solution trajectories of a three-bus power network.
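To make the classical half of that synergy concrete, here is backward Euler (the simplest implicit Runge-Kutta scheme) applied to a toy semi-explicit index-1 DAE, with a Newton solve of the coupled residual at each step; the toy equation and all numerical choices are ours, and the point is only the structure of the per-step implicit system that DAE-PINN learns to satisfy.

```python
import numpy as np

def backward_euler_step(yn, h, newton_iters=6):
    """One backward-Euler step for the toy index-1 DAE
         y' = -z,   0 = z - y**3,
    solving the coupled implicit residual with Newton's method."""
    y, z = yn, yn**3                      # initial Newton guess
    for _ in range(newton_iters):
        F = np.array([y - yn + h * z,     # discretized differential equation
                      z - y**3])          # algebraic constraint
        J = np.array([[1.0, h],
                      [-3.0 * y**2, 1.0]])
        dy, dz = np.linalg.solve(J, -F)
        y, z = y + dy, z + dz
    return y, z

# The constraint eliminates z, so effectively y' = -y**3 with y(0) = 1,
# whose exact solution is y(t) = 1 / sqrt(1 + 2 t).
h, y = 0.01, 1.0
for _ in range(100):
    y, _ = backward_euler_step(y, h)
```

Because the scheme is implicit, the algebraic constraint is enforced at the new time level on every step, which is what makes this family of methods suitable for the infinitely stiff DAE setting.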
In this work, we propose a robust Bayesian sparse learning algorithm based on Bayesian group Lasso with spike-and-slab priors for the discovery of partial differential equations with variable coefficients. Using samples drawn from the posterior distribution with a Gibbs sampler, we estimate the values of the coefficients together with their standard errors and confidence intervals. Beyond constructing error bars, this uncertainty quantification can also be employed to design new criteria for model selection and threshold setting, which makes our method more adjustable and robust in learning equations with time-dependent coefficients. Three criteria are introduced for model selection and threshold setting to identify the correct terms: the root mean square, the total error bar, and the group error bar. Moreover, three noise filters are integrated with the robust Bayesian sparse learning algorithm to improve results at higher noise levels. Through three examples, numerical results demonstrate that our method is more robust than sequential grouped threshold ridge regression and group Lasso in noisy situations.
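A scalar-coefficient simplification conveys how such a Gibbs sampler works (a Bernoulli-Gaussian spike-and-slab for plain linear regression; the hyperparameters and data below are illustrative, and the paper's version places the prior on groups of coefficients rather than scalars): each sweep samples, per candidate term, an inclusion indicator from its posterior odds and then the coefficient itself.

```python
import numpy as np

def spike_slab_gibbs(X, y, n_iter=400, sigma2=0.25, tau2=1.0, pi=0.2, seed=0):
    """Gibbs sampler for y = X beta + noise with a spike-and-slab prior:
    beta_j = 0 with prior probability 1 - pi, else beta_j ~ N(0, tau2)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = np.zeros(p)
    samples = []
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]        # residual without term j
            v = 1.0 / (X[:, j] @ X[:, j] / sigma2 + 1.0 / tau2)
            m = v * (X[:, j] @ r) / sigma2
            # Posterior inclusion probability: prior odds times Bayes factor.
            log_odds = (np.log(pi / (1 - pi)) + 0.5 * np.log(v / tau2)
                        + 0.5 * m * m / v)
            incl = rng.uniform() < 1.0 / (1.0 + np.exp(-np.clip(log_odds, -30, 30)))
            beta[j] = rng.normal(m, np.sqrt(v)) if incl else 0.0
        samples.append(beta.copy())
    return np.array(samples)

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 5))
beta_true = np.array([2.0, 0.0, -1.5, 0.0, 0.0])
y = X @ beta_true + 0.5 * rng.standard_normal(100)
post_mean = spike_slab_gibbs(X, y)[200:].mean(axis=0)
```

The posterior samples directly yield the error bars and inclusion frequencies that the model-selection and thresholding criteria are built on.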
Simultaneous EEG-fMRI acquisition and analysis technology has been widely used in various research fields of brain science. However, removing the ballistocardiogram (BCG) artifacts in this scenario remains a huge challenge. Because it is impossible to obtain clean and BCG-contaminated EEG signals at the same time, BCG artifact removal is a typical unpaired signal-to-signal problem. To solve this problem, this paper proposes a new GAN training model, the Single-Shot Reversible GAN (SSRGAN). The model allows bidirectional input to better combine the characteristics of the two types of signals, instead of using two independent models for bidirectional conversion as in previous work. Furthermore, the model is decomposed into multiple independent convolutional blocks with specific functions. Through additional training of these blocks, the local representation ability of the model is improved, thereby improving overall model performance. Experimental results show that, compared with existing methods, the proposed method removes BCG artifacts more effectively and retains useful EEG information.
We propose an adaptively weighted stochastic gradient Langevin dynamics (SGLD) algorithm, termed contour stochastic gradient Langevin dynamics (CSGLD), for Bayesian learning in big-data statistics. The proposed algorithm is essentially a \emph{scalable dynamic importance sampler}, which automatically \emph{flattens} the target distribution so that simulation of a multi-modal distribution is greatly facilitated. Theoretically, we prove a stability condition and establish the asymptotic convergence of the self-adapting parameter to a {\it unique fixed point}, regardless of the non-convexity of the original energy function; we also present an error analysis for the weighted averaging estimators. Empirically, the CSGLD algorithm is tested on multiple benchmark datasets, including CIFAR10 and CIFAR100. The numerical results indicate its superiority over existing state-of-the-art algorithms in training deep neural networks.
Evaluating the mechanical response of fiber-reinforced composites can be extremely time consuming and expensive. Machine learning (ML) techniques offer a means for faster predictions via models trained on existing input-output pairs and have exhibited success in composite research. This paper explores a fully convolutional neural network modified from StressNet, originally developed for linear elastic materials and extended here to a non-linear finite element (FE) setting, to predict the stress field in 2D slices of segmented tomography images of a fiber-reinforced polymer specimen. The network was trained and evaluated on data generated from FE simulations of the exact microstructure. The testing results show that the trained network accurately captures the characteristics of the stress distribution, especially on fibers, solely from the segmented microstructure images. Given the input microstructure, the trained model can make predictions within seconds in a single forward pass on an ordinary laptop, compared to the 92.5 hours needed to run the full FE simulation on a high-performance computing cluster. These results show promise for using ML techniques to conduct fast structural analysis of fiber-reinforced composites and suggest, as a corollary, that the trained model can be used to identify potential damage sites in fiber-reinforced polymers.
Continuous structural health monitoring (SHM) and integrated nondestructive evaluation (NDE) are important for ensuring the safe operation of high-risk engineering structures. Recently, piezoresistive nanocomposite materials have received much attention for SHM and NDE. These materials are self-sensing because their electrical conductivity changes in response to deformation and damage. Combined with electrical impedance tomography (EIT), it is possible to map these deleterious effects. However, EIT suffers from important limitations -- it is computationally expensive, provides indistinct information on damage shape, and can miss multiple damage sites if they are close together. In this article, we apply a novel neural network approach to quantify damage metrics such as size, number, and location from EIT data. This network is trained using a simulation routine calibrated to experimental data for a piezoresistive carbon nanofiber-modified epoxy. Our results show that the network can predict the number of damage sites with 99.2% accuracy, quantify damage size relative to the averaged radius with an average error of 2.46%, and quantify damage position relative to the domain length with an average error of 0.89%. These results are an important first step in translating the combination of self-sensing materials and EIT to real-world SHM and NDE.
Bayesian approaches have been successfully integrated into training deep neural networks. One popular family is stochastic gradient Markov chain Monte Carlo methods (SG-MCMC), which have gained increasing interest due to their scalability to large datasets and their ability to avoid overfitting. Although standard SG-MCMC methods have shown great performance in a variety of problems, they may be inefficient when the random variables in the target posterior densities have scale differences or are highly correlated. In this work, we present an adaptive Hessian-approximated stochastic gradient MCMC method that incorporates local geometric information while sampling from the posterior. The idea is to apply stochastic approximation to sequentially update a preconditioning matrix at each iteration. The preconditioner possesses second-order information and can guide the random walk of the sampler efficiently. Instead of computing and storing the full Hessian of the log posterior, we use a limited memory of the samples and their stochastic gradients to approximate the inverse Hessian-vector multiplication in the updating formula. Moreover, by smoothly optimizing the preconditioning matrix, our proposed algorithm asymptotically converges to the target distribution with a controllable bias under mild conditions. To reduce the training and testing computational burden, we adopt a magnitude-based weight pruning method to enforce the sparsity of the network. Our method is user-friendly and compatible with standard SG-MCMC updating rules, requiring only an additional preconditioner. The sparse approximation of the inverse Hessian alleviates storage and computational complexity for high-dimensional models. The bias introduced by stochastic approximation is controllable and can be analyzed theoretically. Numerical experiments are performed on several problems.
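The benefit of preconditioning a sampler on badly scaled posteriors can be sketched with a much simpler diagonal scheme (an RMSprop-style running second-moment estimate, a stand-in of ours for the limited-memory Hessian approximation described above): the same adaptive matrix rescales both the drift and the injected noise, so coordinates with very different scales mix at comparable rates.

```python
import numpy as np

def psgld(grad_log_p, x0, lr=1e-2, beta=0.99, eps=1e-5, n_steps=50000, seed=0):
    """Diagonal preconditioned SGLD: a running squared-gradient estimate v
    builds the preconditioner G, which scales drift and noise together."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    v = np.ones_like(x)                    # second-moment estimate (warm start)
    samples = np.empty((n_steps, x.size))
    for t in range(n_steps):
        g = grad_log_p(x)
        v = beta * v + (1 - beta) * g * g
        G = 1.0 / (np.sqrt(v) + eps)       # diagonal preconditioner
        x = x + 0.5 * lr * G * g + np.sqrt(lr * G) * rng.standard_normal(x.size)
        samples[t] = x
    return samples

# Badly scaled Gaussian target: variances 1 and 100.
target_var = np.array([1.0, 100.0])
grad_log_p = lambda x: -x / target_var
samples = psgld(grad_log_p, np.zeros(2))
post_var = samples[10000:].var(axis=0)
```

Without the preconditioner, a step size small enough for the narrow coordinate would make the wide coordinate mix roughly a hundred times slower; note also that, as in the abstract, the slowly adapted matrix introduces a small controllable bias in the sampled distribution.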