



Abstract: Inertial confinement fusion (ICF) experiments are designed using computer simulations that are approximations of reality, and therefore must be calibrated to accurately predict experimental observations. In this work, we propose a novel nonlinear technique for calibrating from simulations to experiments, or from low fidelity simulations to high fidelity simulations, via "transfer learning". Transfer learning is a commonly used technique in the machine learning community, in which models trained on one task are partially retrained to solve a separate, but related task, for which there is a limited quantity of data. We introduce the idea of hierarchical transfer learning, in which neural networks trained on low fidelity models are calibrated to high fidelity models, then to experimental data. This technique essentially bootstraps the calibration process, enabling the creation of models which predict high fidelity simulations or experiments with minimal computational cost. We apply this technique to a database of ICF simulations and experiments carried out at the Omega laser facility. Transfer learning with deep neural networks enables the creation of models that are more predictive of Omega experiments than simulations alone. The calibrated models accurately predict future Omega experiments, and are used to search for new, optimal implosion designs.
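
The calibration step at the heart of this approach can be illustrated with a short sketch: a surrogate network is first trained on plentiful low fidelity simulations, then its early layers are frozen and only the final layer is retrained on the scarce higher fidelity data. The PyTorch sketch below is a minimal illustration; the layer sizes, learning rate, and placeholder data are assumptions, not values from the study. The hierarchical variant simply repeats the freeze-and-retrain step, first from low to high fidelity simulations and then from high fidelity simulations to experiments.

import torch
import torch.nn as nn

# Hypothetical surrogate: maps 8 design parameters to 4 observables,
# assumed to have been trained already on a large low-fidelity database.
model = nn.Sequential(
    nn.Linear(8, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 4),
)

# Transfer learning step: freeze the early layers so the learned
# low-fidelity feature representation is preserved...
for param in model[:-1].parameters():
    param.requires_grad = False

# ...and retrain only the final layer on the small high-fidelity
# (or experimental) dataset. Placeholder data stands in for real runs.
x_hf = torch.randn(20, 8)   # 20 high-fidelity input points (illustrative)
y_hf = torch.randn(20, 4)   # corresponding observables (illustrative)

optimizer = torch.optim.Adam(model[-1].parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(500):
    optimizer.zero_grad()
    loss = loss_fn(model(x_hf), y_hf)
    loss.backward()
    optimizer.step()

Because only the last layer's weights are updated, the calibration requires very few high-fidelity samples relative to training a network from scratch.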




Abstract: In this work, sequence-to-sequence (seq2seq) models, originally developed for language translation, are used to predict the temporal evolution of complex, multi-physics computer simulations. The predictive performance of seq2seq models is compared to state transition models for datasets generated with multi-physics codes with varying levels of complexity, from simple 1D diffusion calculations to simulations of inertial confinement fusion implosions. Seq2seq models demonstrate the ability to accurately emulate complex systems, enabling the rapid estimation of the evolution of quantities of interest in computationally expensive simulations.
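
A minimal encoder-decoder sketch conveys the idea: an LSTM encoder compresses the observed early-time history of the simulated quantities into a hidden state, and an LSTM decoder rolls that state forward autoregressively to predict later times. The PyTorch code below is a hedged illustration; the feature count, hidden width, and sequence lengths are assumptions rather than details from the paper.

import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Minimal encoder-decoder for multivariate time series emulation."""
    def __init__(self, n_features=4, hidden=64):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_features)

    def forward(self, src, horizon):
        # Summarize the observed history into the LSTM state.
        _, state = self.encoder(src)
        # Roll the decoder forward, feeding each prediction back in.
        step = src[:, -1:, :]          # last observed timestep
        outputs = []
        for _ in range(horizon):
            out, state = self.decoder(step, state)
            step = self.head(out)
            outputs.append(step)
        return torch.cat(outputs, dim=1)

# Example: predict 20 future steps from 10 observed steps of 4 quantities.
model = Seq2Seq()
history = torch.randn(8, 10, 4)        # (batch, time, features)
future = model(history, horizon=20)    # (8, 20, 4)

Feeding predictions back into the decoder lets a single trained model emulate the full time evolution of a simulation from a short observed prefix.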




Abstract: In this work, a novel, automated process for constructing and initializing deep feed-forward neural networks based on decision trees is presented. The proposed algorithm maps a collection of decision trees trained on the data into a collection of initialized neural networks, with the structures of the networks determined by the structures of the trees. The tree-informed initialization acts as a warm-start to the neural network training process, resulting in efficiently trained, accurate networks. These models, referred to as "deep jointly-informed neural networks" (DJINN), demonstrate high predictive performance for a variety of regression and classification datasets, and display comparable performance to Bayesian hyper-parameter optimization at a lower computational cost. By combining the user-friendly features of decision tree models with the flexibility and scalability of deep neural networks, DJINN is an attractive algorithm for training predictive models on a wide range of complex datasets.
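
The tree-to-network mapping can be sketched as follows, with the caveat that this is a simplified illustration rather than the published DJINN algorithm: a decision tree is fit to the data, its depth sets the number of hidden layers, the number of nodes at each depth suggests the layer widths, and small initial weights provide the warm start. DJINN additionally seeds specific nonzero weights along each decision path, which is omitted here. The sketch assumes scikit-learn and PyTorch, with synthetic data in place of a real dataset.

import numpy as np
import torch.nn as nn
from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeRegressor

# Fit a shallow decision tree; its structure will set the network shape.
X, y = make_regression(n_samples=500, n_features=8, noise=0.1, random_state=0)
tree = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X, y)

# Compute the depth of each tree node (children are stored after parents,
# so a single forward pass suffices).
depth = tree.get_depth()
node_depths = np.zeros(tree.tree_.node_count, dtype=int)
for node in range(tree.tree_.node_count):
    for child in (tree.tree_.children_left[node], tree.tree_.children_right[node]):
        if child != -1:
            node_depths[child] = node_depths[node] + 1

# One hidden layer per tree level; width = node count at that level
# (a simplification of the published width rule).
widths = [max(2, int((node_depths == d).sum())) for d in range(1, depth + 1)]

layers, n_in = [], X.shape[1]
for w in widths:
    layers += [nn.Linear(n_in, w), nn.ReLU()]
    n_in = w
layers.append(nn.Linear(n_in, 1))
net = nn.Sequential(*layers)

# Warm-start flavor: small random weights so training refines, rather than
# rediscovers, the tree-shaped architecture. net can now be trained as usual.
for m in net.modules():
    if isinstance(m, nn.Linear):
        nn.init.normal_(m.weight, std=0.1)
        nn.init.zeros_(m.bias)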