Abstract: Building on recent advances in scientific machine learning and generative modeling for computational fluid dynamics, we propose a conditional score-based diffusion model designed for multi-scenario fluid flow prediction. Our model integrates an energy constraint rooted in the statistical properties of turbulent flows, improving prediction quality with minimal training while enabling efficient, low-cost sampling. The method features a simple and general architecture that requires no problem-specific design, supports plug-and-play enhancements, and enables fast and flexible solution generation. It also provides an efficient conditioning mechanism that simplifies training across different scenarios without requiring a redesign of existing models. We further explore various stochastic differential equation formulations to demonstrate how careful design choices enhance performance. We validate the proposed methodology through extensive experiments on complex fluid dynamics datasets encompassing a variety of flow regimes and configurations. Results demonstrate that our model consistently achieves stable, robust, and physically faithful predictions, even under challenging turbulent conditions. With properly tuned parameters, it achieves accurate results across multiple scenarios while preserving key physical and statistical properties. We conclude with a comprehensive analysis of the impact of the stochastic differential equation formulation and a discussion of our approach across diverse fluid mechanics tasks.
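The abstract leaves the concrete sampler unspecified; as a minimal sketch, the snippet below shows one way conditional score-based sampling with a soft energy-constraint guidance term can look, assuming a variance-preserving SDE with constant beta, a placeholder score_fn standing in for the trained conditional network, and mean squared amplitude as a stand-in energy statistic (score_fn, energy_penalty_grad, and target_energy are illustrative names, not the paper's actual components).

```python
import numpy as np

def score_fn(x, cond, t):
    # Placeholder for the trained conditional score network s_theta(x, cond, t).
    # Here: the analytic score of a standard normal, for illustration only.
    return -x

def energy_penalty_grad(x, target_energy):
    # Gradient of the soft penalty (E(x) - E*)^2 with E(x) = mean(x^2),
    # a stand-in for a turbulence energy statistic.
    return 4.0 * (np.mean(x**2) - target_energy) * x / x.size

def sample(cond, shape, target_energy, n_steps=500, beta=1.0, lam=0.1, seed=0):
    # Reverse-time Euler-Maruyama sampling for the VP-SDE
    #   dx = -0.5 * beta * x dt + sqrt(beta) dW,
    # using the guided score s' = s_theta - lam * grad(penalty).
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(shape)          # Gaussian prior at t = 1
    dt = 1.0 / n_steps
    for i in range(n_steps):
        t = 1.0 - i * dt
        s = score_fn(x, cond, t) - lam * energy_penalty_grad(x, target_energy)
        drift = -0.5 * beta * x - beta * s  # f(x, t) - g(t)^2 * score
        x = x - drift * dt + np.sqrt(beta * dt) * rng.standard_normal(shape)
    return x

# Usage: one 64x64 "snapshot" conditioned on a coarse 8x8 input field.
snapshot = sample(cond=np.zeros((8, 8)), shape=(64, 64), target_energy=1.0)
```

In this sketch the constraint enters only as an extra gradient added to the learned score, which illustrates the plug-and-play character claimed above: swapping the SDE or the energy statistic changes only the drift and energy_penalty_grad, not the network or its training.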
Abstract: This paper introduces algorithms to select or design kernels in Gaussian process regression/kriging surrogate modeling techniques. We adopt the setting of kernel-method solutions in ad hoc functional spaces, namely Reproducing Kernel Hilbert Spaces (RKHS), to solve the problem of approximating a regular target function given observations of it, i.e., supervised learning. A first class of algorithms is Kernel Flows, which was introduced in the context of classification in machine learning. It can be seen as a nested cross-validation procedure whereby a "best" kernel is selected such that the loss of accuracy incurred by removing some part of the dataset (typically half of it) is minimized. A second class of algorithms, called spectral kernel ridge regression, aims at selecting a "best" kernel such that the norm of the function to be approximated is minimal in the associated RKHS. Within the framework of Mercer's theorem, we obtain an explicit construction of that "best" kernel in terms of the main features of the target function. Both approaches to learning kernels from data are illustrated by numerical examples on synthetic test functions and on a classical test case in turbulence modeling validation for transonic flows around a two-dimensional airfoil.
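For concreteness, here is a minimal numerical sketch of the Kernel Flows selection criterion on a synthetic 1-D function, assuming a Gaussian (RBF) kernel with a single bandwidth parameter theta and a grid search in place of the gradient-based optimization used in practice. The criterion rho = 1 - ||u_half||^2_K / ||u_full||^2_K measures the relative loss of accuracy incurred by interpolating only a random half of the data.

```python
import numpy as np

def gaussian_kernel(X1, X2, theta):
    # RBF kernel k(x, x') = exp(-||x - x'||^2 / (2 * theta^2)).
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / theta**2)

def rkhs_norm_sq(X, y, theta, nugget=1e-8):
    # Squared RKHS norm of the kernel interpolant: y^T K(X, X)^{-1} y.
    K = gaussian_kernel(X, X, theta) + nugget * np.eye(len(X))
    return y @ np.linalg.solve(K, y)

def kernel_flow_rho(X, y, theta, rng):
    # rho = 1 - ||u_half||^2 / ||u_full||^2: the relative loss of accuracy
    # from interpolating only a random half of the dataset.
    idx = rng.choice(len(X), size=len(X) // 2, replace=False)
    return 1.0 - rkhs_norm_sq(X[idx], y[idx], theta) / rkhs_norm_sq(X, y, theta)

# Toy 1-D example: pick the bandwidth that minimizes the average rho.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(40, 1))
y = np.sin(3 * X[:, 0]) + 0.01 * rng.standard_normal(40)
thetas = np.linspace(0.05, 1.0, 20)
rhos = [np.mean([kernel_flow_rho(X, y, t, rng) for _ in range(50)])
        for t in thetas]
print("selected theta:", thetas[int(np.argmin(rhos))])
```

The same half-versus-full comparison extends to richer kernel parameterizations. The spectral kernel ridge regression approach instead scores candidate kernels by the RKHS norm of the approximant itself, i.e., the quantity y^T K(X, X)^{-1} y that rkhs_norm_sq above already computes.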