Abstract: Predicting multiphysics dynamics is computationally expensive and challenging due to the strong coupling of multi-scale, heterogeneous physical processes. While neural surrogates promise a paradigm shift, the field currently suffers from an "illusion of mastery", as repeatedly emphasized in high-profile commentaries: existing evaluations rely heavily on simplified, low-dimensional proxies, which fail to expose the models' inherent fragility in realistic regimes. To bridge this gap, we present REALM (REalistic AI Learning for Multiphysics), a rigorous benchmarking framework designed to test neural surrogates on challenging, application-driven reactive flows. REALM features 11 high-fidelity datasets, ranging from canonical multiphysics problems to complex propulsion and fire-safety scenarios, alongside a standardized end-to-end training and evaluation protocol that incorporates multiphysics-aware preprocessing and a robust rollout strategy. Using this framework, we systematically benchmark over a dozen representative surrogate-model families, including spectral operators, convolutional models, Transformers, pointwise operators, and graph/mesh networks, and identify three robust trends: (i) a scaling barrier governed jointly by dimensionality, stiffness, and mesh irregularity, leading to rapidly growing rollout errors; (ii) performance controlled primarily by architectural inductive biases rather than parameter count; and (iii) a persistent gap between nominal accuracy metrics and physically trustworthy behavior, in which models with high correlation scores still miss key transient structures and integral quantities. Taken together, REALM exposes the limits of current neural surrogates on realistic multiphysics flows and offers a rigorous testbed to drive the development of next-generation physics-aware architectures.
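Trend (i) hinges on autoregressive rollout: the surrogate is fed its own predictions, so one-step errors compound over the horizon. The sketch below illustrates such a rollout-error computation in principle; it is not REALM's actual protocol, and the `model` interface, state shapes, and relative-L2 metric are assumptions for illustration.

```python
import torch

def rollout_errors(model, u0, u_true):
    """Autoregressive rollout: feed the surrogate its own predictions
    and record the per-step relative L2 error against the reference
    trajectory u_true of shape (n_steps, *state_shape)."""
    u, errs = u0, []
    for target in u_true:
        u = model(u)  # one-step prediction becomes the next input
        rel = torch.linalg.vector_norm(u - target) / torch.linalg.vector_norm(target)
        errs.append(rel.item())
    return errs

# Toy usage: an identity "surrogate" evaluated on a random reference trajectory.
u0 = torch.randn(64, 64)
u_true = torch.randn(10, 64, 64)
print(rollout_errors(lambda u: u, u0, u_true))
```

Reporting the full per-step error curve, rather than a single one-step number, is what exposes the compounding behavior described in trend (i).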
Abstract: We present EngineBench, the first machine learning (ML)-oriented database to use high-quality experimental data for the study of turbulent flows inside combustion machinery. Prior datasets for ML in fluid mechanics are synthetic or use overly simplistic geometries. EngineBench comprises real-world particle image velocimetry (PIV) data that capture the turbulent airflow patterns in a specially designed optical engine. However, in PIV data from internal flows, such as those in engines, it is often challenging to achieve a full field of view, and large occlusions can be present. Designing optimal combustion systems requires insight into the turbulent flows in these obscured areas, which inpainting models can provide. Here we propose a novel inpainting task using random edge gaps, a technique that emphasises realism by introducing occlusions of random size and orientation at the edges of the PIV images. We test five ML methods on random edge gaps using pixel-wise, vector-based, and multi-scale performance metrics. We find that UNet-based models are more accurate than the industry-norm non-parametric approach and the context encoder at this task, for both small and large gap sizes. The dataset and inpainting task presented in this paper support the development of more general-purpose pre-trained ML models for engine design problems. The method comparisons allow for more informed selection of ML models for problems in experimental flow diagnostics. All data and code are publicly available at https://eng.ox.ac.uk/tpsrg/research/enginebench/.
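One way to picture the random-edge-gap task is as masking one side of a random line anchored on the image border, which yields occlusions of varying size and orientation that always touch an edge. The sketch below is an illustrative approximation of that idea only, not the authors' exact mask generator; the half-plane construction and the smaller-side heuristic are assumptions.

```python
import numpy as np

def random_edge_gap_mask(h, w, rng):
    """Return a boolean mask (True = occluded) covering a randomly
    sized and oriented region that touches the image border.
    Illustrative approximation, not the paper's exact generator."""
    yy, xx = np.mgrid[0:h, 0:w]
    # Anchor the cutting line at a random point on a random border side.
    anchors = [
        (0.0, rng.uniform(0, w - 1)),      # top edge
        (h - 1.0, rng.uniform(0, w - 1)),  # bottom edge
        (rng.uniform(0, h - 1), 0.0),      # left edge
        (rng.uniform(0, h - 1), w - 1.0),  # right edge
    ]
    py, px = anchors[rng.integers(4)]
    theta = rng.uniform(0, np.pi)           # random line orientation
    ny, nx = np.cos(theta), np.sin(theta)   # normal of the cutting line
    side = (yy - py) * ny + (xx - px) * nx > 0
    # Keep the smaller half-plane so the gap hugs the edge rather
    # than swallowing most of the field of view.
    return side if side.mean() < 0.5 else ~side

rng = np.random.default_rng(0)
mask = random_edge_gap_mask(128, 128, rng)
print(f"occluded fraction: {mask.mean():.2f}")
```

Masks of this kind would be applied to the PIV vector fields before training, and reconstructions would typically be scored only inside the occluded region.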