Abstract: This paper addresses the problem of learning reaction-diffusion (RD) systems from data while ensuring physical consistency and well-posedness of the learned models. Building on a regularization-based framework for structured model learning, we focus on learning parameterized reaction terms and investigate how to incorporate key physical properties, such as mass conservation and quasipositivity, directly into the learning process. Our main contributions are twofold: First, we propose techniques to systematically modify a given class of parameterized reaction terms such that the resulting terms inherently satisfy mass conservation and quasipositivity, ensuring that the learned RD systems preserve non-negativity and adhere to physical principles. These modifications also guarantee well-posedness of the resulting PDEs under additional regularity and growth conditions. Second, we extend existing theoretical results on regularization-based model learning to RD systems using these physically consistent reaction terms. Specifically, we prove that solutions to the learning problem converge to a unique, regularization-minimizing solution of a limit system even when conservation laws and quasipositivity are enforced. In addition, we provide approximation results for quasipositive functions, which are essential for constructing physically consistent parameterizations. These results advance the development of interpretable and reliable data-driven models for RD systems that align with fundamental physical laws.
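
To make the structural constraints concrete, the following schematic states both properties for a system $\partial_t u_i = d_i \Delta u_i + f_i(u)$, $i = 1, \dots, m$, together with two standard enforcement devices. The notation is ours for illustration; the paper's exact modifications may differ.

```latex
% Structural constraints on a reaction term f = (f_1, \dots, f_m):
\begin{align*}
  &\text{mass conservation:} &
    \sum_{i=1}^{m} f_i(u) &= 0
    && \text{for all } u \in [0,\infty)^m, \\
  &\text{quasipositivity:} &
    f_i(u) &\ge 0
    && \text{whenever } u \in [0,\infty)^m \text{ and } u_i = 0.
\end{align*}
% Illustrative enforcement devices for a parameterized family
% \tilde f_\theta: projection onto the conservation constraint,
%   f_{\theta,i} = \tilde f_{\theta,i} - \tfrac{1}{m} \sum_{j=1}^{m} \tilde f_{\theta,j},
% and the splitting
%   f_{\theta,i}(u) = g_{\theta,i}(u) - u_i\, h_{\theta,i}(u),
%   \qquad g_{\theta,i},\, h_{\theta,i} \ge 0,
% which is quasipositive by construction, since
% f_{\theta,i}(u) = g_{\theta,i}(u) \ge 0 whenever u_i = 0.
```

Note that the projection can break quasipositivity and the splitting alone does not yield conservation; enforcing both properties simultaneously is part of what the proposed modifications address.
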
Abstract: We show that suitably regular functions can be approximated in the $\mathcal{C}^1$-norm by both rational functions and rational neural networks, including approximation rates with respect to the width and depth of the network and the degree of the rational functions. As a consequence of our results, we further obtain $\mathcal{C}^1$-approximation results for rational neural networks with the $\text{EQL}^\div$ and ParFam architectures, both of which are important in particular in the context of symbolic regression for physical law learning.
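
For intuition, here is a minimal PyTorch sketch of a rational neural network: a fully connected network whose activation is an elementwise trainable rational function of type (3, 2), in the spirit of rational networks as introduced, e.g., by Boullé, Nakatsukasa and Townsend. The width, depth, initialization, and denominator parameterization below are our own illustrative assumptions, not the exact architecture analyzed in the paper.

```python
import torch
import torch.nn as nn

class Rational(nn.Module):
    """Trainable elementwise rational activation P(x)/Q(x) of type (3, 2).

    The denominator is parameterized as 1 + (q1*x + q0)**2, so it is
    strictly positive and the activation is smooth on all of R.
    """
    def __init__(self):
        super().__init__()
        self.p = nn.Parameter(torch.randn(4) * 0.1)  # numerator coefficients
        self.q = nn.Parameter(torch.randn(2) * 0.1)  # denominator coefficients

    def forward(self, x):
        num = ((self.p[3] * x + self.p[2]) * x + self.p[1]) * x + self.p[0]
        den = 1.0 + (self.q[1] * x + self.q[0]) ** 2
        return num / den

def rational_net(width: int = 16, depth: int = 3) -> nn.Sequential:
    """Fully connected 1D-to-1D network with rational activations."""
    dims = [1] + [width] * depth + [1]
    layers = []
    for i in range(len(dims) - 1):
        layers.append(nn.Linear(dims[i], dims[i + 1]))
        if i < len(dims) - 2:
            layers.append(Rational())
    return nn.Sequential(*layers)

# Example: evaluate the (untrained) network on a batch of inputs.
net = rational_net()
x = torch.linspace(-1.0, 1.0, 5).unsqueeze(1)
print(net(x).shape)  # torch.Size([5, 1])
```

Keeping the denominator strictly positive is one simple way to avoid poles during training; approximation results of the kind stated above then quantify how the achievable $\mathcal{C}^1$-error scales with width, depth, and rational degree.
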
Abstract: This paper addresses the problem of uniqueness in learning physical laws for systems of partial differential equations (PDEs). Contrary to most existing approaches, it considers a framework of structured model learning, where existing, approximately correct physical models are augmented with components that are learned from data. The main result of the paper is a uniqueness result that covers a large class of PDEs and a suitable class of neural networks used for approximating the unknown model components. The uniqueness result shows that, in the idealized setting of full, noiseless measurements, a unique identification of the unknown model components is possible as the regularization-minimizing solution of the PDE system. Furthermore, the paper provides a convergence result showing that model components learned on the basis of incomplete, noisy measurements approximate the ground-truth model component in the limit. These results are possible under specific properties of the approximating neural networks and due to a dedicated choice of regularization. With this, a practical contribution of this analytic paper is to provide a class of model learning frameworks, different from standard settings, in which uniqueness can be expected in the limit of full measurements.
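
Schematically, the setting can be summarized as follows; the notation is ours and simplifies the general PDE systems treated in the paper.

```latex
% A known model F is augmented by a learned component f_\theta,
%   \partial_t u = F(u) + f_\theta(u),
% and learning is a regularized data fit:
\begin{equation*}
  \min_{\theta}\;
  \big\| M(u_\theta) - z^{\delta} \big\|^{2} + \alpha\, R(\theta)
  \qquad \text{subject to} \qquad
  \partial_t u_\theta = F(u_\theta) + f_\theta(u_\theta),
\end{equation*}
% where M is the measurement operator, z^\delta the noisy data with
% noise level \delta, and R the regularization functional. Uniqueness
% refers to the limit of full, noiseless measurements, in which
% minimizers single out the regularization-minimizing model component.
```
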
Abstract: This work focuses on the analysis of fully connected feedforward ReLU neural networks as they approximate a given smooth function. In contrast to conventionally studied universal approximation properties under increasing architectures, e.g., in terms of the width or depth of the networks, we are concerned with the asymptotic growth of the parameters of approximating networks. Such results are of interest, e.g., for error analysis or consistency results for neural network training. The main result of our work is that, for a ReLU architecture with state-of-the-art approximation error, the realizing parameters grow at most polynomially. The obtained rate with respect to a normalized network size is compared to existing results and is shown to be superior in most cases, in particular for high-dimensional input.
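
As a self-contained numerical illustration of the theme (ours, not the paper's construction or proof), the classical Yarotsky ReLU construction approximating $x^2$ on $[0,1]$ gains accuracy with depth while all weights stay bounded by 4:

```python
# Yarotsky's construction: x**2 is approximated on [0, 1] by
# f_m(x) = x - sum_{s=1}^m g_s(x) / 4**s, where g_1 is the hat
# function g(x) = 2x - 4*relu(x - 0.5) and g_s = g(g_{s-1}).
# The sup-norm error equals 4**-(m+1), yet every weight in the
# realizing ReLU network lies in [-4, 4].
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def approx_square(x, m):
    """ReLU-network approximation of x**2 on [0, 1] with m hat levels."""
    out = x.copy()
    g = x.copy()
    for s in range(1, m + 1):
        g = 2.0 * g - 4.0 * relu(g - 0.5)  # sawtooth g_s on [0, 1]
        out = out - g / 4.0 ** s           # subtract scaled sawtooth
    return out

x = np.linspace(0.0, 1.0, 10_001)
for m in (1, 2, 4, 8):
    err = np.max(np.abs(approx_square(x, m) - x ** 2))
    print(f"m={m}: max error {err:.2e} (bound {4.0 ** -(m + 1):.2e})")
```

The paper's question is the analogous one for architectures achieving state-of-the-art rates for general smooth functions: how fast the realizing parameters may grow as the approximation error decreases.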