Abstract: Many engineering tasks require solving families of nonlinear constrained optimization problems, parametrized in setting-specific variables. This is computationally demanding, particularly if solutions must be computed across strongly varying parameter values, e.g., in real-time control or for model-based design. We therefore propose to learn the mapping from parameters to the primal optimal solutions and their corresponding duals using neural networks, yielding a dense estimate in contrast to gridded approaches. Our approach, Optimality-informed Neural Networks (OptINNs), combines (i) a KKT-residual loss that penalizes violations of the first-order optimality conditions under standard constraint qualification assumptions, and (ii) problem-specific output activations that enforce simple inequality constraints (e.g., box-type/positivity) by construction. This design reduces data requirements, enables the prediction of dual variables, and improves feasibility and closeness to optimality compared to penalty-only training. Taking quadratic penalties as a baseline, since this approach has previously been proposed for the considered problem class in the literature, our method simplifies hyperparameter tuning and attains tighter adherence to the optimality conditions. We evaluate OptINNs on different nonlinear optimization problems ranging from low to high dimensions. On small problems, OptINNs match the quadratic-penalty baseline in primal accuracy while additionally predicting dual variables with low error. On larger problems, OptINNs achieve lower constraint violations and lower primal error than neural networks trained with the quadratic-penalty method. These results suggest that embedding feasibility and optimality into the network architecture and loss can make learning-based surrogates more accurate, feasible, and data-efficient for parametric optimization.
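
As a rough illustration of the kind of KKT-residual loss described in this abstract (a minimal sketch, not the authors' implementation), consider the toy parametric problem min_x 0.5*||x - p||^2 subject to x >= 0. The NumPy snippet below evaluates a squared residual of the first-order optimality conditions for predicted primal variables x and duals lam; in the described setting, both would be network outputs, with a positivity-enforcing output activation keeping lam >= 0 by construction. The function name kkt_residual and the specific toy problem are chosen here purely for illustration.

import numpy as np

def kkt_residual(x, lam, p):
    """Squared KKT residual for the toy problem
    min_x 0.5*||x - p||^2  s.t.  x >= 0  (constraints g(x) = -x <= 0).
    x and lam would be neural-network outputs; a positivity-enforcing
    activation (e.g., softplus) would keep lam >= 0 by construction."""
    g = -x                                  # inequality constraints g(x) <= 0
    stationarity = (x - p) - lam            # grad f + J_g^T lam, with J_g = -I
    primal_violation = np.maximum(g, 0.0)   # only positive parts violate g <= 0
    complementarity = lam * g               # lam_i * g_i should vanish at optimality
    return (np.sum(stationarity**2)
            + np.sum(primal_violation**2)
            + np.sum(complementarity**2))

# For p = [1.0, -2.0] the optimizer is x* = [1.0, 0.0] with duals lam* = [0.0, 2.0].
p = np.array([1.0, -2.0])
print(kkt_residual(np.array([1.0, 0.0]), np.array([0.0, 2.0]), p))  # ~0.0
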
Abstract: Numerous applications necessitate the computation of numerical solutions to differential equations across a wide range of initial conditions and system parameters, which drives the demand for efficient yet accurate numerical integration methods. This study proposes a neural network (NN) enhancement of classical numerical integrators. NNs are trained to learn integration errors, which are then used as additive correction terms in numerical schemes. The performance of these enhanced integrators is compared with well-established methods through numerical studies, with a particular emphasis on computational efficiency. Analytical properties are examined in terms of local errors and backward error analysis. Embedded Runge-Kutta schemes are then employed to develop enhanced integrators that mitigate generalization risk, ensuring that the neural network's evaluation in previously unseen regions of the state space does not destabilize the integrator. The enhanced integrators are guaranteed to perform at least as well as the underlying classical Runge-Kutta schemes. The effectiveness of the proposed approaches is demonstrated through extensive numerical studies using a realistic model of a wind turbine, with parameters derived from the established simulation framework OpenFAST.
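
To make the additive-correction idea concrete, the following is a minimal sketch (illustrative only, not the study's implementation) of an explicit Euler step augmented by a learned correction term. Here, corrector stands in for a trained neural network approximating the local integration error of the base scheme; with a zero corrector the step reduces to plain explicit Euler.

import numpy as np

def corrected_euler_step(f, x, t, h, corrector):
    """One explicit Euler step with an additive learned correction.
    corrector(x, t, h) stands in for a trained NN approximating the
    local integration error of the base scheme."""
    return x + h * f(t, x) + corrector(x, t, h)

# Toy usage on dx/dt = -x with an (untrained) zero corrector.
f = lambda t, x: -x
corrector = lambda x, t, h: np.zeros_like(x)
x, t, h = np.array([1.0]), 0.0, 0.1
for _ in range(10):
    x = corrected_euler_step(f, x, t, h, corrector)
    t += h
print(x)  # approximates exp(-1) ~ 0.368 up to the explicit Euler error
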




Abstract: Algebraic differentiators have attracted much interest in recent years. Their simple implementation as classical finite impulse response digital filters and systematic tuning guidelines may help to solve challenging problems, including, but not limited to, nonlinear feedback control, model-free control, and fault diagnosis. This contribution introduces the open-source toolbox AlgDiff for the design, analysis, and discretisation of algebraic differentiators.
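
For orientation only (this is not AlgDiff's API), the sketch below shows the generic mechanism the abstract refers to: estimating a derivative by convolving a sampled signal with finite impulse response filter coefficients. The central-difference taps used here are a simple stand-in for the coefficients an algebraic differentiator would provide.

import numpy as np

def fir_derivative(y, coeffs, dt):
    """Estimate the first derivative of a sampled signal by convolving it
    with FIR filter coefficients, as algebraic differentiators do.
    The taps passed in below are a central-difference stand-in, not the
    filter coefficients that AlgDiff would compute."""
    return np.convolve(y, coeffs, mode="valid") / dt

t = np.linspace(0.0, 1.0, 101)
dt = t[1] - t[0]
y = np.sin(2 * np.pi * t)
coeffs = np.array([0.5, 0.0, -0.5])   # central difference written as FIR taps
dy = fir_derivative(y, coeffs, dt)    # aligns with the interior samples t[1:-1]
print(np.max(np.abs(dy - 2 * np.pi * np.cos(2 * np.pi * t[1:-1]))))  # small error
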