Abstract: The early stopping strategy consists of halting the training of a neural network (NN) on a set $S$ of input data before the training error reaches its minimum. The advantage is that the NN then retains good generalization properties, i.e. it gives good predictions on data outside $S$, and a good estimate of the statistical error (``population loss'') is obtained. We give here an analytical estimate of the optimal stopping time, expressed essentially in terms of the initial training error vector and the eigenvalues of the ``neural tangent kernel''. This yields an upper bound on the population loss which is well suited to the underparameterized regime (where the number of parameters is moderate compared with the number of data points). Our method is illustrated on the example of an NN simulating the MPC control of a Van der Pol oscillator.
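For intuition, here is a minimal worked sketch under the standard linearized (NTK-regime) gradient-flow assumption; the notation is ours, not the abstract's: $e(t)$ denotes the training error vector and $K$ the NTK Gram matrix with eigenpairs $(\lambda_i, v_i)$.
\[
\dot e(t) = -K\, e(t), \qquad e(t) = e^{-tK}\, e(0), \qquad \|e(t)\|^2 = \sum_i e^{-2\lambda_i t}\, \langle e(0), v_i \rangle^2 ,
\]
so a finite stopping time trades the fast decay of $e(0)$ along the large eigenvalues of $K$ against overfitting (noise absorption) along the small ones.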
Abstract: Many dynamical systems are subject to stochastic influences, such as random excitations, noise, and unmodeled behavior. Tracking the system's state and parameters based on a physical model is a common task for which filtering algorithms, such as the Kalman filter and its non-linear extensions, are typically used. However, many of these filters rely on assumptions about the transition probabilities or the covariance model, which can lead to inaccuracies for non-linear systems. We show the application of a stochastic coupling filter that can approximate arbitrary transition densities under non-Gaussian noise. The filter is based on transport maps, which couple the approximating densities to a user-chosen reference density, allowing for straightforward sampling and evaluation of probabilities.
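As an illustration only (not the authors' filter), the following is a minimal one-dimensional sketch of the transport-map idea in Python: a strictly monotone map couples a target density to a standard-normal reference, so sampling reduces to pushing reference samples through the map, and density evaluation reduces to the change-of-variables formula. The map \texttt{T} and all function names here are hypothetical.
\begin{verbatim}
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def T(z):
    # Hypothetical strictly increasing transport map: pushes the
    # standard-normal reference forward to a skewed target density.
    return z + 0.5 * z**2 / (1.0 + z**2)

def dT(z):
    # Derivative of T; positive everywhere, so T is invertible.
    return 1.0 + z / (1.0 + z**2)**2

def sample(n, seed=0):
    # Sampling the coupled density = pushing reference samples through T.
    rng = np.random.default_rng(seed)
    return T(rng.standard_normal(n))

def pdf(x):
    # Change of variables: q(x) = p_ref(T^{-1}(x)) / T'(T^{-1}(x)).
    z = brentq(lambda u: T(u) - x, -50.0, 50.0)  # invert the monotone map
    return norm.pdf(z) / dT(z)

xs = sample(10_000)
print(xs.mean(), pdf(0.0))
\end{verbatim}
In an actual coupling filter the map would be parameterized (e.g., a monotone triangular map) and fitted to samples of the transition density; the push-forward sampling and change-of-variables evaluation shown above carry over unchanged.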