Abstract: In our earlier work, we introduced the principle of Tuning without Forgetting (TwF) for the sequential training of neural ODEs, in which training samples are added iteratively and the parameters are updated within the subspace of control functions that preserves, in a first-order approximation sense, the end-point mapping at previously learned samples on the manifold of output labels. In this letter, we prove that this parameter subspace is a Banach submanifold of finite codimension under nonsingular controls, and we characterize its tangent space. This shows that TwF corresponds to a continuation/deformation of the control function along the tangent space of this Banach submanifold, providing a theoretical foundation for its exact (beyond first-order) preservation of the mapping at previously learned samples, i.e., non-forgetting, during sequential training.
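For concreteness, a minimal formalization of these objects is sketched below; the end-point map $E_{x_i}$, the admissible control space $\mathcal{U}$, and the constraint set $\mathcal{M}$ are notation introduced here for illustration (and the projection onto the manifold of output labels is suppressed), not taken verbatim from the letter:
\[
E_{x_i}(u) = x_i(T;u), \qquad \mathcal{M} = \{\, u \in \mathcal{U} \;:\; E_{x_i}(u) = y_i,\ i = 1,\dots,N \,\},
\]
where $x_i(\cdot\,;u)$ solves $\dot{x} = f(x,u)$ from the initial condition $x_i$. Under nonsingularity of the control, the stated result is that $\mathcal{M}$ is a Banach submanifold of $\mathcal{U}$ of finite codimension, with tangent space
\[
T_u\mathcal{M} = \bigcap_{i=1}^{N} \ker DE_{x_i}(u),
\]
so a TwF update amounts to a deformation $s \mapsto u_s$ of the control with $\partial_s u_s \in T_{u_s}\mathcal{M}$, i.e., a path that never leaves $\mathcal{M}$.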
Abstract: Given a training set in the form of paired sets $(\mathcal{X},\mathcal{Y})$, we say that the control system $\dot{x} = f(x,u)$ has learned the paired set via the control $u^*$ if the system steers each point of $\mathcal{X}$ to its corresponding target in $\mathcal{Y}$. Most existing methods for finding a control function $u^*$ require learning a new control function whenever the training set is updated. To overcome this limitation, we introduce the concept of $\textit{tuning without forgetting}$. We develop $\textit{an iterative algorithm}$ that tunes the control function $u^*$ as the training set expands, so that points already in the paired set remain matched while new training samples are learned. More specifically, at each update of our method, the control $u^*$ is projected onto the kernel of the end-point mapping generated by the controlled dynamics at the learned samples. This keeps the end points of the previously learned samples fixed while additional samples are learned iteratively. Our work contributes to the scalability of control methods, offering a novel approach to handle training set expansions adaptively.
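As a rough illustration of the projection step, the following is a minimal sketch, not the authors' implementation: the scalar toy dynamics $\dot{x} = \tanh(\theta_k x)$, the piecewise-constant parametrization `theta`, the Euler integrator, the finite-difference Jacobians, and all numerical values are assumptions made here for readability.

```python
import numpy as np

# Toy sketch of the kernel-projection update in "tuning without forgetting".
# Everything below (scalar dynamics x' = tanh(theta_k * x), piecewise-constant
# control theta, Euler integration, finite-difference Jacobians) is an
# illustrative assumption, not the paper's implementation.

T, K, dt = 1.0, 20, 1.0 / 20   # horizon, number of control pieces, step size

def end_point(x0, theta):
    """End-point map: integrate x' = tanh(theta[k] * x) over [0, T]."""
    x = x0
    for k in range(K):
        x = x + dt * np.tanh(theta[k] * x)
    return x

def end_point_jacobian(x0, theta, eps=1e-6):
    """Finite-difference Jacobian of the end-point map w.r.t. theta."""
    base = end_point(x0, theta)
    return np.array([(end_point(x0, theta + eps * np.eye(K)[k]) - base) / eps
                     for k in range(K)])

def project_to_kernel(step, jacobians):
    """Project `step` onto the kernel of the stacked end-point sensitivities."""
    A = np.vstack(jacobians)                         # one row per learned sample
    return step - A.T @ np.linalg.solve(A @ A.T, A @ step)

theta = np.full(K, 0.5)        # control already steering the learned sample
x_learned = 1.0                # previously learned sample (end point to keep)
x_new, y_new = 0.5, 1.2        # new sample and its target

for _ in range(300):
    # Gradient step that moves the new sample toward its target ...
    err = end_point(x_new, theta) - y_new
    grad = 2.0 * err * end_point_jacobian(x_new, theta)
    # ... projected onto the kernel of the end-point map at the learned sample,
    # so its end point is preserved to first order at every update.
    theta -= 0.5 * project_to_kernel(grad, [end_point_jacobian(x_learned, theta)])
```

In this sketch, each raw gradient step for the new sample is decomposed against the sensitivity of the learned sample's end point, and only the component lying in the kernel of that sensitivity is applied, which mirrors the projection described above at the level of a discretized control.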