Abstract: LLMs have shown strong in-context learning (ICL) abilities, but these abilities have not yet been extended to signal processing systems. Inspired by their design, we propose, for the first time, ICL with transformer models for motor feedforward control, a critical task where classical PI and physics-based methods struggle with nonlinearities and complex load conditions. Our transformer-based model architecture separates signal representation from system behavior, enabling both few-shot finetuning and one-shot ICL. Pretrained on a large corpus of synthetic linear and nonlinear systems, the model generalizes to the unseen dynamics of real-world motors with only a handful of examples. In experiments, our approach generalizes across multiple motor load configurations, transforms untuned examples into accurate feedforward predictions, and outperforms PI controllers and physics-based feedforward baselines. These results demonstrate that ICL can bridge synthetic pretraining and real-world adaptability, opening new directions for data-efficient control of physical systems.
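The abstract leaves the architecture unspecified, so the following PyTorch sketch only illustrates the one-shot ICL setup it describes: a context pair of reference and measured response, together with a query reference, is embedded, passed through a transformer encoder, and decoded into a feedforward command. The class and tensor names (ICLFeedforwardModel, ctx_ref, ctx_resp, query_ref) and the one-token-per-trajectory embedding are illustrative assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn

class ICLFeedforwardModel(nn.Module):
    """Toy transformer mapping (reference, response) context pairs plus a query
    reference trajectory to a feedforward command for the query."""

    def __init__(self, seq_len=64, d_model=128, nhead=4, num_layers=4):
        super().__init__()
        # Each whole trajectory is embedded as a single token (illustrative choice).
        self.embed_ref = nn.Linear(seq_len, d_model)
        self.embed_resp = nn.Linear(seq_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, seq_len)  # predicted feedforward trajectory

    def forward(self, ctx_ref, ctx_resp, query_ref):
        # ctx_ref, ctx_resp: (batch, n_examples, seq_len); query_ref: (batch, seq_len)
        tokens = torch.cat(
            [self.embed_ref(ctx_ref),
             self.embed_resp(ctx_resp),
             self.embed_ref(query_ref).unsqueeze(1)], dim=1)
        h = self.encoder(tokens)
        return self.head(h[:, -1])  # read the prediction off the query token

# One-shot use: a single untuned (reference, measured response) pair as context.
model = ICLFeedforwardModel()
ctx_ref = torch.randn(1, 1, 64)   # reference trajectory from one untuned run
ctx_resp = torch.randn(1, 1, 64)  # measured motor response to that reference
query_ref = torch.randn(1, 64)    # new reference the motor should track
ff_cmd = model(ctx_ref, ctx_resp, query_ref)  # (1, 64) feedforward command
```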
Abstract: MATLAB® releases over the last three years have seen continuing growth in the dynamic modeling capabilities offered by the System Identification Toolbox™. The emphasis has been on integrating deep learning architectures and training techniques that facilitate the use of deep neural networks as building blocks of nonlinear models. The toolbox offers neural state-space models that can be extended with auto-encoding features particularly suited to reduced-order modeling of large systems. It also contains several other enhancements that deepen its integration with state-of-the-art machine learning techniques, leverage auto-differentiation features for state estimation, and enable the direct use of raw numeric matrices and timetables for training models.
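For orientation, here is a minimal conceptual sketch, written in PyTorch rather than the toolbox's own MATLAB interface, of the auto-encoder-extended neural state-space idea the abstract refers to: an encoder compresses a high-dimensional measured state into a low-order latent state, a small network advances that latent state given the input, and a decoder maps it back to the full state. All names and dimensions are illustrative assumptions and do not reflect the toolbox's API.

```python
import torch
import torch.nn as nn

class ReducedOrderNeuralSS(nn.Module):
    """Conceptual neural state-space model with an auto-encoder: the encoder
    compresses a high-dimensional state into a low-order latent state, a small
    network advances the latent state one step, and the decoder reconstructs
    the full state."""

    def __init__(self, full_dim=200, latent_dim=8, input_dim=2, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(full_dim, hidden), nn.Tanh(),
                                     nn.Linear(hidden, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, hidden), nn.Tanh(),
                                     nn.Linear(hidden, full_dim))
        self.dynamics = nn.Sequential(nn.Linear(latent_dim + input_dim, hidden),
                                      nn.Tanh(), nn.Linear(hidden, latent_dim))

    def forward(self, x_full, u):
        # x_full: (batch, full_dim) measured state; u: (batch, input_dim) input.
        z = self.encoder(x_full)                           # reduced-order state
        z_next = self.dynamics(torch.cat([z, u], dim=-1))  # latent one-step update
        return self.decoder(z_next), z_next                # predicted full state

# Training would minimize the one-step prediction error plus a reconstruction
# term so that decoder(encoder(x)) stays close to x.
model = ReducedOrderNeuralSS()
x_next_hat, z_next = model(torch.randn(16, 200), torch.randn(16, 2))
```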