Abstract: The transformer architecture has demonstrated remarkable capabilities in modern artificial intelligence, among which the ability to implicitly learn an internal model at inference time is widely believed to play a key role in understanding pre-trained large language models. However, most recent work has focused on supervised learning topics such as in-context learning, leaving unsupervised learning largely unexplored. This paper investigates the capability of transformers to solve Gaussian Mixture Models (GMMs), a fundamental unsupervised learning problem, through the lens of statistical estimation. We propose a transformer-based learning framework called TGMM that simultaneously learns to solve multiple GMM tasks using a shared transformer backbone. The learned models are empirically shown to effectively mitigate the limitations of classical methods such as Expectation-Maximization (EM) and spectral algorithms, while exhibiting reasonable robustness to distribution shifts. Theoretically, we prove that transformers can approximate both the EM algorithm and a core component of spectral methods (cubic tensor power iterations). These results bridge the gap between practical success and theoretical understanding, positioning transformers as versatile tools for unsupervised learning.
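To make the two classical baselines named above concrete, here is a minimal NumPy sketch of EM for a spherical Gaussian mixture and of the cubic tensor power update $v \leftarrow T(I, v, v)/\|T(I, v, v)\|$. The function names, the spherical-covariance assumption, and all hyperparameters are illustrative choices, not the TGMM implementation.

```python
import numpy as np

def em_spherical_gmm(X, k, n_iter=100, seed=0):
    """EM for a k-component spherical GMM (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    mu = X[rng.choice(n, size=k, replace=False)]   # init means at data points
    var = np.ones(k)                               # per-component variance
    pi = np.full(k, 1.0 / k)                       # mixing weights
    for _ in range(n_iter):
        # E-step: responsibilities r[i, j] = P(component j | x_i), in log space.
        # The constant -d/2 * log(2*pi) cancels in the normalization and is dropped.
        sq = ((X[:, None, :] - mu[None]) ** 2).sum(-1)              # (n, k)
        log_r = np.log(pi) - 0.5 * sq / var - 0.5 * d * np.log(var)
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted parameter updates.
        nk = r.sum(axis=0)
        mu = (r.T @ X) / nk[:, None]
        sq = ((X[:, None, :] - mu[None]) ** 2).sum(-1)
        var = (r * sq).sum(axis=0) / (d * nk)
        pi = nk / n
    return pi, mu, var

def tensor_power_iteration(T, n_iter=50, seed=0):
    """Recover one robust eigenvector of a symmetric 3-way tensor (d, d, d)
    via the cubic power update v <- T(I, v, v) / ||T(I, v, v)||."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(T.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        v = np.einsum('ijk,j,k->i', T, v, v)   # contract T against v twice
        v /= np.linalg.norm(v)
    return v
```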
Abstract: Temporal point processes (TPPs) are an important tool for modeling and predicting irregularly timed events across various domains. Recently, recurrent neural network (RNN)-based TPPs have shown practical advantages over traditional parametric TPP models. However, the theoretical understanding of neural TPPs remains nascent in the current literature. In this paper, we establish excess risk bounds for RNN-TPPs under many well-known TPP settings. In particular, we show that an RNN-TPP with no more than four layers can achieve a vanishing generalization error. Our technical contributions include a characterization of the complexity of the multi-layer RNN class, the construction of $\tanh$ neural networks for approximating dynamic event intensity functions, and a truncation technique that alleviates the issue of unbounded event sequences. Our results bridge the gap between TPP applications and neural network theory.
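As a concrete reference point, the learning objective underlying such excess risk bounds is the TPP negative log-likelihood, $-\sum_i \log \lambda(t_i) + \int_0^T \lambda(t)\,dt$. The sketch below evaluates it for the classical exponential-kernel Hawkes process, one of the well-known parametric settings that neural TPPs generalize; the function and parameter names are illustrative.

```python
import numpy as np

def hawkes_nll(times, T, mu, alpha, beta):
    """Negative log-likelihood of an exponential-kernel Hawkes process with
    intensity lam(t) = mu + alpha * sum_{t_i < t} exp(-beta * (t - t_i)),
    observed on [0, T].  Illustrative sketch, not a neural TPP."""
    times = np.asarray(times, dtype=float)
    log_lam = 0.0
    for i, t in enumerate(times):
        past = times[:i]                                   # events before t
        lam = mu + alpha * np.exp(-beta * (t - past)).sum()
        log_lam += np.log(lam)
    # Compensator integral_0^T lam(t) dt, in closed form for this kernel.
    compensator = mu * T + (alpha / beta) * (1.0 - np.exp(-beta * (T - times))).sum()
    return compensator - log_lam
```

An RNN-TPP replaces this parametric intensity with one driven by a recurrent hidden state; the same likelihood then serves as the risk whose excess over the best model the bounds control.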
Abstract: We present a novel controller design for a robotic locomotor that combines an aerial vehicle with a spring-loaded leg. The main motivation is to enable terrestrial locomotion on aerial vehicles so that they can carry heavy loads: heavy enough that flying is no longer possible, e.g., when the thrust-to-weight ratio (TWR) is small. The robot is designed with a pogo-stick leg and a quadrotor, and is thus named PogoX. We show that with a simple and lightweight spring-loaded leg, the robot is capable of hopping with TWR $<1$. Hopping is controlled via two components: vertical height control via control Lyapunov function-based energy shaping, and step-to-step (S2S) dynamics-based horizontal velocity control inspired by the hopping of the Spring-Loaded Inverted Pendulum (SLIP). The controller is successfully realized on the physical robot, demonstrating dynamic terrestrial locomotion of PogoX, which hops at variable heights and different horizontal velocities with robustness to ground height variations and external pushes.
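For intuition about the two control components, here is a minimal Python sketch in the spirit of the abstract: a Raibert-style step-to-step foot placement for SLIP-like hopping, and a proportional apex-energy correction for height regulation. These are classical heuristics standing in for the paper's CLF-based energy shaping and S2S dynamics controller, not its exact design; all gains and signatures are hypothetical.

```python
def s2s_foot_placement(v_apex, v_des, T_stance, k_v=0.04):
    """Raibert-style S2S horizontal velocity control (classical stand-in for
    the S2S controller): fore-aft foot offset from the hip at touchdown."""
    neutral = v_apex * T_stance / 2.0        # symmetric-stance neutral point
    return neutral + k_v * (v_apex - v_des)  # shift foot to damp velocity error

def vertical_energy_correction(m, g, h_apex, h_des, k_e=0.5):
    """Energy-based vertical height regulation (hedged: the paper uses a
    control Lyapunov function formulation): extra mechanical energy to
    inject during the next stance phase, proportional to apex-height error."""
    return k_e * m * g * (h_des - h_apex)
```

The neutral point places the foot so that stance is symmetric and the hop is velocity-neutral; the correction term then nudges touchdown position to accelerate or decelerate the body toward the commanded velocity.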