Abstract: Many model selection algorithms rely on sparse dictionary learning to provide interpretable and physics-based governing equations. The optimization algorithms typically use a hard thresholding process to enforce sparse activations in the model coefficients by removing library elements from consideration. By introducing an annealing scheme that reactivates a fraction of the removed terms with a cooling schedule, we are able to improve the performance of these sparse learning algorithms. We concentrate on two approaches to the optimization: SINDy and an alternative based on hard thresholding pursuit. We see in both cases that annealing can improve model accuracy. The effectiveness of annealing is demonstrated through comparisons on several nonlinear systems drawn from convective flows, excitable systems, and population dynamics. Finally, we apply these algorithms to experimental data for projectile motion.
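A minimal sketch of the annealing idea, not the authors' implementation: it assumes a precomputed library matrix Theta and derivative data dXdt, and the threshold lam, reactivation fraction frac, and cooling factor cool are illustrative parameters layered on top of sequentially thresholded least squares.

```python
import numpy as np

def annealed_stlsq(Theta, dXdt, lam=0.1, frac=0.2, cool=0.9, n_iter=20, seed=0):
    """Sequentially thresholded least squares with annealed reactivation.

    Theta : (m, p) library matrix, dXdt : (m, n) time derivatives.
    Each iteration zeroes coefficients below `lam` (hard thresholding), but a
    fraction `frac` of the removed library terms is randomly reactivated;
    `frac` decays by `cool` each iteration (the cooling schedule).
    """
    rng = np.random.default_rng(seed)
    Xi, _, _, _ = np.linalg.lstsq(Theta, dXdt, rcond=None)
    active = np.ones_like(Xi, dtype=bool)
    for _ in range(n_iter):
        small = np.abs(Xi) < lam
        active &= ~small                                # hard thresholding: drop small terms
        active |= small & (rng.random(Xi.shape) < frac)  # anneal: revive a fraction of dropped terms
        frac *= cool                                     # cooling schedule
        Xi = np.zeros_like(Xi)
        for j in range(dXdt.shape[1]):                   # refit on the active support, column by column
            idx = active[:, j]
            if idx.any():
                Xi[idx, j], _, _, _ = np.linalg.lstsq(Theta[:, idx], dXdt[:, j], rcond=None)
    return Xi
```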
Abstract: Particle dynamics and multi-agent systems provide accurate dynamical models for studying and forecasting the behavior of complex interacting systems. They often take the form of a high-dimensional system of differential equations parameterized by an interaction kernel that models the underlying attractive or repulsive forces between agents. We consider the problem of constructing a data-based approximation of the interacting forces directly from noisy observations of the paths of the agents in time. The learned interaction kernels are then used to predict the agents' behavior over a longer time interval. The approximation developed in this work uses a randomized feature algorithm and a sparse randomized feature approach. Sparsity-promoting regression provides a mechanism for pruning the randomly generated features, which was observed to be beneficial when one has limited data, in particular leading to less overfitting than other approaches. In addition, imposing sparsity reduces the kernel evaluation cost, which significantly lowers the simulation cost for forecasting the multi-agent systems. Our method is applied to various examples, including first-order systems with homogeneous and heterogeneous interactions, second-order homogeneous systems, and a new sheep swarming system.
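A rough sketch of the sparse random feature idea under simplifying assumptions: pairwise distances r and corresponding (noisy) kernel values phi_vals are assumed to be available as training data, the random features are cosine features, and the pruning step uses an l1-penalized least-squares solve (scikit-learn's Lasso) as a stand-in for whichever sparsity-promoting regression the work actually uses.

```python
import numpy as np
from sklearn.linear_model import Lasso

def fit_sparse_random_feature_kernel(r, phi_vals, n_features=300, alpha=1e-3, seed=0):
    """Approximate an interaction kernel phi(r) with pruned random cosine features.

    r        : (m,) pairwise distances observed along the agent trajectories
    phi_vals : (m,) corresponding kernel values inferred from the data
    Returns a callable kernel estimate built only from features with nonzero weights.
    """
    rng = np.random.default_rng(seed)
    omega = rng.normal(scale=2.0, size=n_features)      # random frequencies
    b = rng.uniform(0, 2 * np.pi, size=n_features)      # random phases
    A = np.cos(np.outer(r, omega) + b)                  # (m, n_features) random feature matrix
    model = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000).fit(A, phi_vals)
    keep = np.flatnonzero(model.coef_)                   # sparsity prunes most features
    w, om, ph = model.coef_[keep], omega[keep], b[keep]
    # Only the surviving features are evaluated, which lowers the kernel evaluation cost.
    return lambda s: np.cos(np.outer(np.atleast_1d(s), om) + ph) @ w
```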
Abstract: We provide larger step-size restrictions for which gradient-descent-based algorithms (almost surely) avoid strict saddle points. In particular, consider a twice-differentiable (non-convex) objective function whose gradient has Lipschitz constant L and whose Hessian is well-behaved. We prove that, for gradient descent with step size up to 2/L and one uniformly random initialization, the probability of converging to a strict saddle point is zero. This extends previous results up to the sharp limit imposed by the convex case. In addition, the arguments hold when a learning-rate schedule is given, with either a continuously decaying rate or a piecewise constant schedule.
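A toy illustration of the step-size range, not taken from the paper: for f(x, y) = (x^2 - y^2)/2 the gradient is 1-Lipschitz (L = 1) and the origin is a strict saddle; for any step size below 2/L = 2, the stable coordinate contracts while the unstable one expands, so a uniformly random initialization escapes the saddle.

```python
import numpy as np

# Strict saddle toy problem: f(x, y) = (x**2 - y**2) / 2, gradient Lipschitz constant L = 1.
grad = lambda z: np.array([z[0], -z[1]])

L = 1.0
eta = 1.9 / L                     # step size just below the 2/L limit from the abstract
rng = np.random.default_rng(0)
z = rng.uniform(-1, 1, size=2)    # one uniformly random initialization

for _ in range(30):
    z = z - eta * grad(z)

# The stable coordinate shrinks by |1 - eta| < 1 each step, while the unstable one
# grows by 1 + eta > 1, so the iterates move away from the saddle at the origin.
print(z)
```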