Abstract: We show that nonequilibrium dynamics can play a constructive role in unsupervised machine learning by inducing the spontaneous emergence of latent-state cycles. We introduce a model in which visible and hidden variables interact through two independently parametrized transition matrices, defining a Markov chain whose steady state is intrinsically out of equilibrium. Likelihood maximization drives this system toward nonequilibrium steady states with finite entropy production, reduced self-transition probabilities, and persistent probability currents in the latent space. These cycles are not imposed by the architecture but arise from training, and models that develop them avoid the low-log-likelihood regime associated with nearly reversible dynamics while more faithfully reproducing the empirical distribution of data classes. Compared with equilibrium approaches such as restricted Boltzmann machines, our model breaks the detailed balance between the forward and backward conditional transitions and relies on a log-likelihood gradient that depends explicitly on the last two steps of the Markov chain. Hence, this exploration of the interface between nonequilibrium statistical physics and modern machine learning suggests that introducing irreversibility into latent-variable models can enhance generative performance.
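The central quantities of the abstract (a Markov chain out of detailed balance, its steady state, and the resulting entropy production) can be illustrated numerically. The following is a minimal toy sketch, not the authors' model: it uses a single generic transition matrix over joint states rather than the paper's two independently parametrized visible/hidden matrices, and simply evaluates the standard steady-state entropy production rate, which vanishes exactly when detailed balance holds.

```python
import numpy as np

# Toy illustration (NOT the paper's model): a generic irreversible Markov
# chain whose stationary fluxes are asymmetric, so entropy production > 0.
rng = np.random.default_rng(0)

n = 4  # number of joint (visible, hidden) states in this toy example
T = rng.random((n, n))                 # random positive transition rates
T /= T.sum(axis=1, keepdims=True)      # normalize rows -> stochastic matrix

# Stationary distribution pi: left eigenvector of T with eigenvalue 1.
evals, evecs = np.linalg.eig(T.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals - 1))])
pi /= pi.sum()

# Steady-state entropy production rate:
#   sigma = sum_{i,j} pi_i T_ij * log( pi_i T_ij / (pi_j T_ji) )
# sigma = 0 iff detailed balance holds (pi_i T_ij = pi_j T_ji for all i, j).
J = pi[:, None] * T                    # probability fluxes pi_i T_ij
sigma = np.sum(J * np.log(J / J.T))
print(sigma)                           # strictly positive for a generic T
```

A generic random transition matrix almost surely violates detailed balance, so `sigma` comes out strictly positive; symmetrizing the fluxes would drive it to zero, which is the reversible regime the abstract says trained models avoid.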
Abstract: We present a machine learning approach for aftershock forecasting on the Japanese earthquake catalogue from 2015 to 2019. Our method takes as sole input the ground surface deformation as measured by Global Positioning System (GPS) stations on the day of the mainshock, and processes it with a Convolutional Neural Network (CNN), thus capturing the input's spatial correlations. Despite the moderate amount of data, the performance of this new approach is very promising. The accuracy of the prediction heavily relies on the density of GPS stations: the predictive power is lost when the mainshocks occur far from measurement stations, as in offshore regions.
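The abstract's key architectural point is that a convolutional layer captures the spatial correlations of the gridded deformation field. The following is a hypothetical minimal sketch, not the authors' architecture or data: a single hand-rolled 2D convolution over a toy displacement grid, showing how each output cell aggregates its spatial neighbourhood.

```python
import numpy as np

# Hypothetical sketch (NOT the authors' code): one 2D convolution pass over
# a gridded surface-deformation field, the basic operation a CNN layer uses
# to exploit spatial correlations between nearby grid cells.
def conv2d(field, kernel):
    """Valid-mode 2D cross-correlation of a grid with a small kernel."""
    kh, kw = kernel.shape
    h, w = field.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(field[i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(0)
deformation = rng.normal(size=(8, 8))  # toy displacement grid (one component)
smoothing = np.full((3, 3), 1 / 9)     # averaging kernel over 3x3 neighbours
features = conv2d(deformation, smoothing)
print(features.shape)                  # (6, 6): valid-mode output grid
```

With the uniform kernel, each output cell is simply the mean of its 3x3 neighbourhood; a trained CNN would instead learn kernel weights (and stack many such layers) to extract deformation patterns predictive of aftershock locations.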