



Abstract: Many learning problems involve symmetries, and while invariance can be built into neural architectures, it can also emerge implicitly when training on group-structured data. We study this phenomenon in classical Hopfield networks and show that they can infer the full isomorphism class of a graph from a small random sample. Our results reveal that: (i) graph isomorphism classes can be represented within a three-dimensional invariant subspace, (ii) using gradient descent to minimize energy flow (MEF) has an implicit bias toward norm-efficient solutions, which underpins a polynomial sample complexity bound for learning isomorphism classes, and (iii) across multiple learning rules, parameters converge toward the invariant subspace as sample sizes grow. Together, these findings highlight a unifying mechanism for generalization in Hopfield networks: a bias toward norm efficiency in learning drives the emergence of approximate invariance under group-structured data.
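To make the setup concrete, the following minimal NumPy sketch illustrates the kind of experiment the abstract describes: a Hopfield-style weight matrix is fit by gradient descent on the energy of permuted copies of a single graph, and an unseen isomorphic copy is then evaluated under the learned energy. The energy form E(x) = -1/2 x^T W x, the ±1 encoding of adjacency matrices, and the weight-decay term (standing in for the norm-efficiency bias) are illustrative assumptions, not the paper's MEF objective or learning rules.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_graph(n=8, p=0.5):
    """Sample a symmetric adjacency matrix with zero diagonal."""
    a = np.triu((rng.random((n, n)) < p).astype(float), 1)
    return a + a.T

def permute(adj):
    """Return a uniformly random isomorphic copy of adj."""
    perm = rng.permutation(adj.shape[0])
    return adj[np.ix_(perm, perm)]

def to_pattern(adj):
    """Flatten an adjacency matrix into a +/-1 Hopfield pattern."""
    return 2.0 * adj.flatten() - 1.0

# Training set: permuted copies of one graph, i.e. samples from one isomorphism class.
base = random_graph()
patterns = np.stack([to_pattern(permute(base)) for _ in range(20)])
d = patterns.shape[1]

# Gradient descent on the mean Hopfield energy E(x) = -1/2 x^T W x of the training
# patterns, with weight decay as a crude proxy for a norm-efficiency bias.
# (Toy stand-in only; not the paper's exact minimize-energy-flow objective.)
W = np.zeros((d, d))
lr, wd = 1e-3, 1e-2
for _ in range(500):
    grad = -0.5 * (patterns.T @ patterns) / len(patterns) + wd * W
    W -= lr * grad
    np.fill_diagonal(W, 0.0)          # keep zero self-connections

# An unseen isomorphic copy should land in a low-energy region of the learned network.
test = to_pattern(permute(base))
print("energy of unseen isomorphic graph:", -0.5 * test @ W @ test)
```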
Abstract: Dictionary learning for sparse linear coding has exposed characteristic properties of natural signals. However, a universal theorem guaranteeing the consistency of estimation in this model is lacking. Here, we prove that for all diverse enough datasets generated from the sparse coding model, latent dictionaries and codes are uniquely and stably determined up to measurement error. Applications are given to data analysis, engineering, and neuroscience.
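As a concrete illustration of the generative model in question, the sketch below draws synthetic data from a sparse coding model y = Dx + noise and fits a dictionary with scikit-learn's DictionaryLearning; under the identifiability claim, the learned atoms should match the true ones up to permutation and sign, within the measurement error. The dimensions, sparsity level, noise scale, and solver settings are arbitrary choices for illustration, not values from the paper.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)

# Generative sparse coding model: each data point y = D x + noise,
# with an unknown dictionary D and a sparse code x.
n_features, n_atoms, n_samples, sparsity = 20, 10, 500, 3
D_true = rng.standard_normal((n_features, n_atoms))
D_true /= np.linalg.norm(D_true, axis=0)       # unit-norm atoms

X_true = np.zeros((n_atoms, n_samples))
for j in range(n_samples):
    support = rng.choice(n_atoms, size=sparsity, replace=False)
    X_true[support, j] = rng.standard_normal(sparsity)

Y = D_true @ X_true + 0.01 * rng.standard_normal((n_features, n_samples))

# Fit a dictionary from the data alone.
model = DictionaryLearning(n_components=n_atoms, alpha=0.1,
                           transform_algorithm="lasso_lars", random_state=0)
model.fit(Y.T)                                 # scikit-learn expects samples as rows
D_hat = model.components_.T                    # shape (n_features, n_atoms)
D_hat /= np.linalg.norm(D_hat, axis=0) + 1e-12

# Match true atoms to learned atoms up to the permutation/sign ambiguity:
# a correlation near 1 for every row indicates the dictionary was recovered.
corr = np.abs(D_true.T @ D_hat)
print("worst-case atom correlation:", corr.max(axis=1).min())
```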