Abstract: Dictionary learning for sparse linear coding has exposed characteristic properties of natural signals. However, a universal theorem guaranteeing the consistency of estimation in this model has been lacking. Here, we prove that for all sufficiently diverse datasets generated from the sparse coding model, the latent dictionaries and codes are uniquely and stably determined, up to measurement error. Applications are given to data analysis, engineering, and neuroscience.
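For concreteness, the generative model referred to in the abstract can be sketched as follows; this is an illustrative simulation only, with all dimensions, the sparsity level, and the noise scale chosen arbitrarily rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, N, k = 8, 16, 100, 3  # signal dim, dictionary size, # samples, sparsity

# Hypothetical ground-truth dictionary with unit-norm columns (illustrative choice)
A = rng.standard_normal((n, m))
A /= np.linalg.norm(A, axis=0)

# Latent k-sparse codes: each column has exactly k nonzero entries
X = np.zeros((m, N))
for j in range(N):
    support = rng.choice(m, size=k, replace=False)
    X[support, j] = rng.standard_normal(k)

# Observed data = dictionary times codes, up to measurement error
eps = 1e-3
Y = A @ X + eps * rng.standard_normal((n, N))
```

The uniqueness result concerns the inverse problem: given only such a dataset `Y`, under what conditions are `A` and `X` determined (up to natural ambiguities such as permutation and scaling of dictionary elements) within an error controlled by `eps`.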