Abstract: This paper targets the problem of encoding information into binary cell assemblies. Spiking neural networks and k-winners-take-all (kWTA) models are two common approaches, but the former is hard to use for information processing and the latter is too simple and lacks important features of the former. We present an intermediate model that shares the computational ease of kWTA but has richer and more flexible dynamics. It uses explicit inhibitory neurons to balance and shape excitation through an iterative procedure. This leads to a recurrent interaction between inhibitory and excitatory neurons that better adapts to the input distribution and performs computations such as habituation, decorrelation, and clustering. To demonstrate these capabilities, we investigate Hebbian-like learning rules and propose a new learning rule for binary weights with multiple stabilization mechanisms. Our source code is publicly available.
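A minimal sketch of the kind of iterative, inhibition-balanced winner-take-all described above is given below. The variable names, the threshold sweep, and the update order are illustrative assumptions, not the paper's exact procedure; the sketch only shows how explicit inhibitory units can shape which excitatory cells end up in the binary assembly.

```python
import numpy as np

rng = np.random.default_rng(0)

def iwta(x, w_xy, w_xh, w_hy):
    """Iterative winner-take-all with explicit inhibition (illustrative sketch).

    x    : binary input vector
    w_xy : binary weights, input -> excitatory layer y
    w_xh : binary weights, input -> inhibitory layer h
    w_hy : binary weights, inhibitory h -> excitatory y (subtractive feedback)

    A shared threshold is swept from high to low; cells that cross it become
    (and stay) active, and accumulated inhibition balances the excitation.
    """
    y = np.zeros(w_xy.shape[0], dtype=bool)   # excitatory assembly
    h = np.zeros(w_xh.shape[0], dtype=bool)   # inhibitory assembly
    drive_y = w_xy @ x
    drive_h = w_xh @ x
    t_max = int(max(drive_y.max(), drive_h.max()))
    for threshold in range(t_max, 0, -1):
        h |= drive_h >= threshold                       # inhibitory cells recruited first
        y |= (drive_y - w_hy @ h) >= threshold          # excitation balanced by inhibition
    return y.astype(int), h.astype(int)

# Usage with random binary weights and a sparse binary input (sizes are arbitrary).
n_x, n_y, n_h = 100, 200, 50
x = (rng.random(n_x) < 0.1).astype(int)
w_xy = (rng.random((n_y, n_x)) < 0.05).astype(int)
w_xh = (rng.random((n_h, n_x)) < 0.05).astype(int)
w_hy = (rng.random((n_y, n_h)) < 0.1).astype(int)
y, h = iwta(x, w_xy, w_xh, w_hy)
print("active excitatory cells:", y.sum(), "active inhibitory cells:", h.sum())
```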
Abstract: This paper presents an analysis of an autoencoder with binary $\{0, 1\}$ activations and binary $\{0, 1\}$ random weights. Such a setup places the model at the intersection of several fields: neuroscience, information theory, sparse coding, and machine learning. It is shown that sparse activation of the hidden layer arises naturally in order to preserve information between layers. Furthermore, with a large enough hidden layer, it is possible to achieve zero reconstruction error for any input simply by varying the thresholds of the neurons. The model preserves the similarity of inputs at the hidden layer, and this similarity preservation is maximal for dense hidden-layer activation. By analyzing the mutual information between layers, it is shown that the difference between sparse and dense representations is related to a memory-computation trade-off. The model is similar to the olfactory system of the fruit fly, and the presented theoretical results give useful insights toward understanding more complex neural networks.
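A minimal sketch of such a binary autoencoder follows. The decoder (reading out through the transposed weights with a Willshaw-style threshold), the layer sizes, and the sparsity parameters are illustrative assumptions rather than the paper's construction; the sweep only shows how the hidden threshold trades hidden-layer sparsity against reconstruction error.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small binary input and a much larger hidden layer, as the abstract assumes.
n_x, n_h = 50, 2000
w = (rng.random((n_h, n_x)) < 0.1).astype(int)   # binary {0,1} random encoder weights
x = (rng.random(n_x) < 0.2).astype(int)          # binary {0,1} input

def encode(x, w, theta):
    """A hidden unit fires if its overlap with the input reaches the threshold theta."""
    return (w @ x >= theta).astype(int)

def decode(z, w, ratio=0.5):
    """Readout through the transposed weights: an input bit is reconstructed as 1
    if enough of the active hidden units project to it. Both the use of w.T and
    the 'ratio' threshold are illustrative choices, not fixed by the abstract."""
    t = max(1.0, ratio * z.sum())
    return (w.T @ z >= t).astype(int)

# Sweeping the hidden threshold shows the sparsity/reconstruction trade-off.
for theta in range(1, int((w @ x).max()) + 1):
    z = encode(x, w, theta)
    x_hat = decode(z, w)
    print(f"theta={theta:2d}  hidden sparsity={z.mean():.3f}  "
          f"reconstruction error={np.abs(x - x_hat).mean():.3f}")
```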