Abstract: The need for large amounts of training data in modern machine learning is one of the biggest challenges of the field. Compared to the brain, current artificial algorithms are much less capable of learning invariance transformations and of employing them to extrapolate knowledge from small sample sets. It has recently been proposed that the brain might encode perceptual invariances as approximate graph symmetries in the network of synaptic connections, and that such symmetries may arise naturally through a biologically plausible process of unsupervised Hebbian learning. In the present paper, we illustrate this proposal with numerical examples, showing that invariance transformations can indeed be recovered from the structure of the recurrent synaptic connections that form within a layer of feature detector neurons via a simple Hebbian learning rule. To recover the invariance transformations numerically from the resulting recurrent network, we develop a general algorithmic framework for finding approximate graph automorphisms, and we discuss how this framework can be used to find approximate automorphisms of weighted graphs in general.
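The following is a minimal, illustrative sketch (not the paper's actual code) of the two ingredients named above: a simple Hebbian rule that builds recurrent weights among feature detectors tuned to positions on a ring of translated stimuli, and a check that cyclic shifts act as approximate automorphisms of the resulting weighted graph. All parameter values, tuning curves, and function names here are assumptions made for the example.

```python
# Illustrative sketch: Hebbian learning of recurrent weights, then a test of
# how close candidate permutations come to being automorphisms of the learned
# weighted graph W. Parameters and tuning curves are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 32                                    # number of feature detector neurons
positions = np.arange(n)

def response(stimulus_pos):
    """Tuning-curve responses of all detectors to a stimulus at stimulus_pos (ring topology)."""
    d = np.minimum(np.abs(positions - stimulus_pos), n - np.abs(positions - stimulus_pos))
    return np.exp(-(d / 3.0) ** 2)

# Hebbian learning: recurrent weights grow with correlated pre/post activity
# over many randomly translated presentations of the same stimulus.
W = np.zeros((n, n))
eta = 0.01
for _ in range(2000):
    r = response(rng.integers(n))         # response to a random translation
    W += eta * np.outer(r, r)             # simple Hebbian update
np.fill_diagonal(W, 0.0)
W /= W.max()

def automorphism_error(perm):
    """Relative deviation of W from invariance under the permutation perm."""
    P = np.eye(n)[perm]                   # permutation matrix
    return np.linalg.norm(W - P @ W @ P.T) / np.linalg.norm(W)

shift = np.roll(np.arange(n), 1)          # cyclic shift: the learned invariance
random_perm = rng.permutation(n)
print("cyclic shift error:", automorphism_error(shift))        # close to zero
print("random perm error :", automorphism_error(random_perm))  # much larger
```

Because the stimuli are translations of one another, the learned weight matrix is approximately circulant, so the cyclic shift yields a near-zero automorphism error while a random permutation does not; this is the kind of structure the approximate-automorphism framework is designed to detect.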
Abstract: A variety of behaviors, such as spatial navigation or bodily motion, can be formulated as graph traversal problems through cognitive maps. We present a neural network model that can solve such tasks and is compatible with a broad range of empirical findings about the mammalian neocortex and hippocampus. The neurons and synaptic connections in the model represent structures that can result from self-organization into a cognitive map via Hebbian learning, i.e., a graph in which each neuron represents a point on some abstract task-relevant manifold and the recurrent connections encode a distance metric on the manifold. Graph traversal problems are solved by wave-like activation patterns that travel through the recurrent network and guide a localized peak of activity onto a path from some starting position to a target state.
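As a rough illustration of this traversal mechanism (not the paper's model itself), the sketch below implements the wave as Dijkstra-style propagation of arrival times outward from the target over a small weighted graph, and then moves a "peak" from the start node by descending those arrival times. The graph, edge weights, and node labels are invented for the example.

```python
# Illustrative sketch: a target-initiated wave assigns each node an arrival
# time; the activity peak then follows decreasing arrival times from the
# start node to the target. Graph and weights are assumed example data.
import heapq

# Undirected weighted graph; edge length stands in for the learned metric.
edges = {
    ("A", "B"): 1.0, ("B", "C"): 1.0, ("C", "D"): 1.0,
    ("A", "E"): 2.5, ("E", "D"): 1.0, ("B", "E"): 1.5,
}
neighbors = {}
for (u, v), w in edges.items():
    neighbors.setdefault(u, []).append((v, w))
    neighbors.setdefault(v, []).append((u, w))

def wave_arrival_times(target):
    """Propagate a wave outward from the target; return each node's arrival time."""
    times = {target: 0.0}
    frontier = [(0.0, target)]
    while frontier:
        t, u = heapq.heappop(frontier)
        if t > times.get(u, float("inf")):
            continue
        for v, w in neighbors[u]:
            if t + w < times.get(v, float("inf")):
                times[v] = t + w
                heapq.heappush(frontier, (t + w, v))
    return times

def traverse(start, target):
    """Guide the activity peak along edges that keep arrival time + edge length minimal."""
    times = wave_arrival_times(target)
    path, node = [start], start
    while node != target:
        node = min(neighbors[node], key=lambda vw: times[vw[0]] + vw[1])[0]
        path.append(node)
    return path

print(traverse("A", "D"))   # ['A', 'B', 'C', 'D'] for these example weights
```

The descent step always chooses a neighbor whose arrival time plus edge length matches the current node's arrival time, so the traced path is a shortest path on the weighted graph; in the neural model this role is played by the localized activity peak following the incoming wavefront.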