
Bryan Tower


Learning without gradient descent encoded by the dynamics of a neurobiological model

Mar 23, 2021
Vivek Kurien George, Vikash Morar, Weiwei Yang, Jonathan Larson, Bryan Tower, Shweti Mahajan, Arkin Gupta, Christopher White, Gabriel A. Silva


The success of state-of-the-art machine learning rests almost entirely on variants of gradient descent that minimize some form of a cost or loss function. A fundamental limitation, however, is the need to train these systems, whether in a supervised or an unsupervised way, by exposing them to typically large numbers of training examples. Here, we introduce a fundamentally different conceptual approach to machine learning that takes advantage of a neurobiologically derived model of dynamic signaling, constrained by the geometric structure of a network. We show that MNIST images can be uniquely encoded and classified by the dynamics of geometric networks with nearly state-of-the-art accuracy, in an unsupervised way and without the need for any training.

* Version 2 includes a new subsection 4.1 and associated table and figure benchmarking our biologically-inspired neural network against a traditional ANN 
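As a rough illustration only, the sketch below caricatures the idea in plain Python: a fixed random geometric network, whose weights are set by distance and never trained, is driven by an image, and the resulting spiking dynamics serve as the image's encoding. Everything here, from the network size to the leaky-threshold dynamics and the nearest-signature readout, is an assumption made for illustration, not the paper's neurobiological model.

```python
# Illustrative sketch only, NOT the paper's model: a fixed geometric
# network encodes images through its dynamics; no weight is ever trained.
import numpy as np

rng = np.random.default_rng(0)

# Fixed geometric network: neurons at random 2-D positions, connected when
# closer than a radius, with weights set by distance alone (assumed form).
N = 256
pos = rng.random((N, 2))
dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
W = np.where((dist > 0) & (dist < 0.15), 1.0 / (dist + 1e-3), 0.0)
W /= W.sum(axis=1, keepdims=True) + 1e-9  # normalize incoming drive

def encode(image, steps=20, leak=0.7, thresh=1.0):
    """Drive the network with an image and return its spike-count
    signature; the dynamics, not learned weights, do the encoding."""
    x = np.asarray(image, dtype=float).ravel()
    drive = np.resize(x / (x.max() + 1e-9), N)  # map pixels onto neurons
    v, spikes, counts = np.zeros(N), np.zeros(N), np.zeros(N)
    for _ in range(steps):
        v = leak * v + drive + W @ spikes  # leaky integration + recurrence
        spikes = (v > thresh).astype(float)
        v[spikes > 0] = 0.0                # reset neurons that fired
        counts += spikes
    return counts

def classify(query, signatures, labels):
    """Toy readout: nearest stored signature. The paper instead groups
    encodings in an unsupervised way; this lookup only illustrates that
    dynamic signatures separate classes without gradient-based training."""
    d = np.linalg.norm(signatures - encode(query), axis=1)
    return labels[int(np.argmin(d))]
```

With image data, `signatures = np.stack([encode(im) for im in images])` builds the encoding table; the point of the sketch is that no parameter is ever fit against a loss.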

A general approach to progressive learning

Apr 28, 2020
Joshua T. Vogelstein, Hayden S. Helm, Ronak D. Mehta, Jayanta Dey, Weiwei Yang, Bryan Tower, Will LeVine, Jonathan Larson, Chris White, Carey E. Priebe


In biological learning, data are used to improve performance not only on the task at hand but also on previously encountered tasks and on future, as-yet-unconsidered tasks. In contrast, classical machine learning starts from a blank slate, or tabula rasa, using data only for the single task at hand. While typical transfer learning algorithms can improve performance on future tasks, their performance on previously learned tasks degrades upon learning new ones. Many recent approaches have attempted to mitigate this issue, known as catastrophic forgetting, by maintaining performance as new tasks arrive. But striving merely to avoid forgetting sets the goal unnecessarily low: the goal of progressive learning, whether biological or artificial, is to improve performance on all tasks, past and future, with any new data. We propose a general approach to progressive learning that ensembles representations rather than learners. We show that ensembling representations, including representations learned by decision forests or neural networks, enables both forward and backward transfer on a variety of simulated and real data tasks, including vision, language, and adversarial tasks. This work suggests that further improvements in progressive learning may follow from a deeper understanding of how biological learning achieves such high degrees of efficiency.
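Read as pseudocode, the core mechanism might look like the sketch below, an assumption-laden simplification rather than the authors' implementation: each task trains a decision forest whose leaves define a representation, and predictions for a given task read class votes out of every forest's leaves using that task's own data, so later representations can sharpen earlier tasks (backward transfer) and earlier ones help later tasks (forward transfer). All class and parameter names are illustrative.

```python
# Minimal sketch of representation ensembling (assumed simplification,
# not the authors' code): ensemble the representations, not the learners.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

class RepresentationEnsemble:
    def __init__(self):
        self.forests = []   # one learned representation per task
        self.tasks = []     # (X, y) per task, used to fit voters

    def add_task(self, X, y):
        X, y = np.asarray(X), np.asarray(y)
        self.forests.append(
            RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
        )
        self.tasks.append((X, y))

    def predict(self, X, task_id):
        X = np.asarray(X)
        X_t, y_t = self.tasks[task_id]
        classes = np.unique(y_t)
        post = np.zeros((len(X), len(classes)))
        # Read the target task out of EVERY representation: each forest's
        # leaves partition the space, and the voter is the empirical class
        # distribution of the task's data falling in the same leaf.
        for f in self.forests:
            train_leaves = f.apply(X_t)   # (n_train, n_trees) leaf indices
            test_leaves = f.apply(X)      # (n_test, n_trees) leaf indices
            for t in range(train_leaves.shape[1]):
                for i, leaf in enumerate(test_leaves[:, t]):
                    mask = train_leaves[:, t] == leaf
                    if mask.any():
                        post[i] += [np.mean(y_t[mask] == c) for c in classes]
        return classes[post.argmax(axis=1)]
```

For example, after `ens.add_task(X0, y0)` and `ens.add_task(X1, y1)`, the call `ens.predict(X0_test, task_id=0)` draws on both forests, which is where backward transfer can arise in this scheme.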
