Nancy Lynch

Learning Hierarchically-Structured Concepts II: Overlapping Concepts, and Networks With Feedback

Apr 19, 2023
Nancy Lynch, Frederik Mallmann-Trenn

We continue our study, begun in Lynch and Mallmann-Trenn (Neural Networks, 2021), of how concepts that have hierarchical structure might be represented in brain-like neural networks, how these representations might be used to recognize the concepts, and how these representations might be learned. In that earlier work, we considered simple tree-structured concepts and feed-forward layered networks. Here we extend the model in two ways: we allow limited overlap between children of different concepts, and we allow networks to include feedback edges. For these more general cases, we describe and analyze algorithms for recognition and algorithms for learning.
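A minimal sketch (not from the paper) of the flavor of feed-forward recognition for tree-structured concepts: a higher-level concept fires once enough of its children have fired. The hierarchy, the threshold, and all names below are illustrative assumptions, not the paper's exact construction.

```python
# Minimal sketch of feed-forward recognition of tree-structured concepts.
# Hierarchy, threshold, and names are illustrative assumptions.

def recognize(children, threshold, presented_leaves):
    """Return the set of concepts that fire, computed bottom-up.

    children[c]      -- list of child concepts of internal concept c
    threshold        -- fraction of children that must fire for c to fire
    presented_leaves -- set of leaf concepts shown to the network
    """
    fired = set(presented_leaves)
    changed = True
    while changed:                      # iterate to a fixed point
        changed = False
        for c, kids in children.items():
            if c not in fired and sum(k in fired for k in kids) >= threshold * len(kids):
                fired.add(c)
                changed = True
    return fired

# Toy two-level hierarchy: "animal" is built from two sub-concepts,
# each built from two leaf features.
hierarchy = {
    "head": ["eyes", "ears"],
    "body": ["legs", "tail"],
    "animal": ["head", "body"],
}
print(recognize(hierarchy, threshold=1.0, presented_leaves={"eyes", "ears", "legs", "tail"}))
```

With a threshold below 1.0, recognition tolerates some missing leaves, the kind of partial-information behavior recognition algorithms in this setting must handle.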

A Comparison of New Swarm Task Allocation Algorithms in Unknown Environments with Varying Task Density

Dec 05, 2022
Grace Cai, Noble Harasha, Nancy Lynch

Task allocation is an important problem for robot swarms to solve, allowing agents to reduce task completion time by performing tasks in a distributed fashion. Existing task allocation algorithms often assume prior knowledge of task location and demand, or fail to consider the effects of the geometric distribution of tasks on the completion time and communication cost of the algorithms. In this paper, we examine an environment where agents must explore and discover tasks with positive demand and successfully assign themselves to complete all such tasks. We propose two new task allocation algorithms for initially unknown environments -- one based on N-site selection and the other on virtual pheromones. We analyze each algorithm separately and also evaluate the effectiveness of the two algorithms in dense vs. sparse task distributions. Compared to the Lévy walk, which has been theorized to be optimal for foraging, our virtual-pheromone-inspired algorithm is much faster at sparse to medium task densities but is communication- and agent-intensive. Our site-selection-inspired algorithm also outperforms the Lévy walk at sparse task densities and is a less resource-intensive option than our virtual pheromone algorithm in this case. Because the performance of both algorithms relative to random walk depends on task density, our results shed light on the importance of task density when choosing a task allocation algorithm for initially unknown environments.

* 10 pages, 9 figures 
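As context for the Lévy walk baseline discussed above, here is a minimal sketch of a Lévy walker: step lengths drawn from a heavy-tailed Pareto law, mixing long relocations with local search. The exponent and step bounds are illustrative assumptions, not the paper's parameters.

```python
# Minimal sketch of a Levy walk: heavy-tailed (Pareto) step lengths with
# uniformly random headings. Constants are illustrative assumptions.

import math
import random

def levy_step(alpha=1.5, l_min=1.0):
    """Sample a Pareto-distributed step length: P(L > l) = (l_min / l)**alpha."""
    u = 1.0 - random.random()          # u in (0, 1], avoids division by zero
    return l_min * u ** (-1.0 / alpha)

def levy_walk(steps, alpha=1.5):
    """Return the 2D trajectory of a Levy walker starting at the origin."""
    x = y = 0.0
    path = [(x, y)]
    for _ in range(steps):
        theta = random.uniform(0.0, 2.0 * math.pi)  # uniform turning angle
        l = levy_step(alpha)
        x += l * math.cos(theta)
        y += l * math.sin(theta)
        path.append((x, y))
    return path

print(levy_walk(5))
```

Smaller exponents produce longer ballistic excursions, which is why the Lévy walk covers sparse environments faster than a diffusive random walk.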

A superconducting nanowire spiking element for neural networks

Jul 29, 2020
Emily Toomey, Ken Segall, Matteo Castellani, Marco Colangelo, Nancy Lynch, Karl K. Berggren

As the limits of traditional von Neumann computing come into view, the brain's ability to communicate vast quantities of information using low-power spikes has become an increasing source of inspiration for alternative architectures. Key to the success of these large-scale neural networks is a power-efficient spiking element that is scalable and easily interfaced with traditional control electronics. In this work, we present a spiking element fabricated from superconducting nanowires that has pulse energies on the order of 10 aJ. We demonstrate that the device reproduces essential characteristics of biological neurons, such as a refractory period and a firing threshold. Through simulations using experimentally measured device parameters, we show how nanowire-based networks may be used for inference in image recognition, and that the probabilistic nature of nanowire switching may be exploited for modeling biological processes and for applications that rely on stochasticity.

* 5 main figures; 7 supplemental figures 
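The two behaviors highlighted above, a firing threshold and a refractory period, can be illustrated with a standard leaky integrate-and-fire stand-in. This sketch is not the nanowire device model; all constants are illustrative assumptions.

```python
# Minimal sketch of a spiking element with a firing threshold and a
# refractory period, using a leaky integrate-and-fire stand-in rather than
# the nanowire device physics. Constants are illustrative.

def simulate(inputs, threshold=1.0, leak=0.9, refractory=3):
    """Return spike times for a stream of input drive values."""
    v = 0.0          # membrane-like state variable
    cooldown = 0     # steps remaining in the refractory period
    spikes = []
    for t, drive in enumerate(inputs):
        if cooldown > 0:
            cooldown -= 1        # ignore input while refractory
            continue
        v = leak * v + drive     # leaky integration of the input
        if v >= threshold:       # threshold crossing -> spike
            spikes.append(t)
            v = 0.0              # reset
            cooldown = refractory
    return spikes

print(simulate([0.4] * 30))  # steady drive: periodic spikes, gaps >= refractory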
Learning Hierarchically Structured Concepts

Sep 10, 2019
Nancy Lynch, Frederik Mallmann-Trenn

We study the question of how concepts that have structure get represented in the brain. Specifically, we introduce a model for hierarchically structured concepts and we show how a biologically plausible neural network can recognize these concepts, and how it can learn them in the first place. Our main goal is to introduce a general framework for these tasks and to prove formally how both recognition and learning can be achieved. We show that both tasks can be accomplished even in the presence of noise. For learning, we formally analyze Oja's rule, a well-known biologically plausible rule for adjusting the weights of synapses. We complement the learning results with lower bounds asserting that, in order to recognize concepts of a certain hierarchical depth, neural networks must have a corresponding number of layers.
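Oja's rule itself is compact: with output $y = w \cdot x$, the update $w \leftarrow w + \eta\, y\,(x - y\, w)$ keeps the weight norm bounded and drives $w$ toward the top principal component of the inputs. A minimal sketch, with illustrative data and learning rate:

```python
# Minimal sketch of Oja's rule, the synaptic update analyzed in the paper:
#   y = w . x
#   w <- w + eta * y * (x - y * w)
# Data distribution and learning rate below are illustrative assumptions.

import random

def oja_step(w, x, eta=0.01):
    y = sum(wi * xi for wi, xi in zip(w, x))
    return [wi + eta * y * (xi - y * wi) for wi, xi in zip(w, x)]

# Inputs have more variance along the first coordinate, so w should align
# with (1, 0) (up to sign) after training.
random.seed(0)
w = [0.5, 0.5]
for _ in range(5000):
    x = [random.gauss(0.0, 2.0), random.gauss(0.0, 0.5)]
    w = oja_step(w, x)
print(w)  # approximately (+/-1, 0)
```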

Winner-Take-All Computation in Spiking Neural Networks

Apr 25, 2019
Nancy Lynch, Cameron Musco, Merav Parter

In this work we study biological neural networks from an algorithmic perspective, focusing on understanding tradeoffs between computation time and network complexity. Our goal is to abstract real neural networks in a way that, while not capturing all interesting features, preserves high-level behavior and allows us to make biologically relevant conclusions. Towards this goal, we consider the implementation of algorithmic primitives in a simple yet biologically plausible model of stochastic spiking neural networks. In particular, we show how the stochastic behavior of neurons in this model can be leveraged to solve a basic symmetry-breaking task in which we are given neurons with identical firing rates and want to select a distinguished one. In computational neuroscience, this is known as the winner-take-all (WTA) problem, and it is believed to serve as a basic building block in many tasks, e.g., learning, pattern recognition, and clustering. We provide efficient constructions of WTA circuits in our stochastic spiking neural network model, as well as lower bounds in terms of the number of auxiliary neurons required to drive convergence to WTA in a given number of steps. These lower bounds demonstrate that our constructions are near-optimal in some cases. This work covers and gives more in-depth proofs of a subset of results originally published in [LMP17a]. It is adapted from the last chapter of C. Musco's Ph.D. thesis [Mus18].
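A minimal sketch of the stochastic symmetry-breaking idea (an illustrative abstraction, not the paper's circuit construction): identical candidates fire with probability 1/2 each round, and lateral inhibition keeps only those that fired in contention, so a single winner emerges in roughly $\log n$ rounds.

```python
# Minimal sketch of stochastic symmetry breaking behind winner-take-all:
# identical candidates fire with probability 1/2 each round; only neurons
# that fired stay in contention (modeling inhibition of the silent ones).
# Illustrative abstraction, not the paper's circuit.

import random

def winner_take_all(n, rng=random.Random(0)):
    """Reduce n identical candidates to one winner; return (winner, rounds)."""
    active = list(range(n))
    rounds = 0
    while len(active) > 1:
        rounds += 1
        fired = [i for i in active if rng.random() < 0.5]
        if fired:              # inhibition silences the non-firers
            active = fired
        # if nobody fired, everyone stays in contention for another round
    return active[0], rounds

print(winner_take_all(100))    # expected O(log n) rounds
```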

Integrating Temporal Information to Spatial Information in a Neural Circuit

Mar 01, 2019
Mien Brabeeba Wang, Nancy Lynch

In this paper, we consider a network of spiking neurons with a deterministic synchronous firing rule at discrete time. We propose three problems -- "first consecutive spikes counting", "total spikes counting" and "$k$-spikes temporal to spatial encoding" -- to model how brains extract temporal information into spatial information under different neural codings. For a maximum input length $T$, we design three networks that solve these problems in time $O(T)$ using $O(\log T)$ neurons, and we give matching lower bounds on both quantities for all three problems.
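For intuition about the $O(\log T)$ bound, a running count of spikes can be kept in binary across $\lceil \log_2(T+1) \rceil$ bits, turning a temporal spike train into a spatial pattern. The sketch below does the carry logic in plain code, as an illustrative stand-in for the paper's network construction.

```python
# Minimal sketch of the O(log T)-neuron idea for "total spikes counting":
# maintain the running count in binary, one neuron per bit, converting the
# temporal spike train into a spatial pattern. Plain-code stand-in.

import math

def count_spikes(spike_train):
    """Fold a 0/1 spike train into a binary register of ceil(log2(T+1)) bits."""
    T = len(spike_train)
    bits = [0] * max(1, math.ceil(math.log2(T + 1)))
    for s in spike_train:
        if s:                        # increment the binary counter by one
            carry = 1
            for i in range(len(bits)):
                bits[i], carry = (bits[i] + carry) % 2, (bits[i] + carry) // 2
                if carry == 0:
                    break
    return bits                      # least-significant bit first

print(count_spikes([1, 0, 1, 1, 0, 1]))  # 4 spikes -> [0, 0, 1]
```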

A Basic Compositional Model for Spiking Neural Networks

Aug 12, 2018
Nancy Lynch, Cameron Musco

This paper is part of a project on developing an algorithmic theory of brain networks, based on stochastic Spiking Neural Network (SNN) models. Inspired by tasks that seem to be solved in actual brains, we are defining abstract problems to be solved by these networks. In our work so far, we have developed models and algorithms for the Winner-Take-All problem from computational neuroscience [LMP17a, Mus18], and for problems of similarity detection and neural coding [LMP17b]. We plan to consider many other problems and networks, including both static networks and networks that learn. This paper is about basic theory for the stochastic SNN model. In particular, we define a simple version of the model. This version assumes that the neurons' only state is a Boolean, indicating whether the neuron is firing or not. In later work, we plan to develop variants of the model with more elaborate state. We also define an external behavior notion for SNNs, which can be used for stating requirements to be satisfied by the networks. We then define a composition operator for SNNs. We prove that our external behavior notion is "compositional", in the sense that the external behavior of a composed network depends only on the external behaviors of the component networks. We also define a hiding operator that reclassifies some output behavior of an SNN as internal. We give basic results for hiding. Finally, we give a formal definition of a problem to be solved by an SNN, and give basic results showing how composition and hiding of networks affect the problems that they solve. We illustrate our definitions with three examples: building a circuit out of gates, building an "Attention" network out of a "Winner-Take-All" network and a "Filter" network, and a toy example involving combining two networks in a cyclic fashion.
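A minimal sketch of the composition idea at the external-behavior level, modeling a network as a function from the firing pattern on its input neurons to the firing pattern on its output neurons (Boolean state only, as in the paper's simple model). The gate networks are toy examples in the spirit of the paper's circuit illustration, not its formal definitions.

```python
# Minimal sketch of SNN composition at the external-behavior level: a
# network is a function from named Boolean input firings to named Boolean
# output firings; composition wires one network's outputs into the other's
# inputs. Toy examples, not the paper's formal model.

def compose(net1, net2):
    """Feed net1's output dict into net2; external behavior of the composite."""
    def composite(inputs):
        return net2(net1(inputs))
    return composite

def and_gate(inp):
    return {"z": inp["x"] and inp["y"]}

def not_gate(inp):
    return {"out": not inp["z"]}

nand = compose(and_gate, not_gate)
print(nand({"x": True, "y": True}))   # {'out': False}
print(nand({"x": True, "y": False}))  # {'out': True}
```

The compositionality result in the paper says exactly that this kind of wiring is well-defined at the level of external behaviors, without reference to the components' internal structure.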

Collaboratively Learning the Best Option, Using Bounded Memory

Mar 06, 2018
Lili Su, Martin Zubeldia, Nancy Lynch

We consider multi-armed bandit problems in social groups wherein each individual has bounded memory and shares the common goal of learning the best arm/option. We say an individual learns the best option if eventually (as $t \to \infty$) it pulls only the arm with the highest average reward. While this goal is provably impossible for an isolated individual, we show that, in social groups, this goal can be achieved easily with the aid of social persuasion, i.e., communication. Specifically, we study the learning dynamics wherein an individual sequentially decides on which arm to pull next based on not only its private reward feedback but also the suggestions provided by randomly chosen peers. Our learning dynamics are hard to analyze via explicit probabilistic calculations due to the stochastic dependency induced by social interaction. Instead, we employ the mean-field approximation method from statistical physics and we show: (1) With probability $\to 1$ as the social group size $N \to \infty$, every individual in the social group learns the best option. (2) Over an arbitrary finite time horizon $[0, T]$, with high probability (in $N$), the fraction of individuals that prefer the best option grows to 1 exponentially fast as $t$ increases ($t \in [0, T]$). A major innovation of our mean-field analysis is a simple yet powerful technique to deal with absorbing states in the interchange of limits $N \to \infty$ and $t \to \infty$. The mean-field approximation method allows us to approximate the probabilistic sample paths of our learning dynamics by a deterministic and smooth trajectory that corresponds to the unique solution of a well-behaved system of ordinary differential equations (ODEs). Such an approximation is desirable because the analysis of a system of ODEs is relatively easier than that of the original stochastic system.
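The simulation below sketches the flavor of such dynamics with bounded memory and social persuasion. The specific update rule (adopt a random peer's preference after an unrewarded pull) is an illustrative assumption, not the paper's exact dynamics, but it exhibits the same mean-field behavior: the fraction preferring the best arm grows toward 1.

```python
# Minimal sketch of bounded-memory bandit learning with social persuasion:
# each individual stores only its currently preferred arm, and after an
# unrewarded pull adopts the preference of a randomly chosen peer.
# The update rule and constants are illustrative assumptions.

import random

def simulate(N=1000, steps=200, p=(0.3, 0.7), rng=random.Random(1)):
    pref = [rng.randrange(2) for _ in range(N)]   # one remembered arm each
    for _ in range(steps):
        for i in range(N):
            arm = pref[i]
            rewarded = rng.random() < p[arm]      # Bernoulli reward
            if not rewarded:
                # social persuasion: adopt a random peer's preference
                pref[i] = pref[rng.randrange(N)]
    return sum(a == 1 for a in pref) / N          # fraction on the best arm

print(simulate())  # fraction preferring the best arm approaches 1
```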

Neuro-RAM Unit with Applications to Similarity Testing and Compression in Spiking Neural Networks

Aug 21, 2017
Nancy Lynch, Cameron Musco, Merav Parter

We study distributed algorithms implemented in a simplified biologically inspired model for stochastic spiking neural networks. We focus on tradeoffs between computation time and network complexity, along with the role of randomness in efficient neural computation. It is widely accepted that neural computation is inherently stochastic. In recent work, we explored how this stochasticity could be leveraged to solve the "winner-take-all" leader election task. Here, we focus on using randomness in neural algorithms for similarity testing and compression. In the most basic setting, given two $n$-length patterns of firing neurons, we wish to distinguish if the patterns are equal or $\epsilon$-far from equal. Randomization allows us to solve this task with a very compact network, using $O \left (\frac{\sqrt{n}\log n}{\epsilon}\right)$ auxiliary neurons, which is sublinear in the input size. At the heart of our solution is the design of a $t$-round neural random access memory, or indexing network, which we call a neuro-RAM. This module can be implemented with $O(n/t)$ auxiliary neurons and is useful in many applications beyond similarity testing. Using a VC dimension-based argument, we show that the tradeoff between runtime and network size in our neuro-RAM is nearly optimal. Our result has several implications -- since our neuro-RAM can be implemented with deterministic threshold gates, it shows that, in contrast to similarity testing, randomness does not provide significant computational advantages for this problem. It also establishes a separation between feedforward networks whose gates spike with sigmoidal probability functions, and well-studied deterministic sigmoidal networks, whose gates output real-valued sigmoidal values, and which can implement a neuro-RAM much more efficiently.
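At the algorithmic level, the similarity test can be sketched as follows: to distinguish equal patterns from $\epsilon$-far ones, compare them at roughly $1/\epsilon$ random indices. In the paper each random access is realized by the neuro-RAM indexing network; the sketch below uses a plain array lookup as a stand-in.

```python
# Minimal sketch of randomized similarity testing: sample ~1/eps random
# indices and compare. In the paper each random access goes through a
# neuro-RAM indexing network; here it is a plain array lookup.

import random

def similar(x, y, eps=0.1, rng=random.Random(42)):
    """Return True if no sampled index distinguishes x from y.

    Equal inputs always pass; eps-far inputs (differing in >= eps * n
    positions) are caught with probability >= 1 - (1 - eps)**k.
    """
    n = len(x)
    k = int(3 / eps)                  # ~1/eps samples for constant error
    return all(x[i] == y[i] for i in (rng.randrange(n) for _ in range(k)))

x = [1, 0] * 500
y = list(x)
print(similar(x, y))                  # True: identical patterns
y[::7] = [1 - b for b in y[::7]]      # flip ~14% of positions
print(similar(x, y))                  # almost surely False
```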
