L. F. Abbott

Theory of coupled neuronal-synaptic dynamics

Feb 17, 2023
David G. Clark, L. F. Abbott

Dimension of Activity in Random Neural Networks

Aug 07, 2022
David G. Clark, L. F. Abbott, Ashok Litwin-Kumar

The Implicit Bias of Gradient Descent on Generalized Gated Linear Networks

Feb 05, 2022
Samuel Lippl, L. F. Abbott, SueYeon Chung

Input correlations impede suppression of chaos and learning in balanced rate networks

Jan 24, 2022
Rainer Engelken, Alessandro Ingrosso, Ramin Khajeh, Sven Goedeke, L. F. Abbott

Credit Assignment Through Broadcasting a Global Error Vector

Jun 08, 2021
David G. Clark, L. F. Abbott, SueYeon Chung

Neural population geometry: An approach for understanding biological and artificial neural networks

Apr 17, 2021
SueYeon Chung, L. F. Abbott

Training dynamically balanced excitatory-inhibitory networks

Dec 29, 2018
Alessandro Ingrosso, L. F. Abbott

Feedback alignment in deep convolutional networks

Dec 12, 2018
Theodore H. Moskovitz, Ashok Litwin-Kumar, L. F. Abbott

full-FORCE: A Target-Based Method for Training Recurrent Networks

Oct 09, 2017
Brian DePasquale, Christopher J. Cueva, Kanaka Rajan, G. Sean Escola, L. F. Abbott