Feifei Zhao

Adaptive Reorganization of Neural Pathways for Continual Learning with Hybrid Spiking Neural Networks

Sep 18, 2023
Bing Han, Feifei Zhao, Wenxuan Pan, Zhaoya Zhao, Xianqi Li, Qingqun Kong, Yi Zeng

The human brain can self-organize rich and diverse sparse neural pathways to incrementally master hundreds of cognitive tasks. However, most existing continual learning algorithms for deep artificial and spiking neural networks cannot adequately auto-regulate the limited resources in the network, which leads to performance drops and rising energy consumption as the number of tasks increases. In this paper, we propose a brain-inspired continual learning algorithm with adaptive reorganization of neural pathways, which employs Self-Organizing Regulation networks to reorganize a single, limited Spiking Neural Network (SOR-SNN) into rich, sparse neural pathways that efficiently cope with incremental tasks. The proposed model demonstrates consistent superiority in performance, energy consumption, and memory capacity on diverse continual learning tasks ranging from child-like simple tasks to complex ones, as well as on the generalized CIFAR100 and ImageNet datasets. In particular, the SOR-SNN model excels at learning more complex tasks as well as larger numbers of tasks, and it can integrate past knowledge with information from the current task, showing a backward-transfer ability that facilitates old tasks. Meanwhile, the proposed model exhibits self-repair after irreversible damage: for pruned networks, it can automatically allocate new pathways from the retained network to recover the memory of forgotten knowledge.
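
As an informal illustration of the pathway-reorganization idea (not the authors' implementation), the sketch below shows a small regulation network that gates a shared weight matrix into a task-specific sparse pathway; all names, sizes, and the top-k masking rule are assumptions.

```python
import torch
import torch.nn as nn

class RegulatedLayer(nn.Module):
    def __init__(self, in_dim, out_dim, num_tasks, sparsity=0.8):
        super().__init__()
        # shared, limited weight pool reused across all tasks
        self.weight = nn.Parameter(0.05 * torch.randn(out_dim, in_dim))
        self.task_emb = nn.Embedding(num_tasks, 16)
        # hypothetical regulation net: task embedding -> per-connection score
        self.regulator = nn.Sequential(
            nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, out_dim * in_dim)
        )
        self.sparsity = sparsity

    def forward(self, x, task_id):
        scores = self.regulator(self.task_emb(task_id)).view_as(self.weight)
        k = max(1, int(scores.numel() * (1 - self.sparsity)))  # connections to keep
        thresh = scores.flatten().topk(k).values.min()
        mask = (scores >= thresh).float()  # sparse, task-specific pathway
        # note: end-to-end training would need a straight-through estimator for the mask
        return x @ (self.weight * mask).t()

layer = RegulatedLayer(in_dim=100, out_dim=50, num_tasks=10)
y = layer(torch.randn(8, 100), torch.tensor(3))
print(y.shape)  # torch.Size([8, 50])
```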

Brain-inspired Evolutionary Architectures for Spiking Neural Networks

Sep 11, 2023
Wenxuan Pan, Feifei Zhao, Zhuoya Zhao, Yi Zeng

The complex and unique neural network topology of the human brain, formed through natural evolution, enables it to perform multiple cognitive functions simultaneously. The automated evolution of biological network structure inspires us to explore efficient architectural optimization for Spiking Neural Networks (SNNs). Instead of manually designed fixed architectures or hierarchical Network Architecture Search (NAS), this paper evolves SNN architectures by incorporating brain-inspired local modular structure and global cross-module connectivity. Locally, a brain-region-inspired module consists of multiple neural motifs with excitatory and inhibitory connections; globally, we evolve free connections among modules, including long-range cross-module feedforward and feedback connections. We further introduce an efficient multi-objective evolutionary algorithm based on a few-shot performance predictor, endowing SNNs with high performance, efficiency, and low energy consumption. Extensive experiments on static datasets (CIFAR10, CIFAR100) and neuromorphic datasets (CIFAR10-DVS, DVS128-Gesture) demonstrate that our proposed model boosts energy efficiency while achieving consistent and remarkable performance. This work explores brain-inspired neural architectures suitable for SNNs and also provides preliminary insights into the evolutionary mechanisms of biological neural networks in the human brain.
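
The following toy loop, under stated assumptions, illustrates how a few-shot surrogate predictor can stand in for full training inside a two-objective (performance vs. energy) evolutionary search; the genome encoding, scoring heuristic, and energy proxy are all invented for illustration.

```python
import random

def random_genome(n_modules=4):
    # genome: one motif type per module + global cross-module links
    motifs = [random.choice(["ff", "fb", "lateral"]) for _ in range(n_modules)]
    links = [(i, j) for i in range(n_modules) for j in range(n_modules)
             if i != j and random.random() < 0.3]
    return motifs, links

def surrogate_score(genome):
    # stand-in for a few-shot performance predictor (toy heuristic, not trained)
    motifs, links = genome
    return 0.1 * motifs.count("fb") + 0.05 * len(links) + random.random()

def energy_cost(genome):
    _, links = genome
    return len(links)  # proxy: more cross-module connections -> more spike traffic

def dominates(a, b):  # maximize score (a[0]), minimize energy (a[1])
    return a[0] >= b[0] and a[1] <= b[1] and a != b

pop = [random_genome() for _ in range(20)]
for gen in range(10):
    scored = [(surrogate_score(g), energy_cost(g), g) for g in pop]
    # keep the non-dominated (Pareto) front, refill with fresh genomes
    front = [g for s, e, g in scored
             if not any(dominates((s2, e2), (s, e)) for s2, e2, _ in scored)]
    pop = front + [random_genome() for _ in range(20 - len(front))]
print(f"Pareto front size: {len(front)}")
```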

Metaplasticity: Unifying Learning and Homeostatic Plasticity in Spiking Neural Networks

Aug 23, 2023
Guobin Shen, Dongcheng Zhao, Yiting Dong, Yang Li, Feifei Zhao, Yi Zeng

The natural evolution of the human brain has given rise to multiple forms of synaptic plasticity, allowing for dynamic changes that adapt to an ever-evolving world. The evolutionary development of synaptic plasticity has spurred our exploration of biologically plausible optimization and learning algorithms for Spiking Neural Networks (SNNs). Present neural networks rely on the direct training of synaptic weights, which ultimately leads to fixed connections and hampers their ability to adapt to dynamic real-world environments. To address this challenge, we introduce metaplasticity -- a sophisticated mechanism involving the learning of plasticity rules rather than the direct modification of synaptic weights. Metaplasticity dynamically combines different plasticity rules, effectively enhancing working memory, multitask generalization, and adaptability while uncovering potential associations between various forms of plasticity and cognitive functions. By integrating metaplasticity into SNNs, we demonstrate enhanced adaptability and cognitive capabilities in artificial intelligence systems. This computational perspective unveils the learning mechanisms of the brain, marking a significant step in the intersection of neuroscience and artificial intelligence.
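
A minimal sketch of the metaplasticity idea, assuming a small fixed rule set: a meta-level vector of coefficients blends Hebbian, Oja, and decay rules, and the weights themselves change only through those rules. The exact rule set and update form in the paper may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(0, 0.1, (20, 10))   # synapses, updated only via plasticity rules
alpha = np.array([0.5, 0.3, 0.2])  # meta-parameters (one per rule); these would be learned

def plasticity_step(W, pre, post, lr=0.01):
    hebb = np.outer(post, pre)               # Hebbian: fire together, wire together
    oja = hebb - (post ** 2)[:, None] * W    # Oja: Hebbian with stabilizing decay
    decay = -W                               # homeostatic pull toward zero
    dW = alpha[0] * hebb + alpha[1] * oja + alpha[2] * decay
    return W + lr * dW

for _ in range(100):
    pre = rng.normal(size=10)
    post = np.tanh(W @ pre)
    W = plasticity_step(W, pre, post)
print("weight norm after plastic updates:", np.linalg.norm(W).round(3))
```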

Enhancing Efficient Continual Learning with Dynamic Structure Development of Spiking Neural Networks

Aug 09, 2023
Bing Han, Feifei Zhao, Yi Zeng, Wenxuan Pan, Guobin Shen

Children possess the ability to learn multiple cognitive tasks sequentially, an ability that remains a major challenge on the path toward the long-term goal of artificial general intelligence. Existing continual learning frameworks usually apply to Deep Neural Networks (DNNs) and leave largely unexplored the more brain-inspired, energy-efficient Spiking Neural Networks (SNNs). Drawing on continual learning mechanisms during child growth and development, we propose Dynamic Structure Development of Spiking Neural Networks (DSD-SNN) for efficient and adaptive continual learning. When learning a sequence of tasks, DSD-SNN dynamically assigns and grows new neurons for new tasks and prunes redundant neurons, thereby increasing memory capacity and reducing computational overhead. In addition, the overlapping shared structure helps to quickly leverage all acquired knowledge for new tasks, enabling a single network to support multiple incremental tasks without a separate sub-network mask for each task (see the sketch below). We validate the effectiveness of the proposed model on multiple class-incremental and task-incremental learning benchmarks. Extensive experiments demonstrate that our model significantly improves performance, learning speed, and memory capacity while reducing computational overhead. In addition, DSD-SNN achieves performance comparable to DNN-based methods and significantly outperforms the state-of-the-art (SOTA) among existing SNN-based continual learning methods.

* IJCAI 2023, camera-ready
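
As a rough illustration of the grow-and-prune dynamics (not the DSD-SNN code; thresholds, traces, and sizes are assumptions), the network below adds a block of hidden units per task, keeps all grown units shared across tasks, and prunes units whose activity trace stays low.

```python
import numpy as np

class GrowPruneNet:
    def __init__(self, in_dim):
        self.in_dim = in_dim
        self.W = np.zeros((0, in_dim))   # hidden units grown so far
        self.trace = np.zeros(0)         # running activity trace per unit

    def grow(self, n_new, rng):
        self.W = np.vstack([self.W, rng.normal(0, 0.1, (n_new, self.in_dim))])
        self.trace = np.concatenate([self.trace, np.zeros(n_new)])

    def forward(self, x):
        h = np.maximum(self.W @ x, 0.0)            # all grown units serve every task
        self.trace = 0.99 * self.trace + 0.01 * h  # track which units stay active
        return h

    def prune(self, threshold=0.1):
        keep = self.trace > threshold              # drop redundant, rarely active units
        self.W, self.trace = self.W[keep], self.trace[keep]

rng = np.random.default_rng(0)
net = GrowPruneNet(in_dim=8)
for task in range(3):
    net.grow(16, rng)                              # new capacity assigned to the new task
    for _ in range(200):
        net.forward(rng.normal(size=8))
    net.prune()
    print(f"task {task}: {net.W.shape[0]} units retained")
```
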
Multi-scale Evolutionary Neural Architecture Search for Deep Spiking Neural Networks

Apr 21, 2023
Wenxuan Pan, Feifei Zhao, Guobin Shen, Bing Han, Yi Zeng

Spiking Neural Networks (SNNs) have received considerable attention not only for their superior energy efficiency with discrete signal processing, but also for their natural suitability for integrating multi-scale biological plasticity. However, most SNNs directly adopt well-established DNN structures; Neural Architecture Search (NAS) is rarely used to design architectures automatically for SNNs. The neural-motif topology, modular regional structure, and global cross-region connectivity of the human brain are products of natural evolution and can serve as a perfect reference for designing brain-inspired SNN architectures. In this paper, we propose Multi-Scale Evolutionary Neural Architecture Search (MSE-NAS) for SNNs, simultaneously considering micro-, meso-, and macro-scale brain topologies as the evolutionary search space. MSE-NAS evolves individual neuron operations, the self-organized integration of multiple circuit motifs, and global connectivity across motifs through a brain-inspired indirect evaluation function based on Representational Dissimilarity Matrices (RDMs). This training-free fitness function greatly reduces computational cost and search time, and its task-independent property enables the searched SNNs to exhibit excellent transferability and scalability. Extensive experiments demonstrate that the proposed algorithm achieves state-of-the-art (SOTA) performance with shorter simulation steps on static datasets (CIFAR10, CIFAR100) and neuromorphic datasets (CIFAR10-DVS and DVS128-Gesture). A thorough analysis also illustrates the significant performance improvements and consistent bio-interpretability deriving from topological evolution at different scales and from the RDM fitness function.
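
The snippet below sketches an RDM-style, training-free fitness signal under my own assumptions about its form: an untrained candidate's responses to a stimulus set are turned into a representational dissimilarity matrix and correlated with a reference RDM, so no candidate ever needs to be trained to be scored.

```python
import numpy as np

def rdm(responses):
    # responses: (n_stimuli, n_features) -> (n_stimuli, n_stimuli) dissimilarity
    corr = np.corrcoef(responses)   # stimulus-by-stimulus correlation
    return 1.0 - corr

def rdm_fitness(candidate_resp, reference_rdm):
    cand = rdm(candidate_resp)
    iu = np.triu_indices_from(cand, k=1)  # compare upper triangles only
    return np.corrcoef(cand[iu], reference_rdm[iu])[0, 1]

rng = np.random.default_rng(0)
stimuli = rng.normal(size=(32, 64))
reference = rdm(rng.normal(size=(32, 128)))  # stand-in for target representations
W = rng.normal(0, 0.1, (64, 256))            # untrained candidate: a random projection
fitness = rdm_fitness(np.maximum(stimuli @ W, 0.0), reference)
print("RDM fitness:", round(float(fitness), 3))
```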

Adaptive structure evolution and biologically plausible synaptic plasticity for recurrent spiking neural networks

Mar 31, 2023
Wenxuan Pan, Feifei Zhao, Yi Zeng, Bing Han

The architecture design and multi-scale learning principles of the human brain, which evolved over hundreds of millions of years, are crucial to realizing human-like intelligence. The Spiking Neural Network (SNN)-based Liquid State Machine (LSM) is a suitable architecture for studying brain-inspired intelligence because of its brain-inspired structure and its potential for integrating multiple biological principles. Existing research on LSMs focuses on particular perspectives, such as high-dimensional encoding or optimization of the liquid layer, network architecture search, and application to hardware devices, but still draws little in-depth inspiration from the brain's learning and structural evolution mechanisms. Considering these limitations, this paper presents a novel LSM learning model that integrates adaptive structural evolution and multi-scale biological learning rules. For structural evolution, an adaptive evolvable LSM model is developed to optimize the neural architecture of the liquid layer with a separation-property criterion. For brain-inspired learning in the LSM, we propose a dopamine-modulated Bienenstock-Cooper-Munro (DA-BCM) method that incorporates global long-term dopamine regulation and local trace-based BCM synaptic plasticity. Comparative experiments on different decision-making tasks show that introducing structural evolution of the liquid layer, together with DA-BCM regulation of the liquid and readout layers, improves the decision-making ability of the LSM and allows it to adapt flexibly to rule reversal. This work explores how evolution can help design more appropriate network architectures and how multi-scale neuroplasticity principles coordinate to enable the optimization and learning of LSMs for relatively complex decision-making tasks.
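
A hedged sketch of what a dopamine-modulated BCM update could look like (the paper's exact DA-BCM equations may differ): BCM's sliding-threshold local rule, with a global reward signal gating the magnitude and sign of each update.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(0, 0.1, 10)  # synapses onto one readout neuron
theta = 0.1                 # BCM sliding threshold

def da_bcm_step(w, theta, pre, reward, lr=0.01, tau=0.9):
    post = float(w @ pre)
    # BCM: potentiate when post-activity exceeds theta, depress otherwise;
    # the dopamine (reward) signal gates the local update globally
    dw = lr * reward * post * (post - theta) * pre
    theta = tau * theta + (1 - tau) * post ** 2  # threshold tracks recent activity
    return w + dw, theta

for _ in range(100):
    pre = rng.random(10)
    reward = 1.0 if pre.mean() > 0.5 else -1.0   # toy reward signal
    w, theta = da_bcm_step(w, theta, pre, reward)
print("theta:", round(theta, 4), "||w||:", round(float(np.linalg.norm(w)), 4))
```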

Multi-compartment Neuron and Population Encoding improved Spiking Neural Network for Deep Distributional Reinforcement Learning

Jan 18, 2023
Yinqian Sun, Yi Zeng, Feifei Zhao, Zhuoya Zhao

Inspired by the brain's information processing with binary spikes, spiking neural networks (SNNs) exhibit significantly lower energy consumption and are better suited to incorporating multi-scale biological characteristics. Spiking neurons, the basic information-processing units of SNNs, are simplified in most SNNs to leaky integrate-and-fire (LIF) point neurons, which ignore the multi-compartment structural properties of biological neurons. This limits the computational and learning capabilities of SNNs. In this paper, we propose a brain-inspired SNN-based deep distributional reinforcement learning algorithm that combines a bio-inspired multi-compartment neuron (MCN) model with a population coding method. The proposed multi-compartment neuron models the structure and function of apical-dendrite, basal-dendrite, and somatic computing compartments to approach the computational power of biological neurons. In addition, we present an implicit fractional embedding method based on population encoding of spiking neurons. We tested our model on Atari games, and the experimental results show that its performance surpasses the vanilla ANN-based FQF model and the ANN-SNN-conversion-based Spiking-FQF model. Ablation experiments show that the proposed multi-compartment neuron model and the quantile-fraction implicit population spike representation play important roles in realizing SNN-based deep distributional reinforcement learning.
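
The following toy dynamics illustrate the multi-compartment idea under illustrative constants and couplings (not the paper's exact MCN equations): apical and basal compartments integrate their own inputs and feed currents into the soma, which spikes and resets.

```python
import numpy as np

def mcn_step(v_soma, v_apical, v_basal, i_apical, i_basal,
             tau=0.9, g_a=0.3, g_b=0.5, v_th=1.0):
    # dendritic compartments integrate their own input currents with leak
    v_apical = tau * v_apical + i_apical
    v_basal = tau * v_basal + i_basal
    # soma integrates leak plus currents flowing in from both dendrites
    v_soma = tau * v_soma + g_a * v_apical + g_b * v_basal
    spike = float(v_soma >= v_th)
    v_soma = v_soma * (1.0 - spike)  # reset on spike
    return v_soma, v_apical, v_basal, spike

rng = np.random.default_rng(0)
v_s = v_a = v_b = 0.0
spikes = 0
for _ in range(100):
    v_s, v_a, v_b, s = mcn_step(v_s, v_a, v_b,
                                rng.random() * 0.2, rng.random() * 0.2)
    spikes += s
print("spikes in 100 steps:", int(spikes))
```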

A Brain-inspired Memory Transformation based Differentiable Neural Computer for Reasoning-based Question Answering

Jan 07, 2023
Yao Liang, Hongjian Fang, Yi Zeng, Feifei Zhao

Reasoning and question answering, though basic cognitive functions for humans, remain a great challenge for current artificial intelligence. Although the Differentiable Neural Computer (DNC) model can solve such problems to a certain extent, its development is still limited by high algorithmic complexity, slow convergence, and poor test robustness. Inspired by the learning and memory mechanisms of the brain, this paper proposes a Memory Transformation based Differentiable Neural Computer (MT-DNC) model. MT-DNC incorporates working memory and long-term memory into the DNC and realizes the autonomous transformation of acquired experience between them, thereby helping to extract acquired knowledge effectively and improve reasoning ability. Experimental results on the bAbI question-answering task demonstrate that our proposed method achieves superior performance and faster convergence than other existing DNN and DNC models. Ablation studies also indicate that the memory transformation from working memory to long-term memory plays an essential role in improving the robustness and stability of reasoning. This work explores how brain-inspired memory transformation can be integrated into and applied to complex intelligent dialogue and reasoning systems.
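
As a conceptual sketch of working-to-long-term memory transformation (slot layout, usage counters, and the consolidation rule are my assumptions, not the MT-DNC mechanism), frequently read working-memory slots are copied into a growing long-term store and then freed for new content.

```python
import numpy as np

rng = np.random.default_rng(0)
wm = rng.normal(size=(8, 16))  # small, fast working memory (slots x width)
usage = np.zeros(8)            # read counts per working-memory slot
ltm = np.zeros((0, 16))        # growing long-term memory

# read phase: content-based addressing bumps the usage of the best-matching slot
for _ in range(50):
    query = rng.normal(size=16)
    idx = int(np.argmax(wm @ query))
    usage[idx] += 1.0

# consolidation: heavily used slots are transformed into long-term memory
hot = usage >= 5.0
ltm = np.vstack([ltm, wm[hot]])
wm[hot] = rng.normal(size=(int(hot.sum()), 16))  # freed slots get fresh capacity
usage[hot] = 0.0

print("LTM entries after consolidation:", ltm.shape[0])
```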

Developmental Plasticity-inspired Adaptive Pruning for Deep Spiking and Artificial Neural Networks

Nov 23, 2022
Bing Han, Feifei Zhao, Yi Zeng, Guobin Shen

Developmental plasticity plays a vital role in shaping the brain's structure during ongoing learning in response to dynamically changing environments. However, existing network compression methods for deep artificial neural networks (ANNs) and spiking neural networks (SNNs) draw little inspiration from the brain's developmental plasticity mechanisms, limiting their ability to learn efficiently, rapidly, and accurately. This paper proposes a developmental plasticity-inspired adaptive pruning (DPAP) method, inspired by the adaptive developmental pruning of dendritic spines, synapses, and neurons according to the "use it or lose it, gradually decay" principle. The DPAP model considers multiple biologically realistic mechanisms (such as dendritic spine dynamic plasticity, activity-dependent neural spiking traces, and local synaptic plasticity) together with an adaptive pruning strategy, so that the network structure can be dynamically optimized during learning without any pre-training or retraining. We demonstrate that DPAP, applied to deep ANNs and SNNs, learns efficient network architectures that retain only the relevant important connections and neurons. Extensive comparative experiments show consistent and remarkable performance and speed gains with extremely compressed networks on a diverse set of benchmark tasks, especially neuromorphic datasets for SNNs. This work explores how developmental plasticity enables complex deep networks to gradually evolve into brain-like, efficient, and compact structures, eventually achieving state-of-the-art (SOTA) performance for biologically realistic SNNs.
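
One plausible reading of the "use it or lose it, gradually decay" principle is sketched below (the paper's exact pruning criterion may differ): each synapse carries a decaying co-activity trace, and the least-used synapses are periodically removed during learning.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(0, 0.1, (32, 16))
mask = np.ones_like(W)    # 1 = synapse alive
trace = np.zeros_like(W)  # per-synapse "use" trace

for step in range(300):
    pre = (rng.random(16) < 0.2).astype(float)     # Bernoulli input spikes
    post = ((W * mask) @ pre > 0.1).astype(float)  # thresholded output spikes
    trace = 0.99 * trace + np.outer(post, pre)     # used synapses gain, unused decay
    if step % 100 == 99:                           # periodically drop the least-used 20%
        alive = mask == 1
        cutoff = np.quantile(trace[alive], 0.2)
        mask[alive & (trace <= cutoff)] = 0.0

print(f"synapses remaining: {int(mask.sum())} / {mask.size}")
```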

Adaptive Sparse Structure Development with Pruning and Regeneration for Spiking Neural Networks

Nov 22, 2022
Bing Han, Feifei Zhao, Yi Zeng, Wenxuan Pan

Spiking Neural Networks (SNNs) are more biologically plausible and computationally efficient than conventional deep neural networks. They therefore have a natural advantage in drawing on the sparse structural plasticity of brain development to alleviate the energy problems of deep neural networks caused by their complex and fixed structures. However, previous SNN compression work draws little in-depth inspiration from the brain's developmental plasticity mechanisms. This paper proposes a novel method for the adaptive structural development of SNNs (SD-SNN), introducing dendritic-spine-plasticity-based synaptic constraint, neuronal pruning, and synaptic regeneration. We find that synaptic constraint and neuronal pruning detect and remove a large amount of redundancy in SNNs, while coupling them with synaptic regeneration effectively prevents and repairs over-pruning. Moreover, inspired by the neurotrophic hypothesis, the neuronal pruning rate and synaptic regeneration rate are adaptively adjusted during the learning-while-pruning process, eventually leading to structural stability of the SNN. Experimental results on spatial (MNIST, CIFAR-10) and temporal neuromorphic (N-MNIST, DVS-Gesture) datasets demonstrate that our method can flexibly learn an appropriate compression rate for various tasks and achieve superior performance while massively reducing network energy consumption. Specifically, on the spatial MNIST dataset, SD-SNN achieves 99.51% accuracy at a pruning rate of 49.83%, a 0.05% accuracy improvement over the uncompressed baseline. On the neuromorphic DVS-Gesture dataset, our method achieves 98.20% accuracy, a 1.09% improvement, at a compression rate of 55.50%.
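
A small sketch of adaptive prune-and-regenerate dynamics, with an invented neurotrophic-style feedback on the rates (the paper's actual adaptation rule may differ): synapses are pruned by importance, a few pruned ones are regenerated, and the pruning rate relaxes as sparsity settles.

```python
import numpy as np

rng = np.random.default_rng(0)
mask = np.ones((64, 32))           # 1 = synapse present
importance = rng.random((64, 32))  # stand-in for spine/trace-based importance
prune_rate, regen_rate = 0.10, 0.02

for epoch in range(10):
    alive = mask == 1
    # prune: remove the least-important fraction of surviving synapses
    cutoff = np.quantile(importance[alive], prune_rate)
    mask[alive & (importance <= cutoff)] = 0.0
    # regenerate: revive a small random fraction of pruned synapses
    dead = np.argwhere(mask == 0)
    n_regen = int(regen_rate * len(dead))
    for i, j in dead[rng.permutation(len(dead))[:n_regen]]:
        mask[i, j] = 1.0
    # adapt: neurotrophic-style feedback slows pruning once the network is sparse
    if 1.0 - mask.mean() > 0.5:
        prune_rate *= 0.9
    importance = 0.9 * importance + 0.1 * rng.random((64, 32))  # refreshed traces

print(f"final density: {mask.mean():.2f}")
```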
