Lei Deng

High Perceptual Quality Wireless Image Delivery with Denoising Diffusion Models

Sep 27, 2023
Selim F. Yilmaz, Xueyan Niu, Bo Bai, Wei Han, Lei Deng, Deniz Gunduz

We consider the image transmission problem over a noisy wireless channel via deep learning-based joint source-channel coding (DeepJSCC) along with a denoising diffusion probabilistic model (DDPM) at the receiver. Specifically, we are interested in the perception-distortion trade-off in the practical finite block length regime, in which separate source and channel coding can be highly suboptimal. We introduce a novel scheme that utilizes the range-null space decomposition of the target image. We transmit the range-space component of the image after encoding and employ the DDPM to progressively refine its null-space contents. Through extensive experiments, we demonstrate significant improvements in the distortion and perceptual quality of the reconstructed images compared to standard DeepJSCC and a state-of-the-art generative learning-based method. We will publicly share our source code to facilitate further research and reproducibility.
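
The key building block here is the range-null space decomposition. As a rough illustration (a hypothetical PyTorch toy, not the authors' released code), for a linear operator A any image x splits into a range-space part A⁺Ax that is recoverable from Ax and a null-space part (I − A⁺A)x that the diffusion model must synthesize:

```python
# Hypothetical sketch of the range-null space decomposition: the range-space
# component survives the encoding/transmission, while the null-space component
# is what the DDPM at the receiver has to refine.
import torch

def range_null_decompose(x: torch.Tensor, A: torch.Tensor):
    """Split a flattened signal x into range- and null-space parts w.r.t. A."""
    A_pinv = torch.linalg.pinv(A)          # Moore-Penrose pseudo-inverse A^+
    x_range = A_pinv @ (A @ x)             # component recoverable from A x
    x_null = x - x_range                   # component invisible to A
    return x_range, x_null

# toy example: a 2x-downsampling operator on a length-8 signal
n = 8
A = torch.zeros(n // 2, n)
for i in range(n // 2):
    A[i, 2 * i] = A[i, 2 * i + 1] = 0.5    # average adjacent samples
x = torch.randn(n)
x_r, x_n = range_null_decompose(x, A)
# the null-space part is invisible to the operator: A x_null ~= 0
assert torch.allclose(A @ x_n, torch.zeros(n // 2), atol=1e-4)
print("range part:", x_r)
print("null  part:", x_n)
```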

* 6 pages, 4 figures 

Attention Spiking Neural Networks

Sep 28, 2022
Man Yao, Guangshe Zhao, Hengyu Zhang, Yifan Hu, Lei Deng, Yonghong Tian, Bo Xu, Guoqi Li

Benefiting from the event-driven and sparse spiking characteristics of the brain, spiking neural networks (SNNs) are becoming an energy-efficient alternative to artificial neural networks (ANNs). However, the performance gap between SNNs and ANNs has long been a great hindrance to deploying SNNs ubiquitously. To leverage the full potential of SNNs, we study the effect of attention mechanisms in SNNs. We first present our idea of attention as a plug-and-play kit, termed Multi-dimensional Attention (MA). Then, a new attention SNN architecture with end-to-end training, called "MA-SNN", is proposed, which infers attention weights along the temporal, channel, and spatial dimensions separately or simultaneously. Based on existing neuroscience theories, we exploit the attention weights to optimize membrane potentials, which in turn regulate the spiking response in a data-dependent way. At the cost of negligible additional parameters, MA enables vanilla SNNs to achieve sparser spiking activity, better performance, and greater energy efficiency concurrently. Experiments are conducted on event-based DVS128 Gesture/Gait action recognition and ImageNet-1k image classification. On Gesture/Gait, the spike counts are reduced by 84.9%/81.6%, and the task accuracy and energy efficiency are improved by 5.9%/4.7% and 3.4$\times$/3.2$\times$, respectively. On ImageNet-1K, we achieve top-1 accuracies of 75.92% and 77.08% with single-step and 4-step Res-SNN-104, which are state-of-the-art results for SNNs. To the best of our knowledge, this is the first time the SNN community has achieved performance comparable to or even better than its ANN counterpart on a large-scale dataset. Our work highlights the potential of SNNs as a general backbone for various applications, with a strong balance between effectiveness and efficiency.
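
As a rough sketch of the idea (a hypothetical PyTorch toy, not the authors' implementation; all names below are illustrative), a squeeze-and-excitation-style channel attention can re-weight the accumulated membrane potential of a spiking layer before thresholding, making the spiking response data-dependent; temporal and spatial attention follow the same pattern along other dimensions:

```python
# Hypothetical sketch: channel attention modulating membrane potentials in a
# spiking layer, so that spiking becomes data-dependent and sparser.
import torch
import torch.nn as nn

class ChannelAttentionLIF(nn.Module):
    def __init__(self, channels: int, reduction: int = 4, v_th: float = 1.0):
        super().__init__()
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                     # squeeze over H, W
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                                # per-channel weight in (0, 1)
        )
        self.v_th = v_th

    def forward(self, membrane: torch.Tensor) -> torch.Tensor:
        # membrane: (batch, channels, H, W) accumulated membrane potential
        gated = membrane * self.attn(membrane)           # attention optimizes the potentials
        return (gated >= self.v_th).float()              # binary spike output

x = torch.rand(2, 16, 8, 8) * 2.0
spikes = ChannelAttentionLIF(16)(x)
print("spike rate:", spikes.mean().item())
```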

* 18 pages, 8 figures, Under Review 

Contrastive learning-based computational histopathology predict differential expression of cancer driver genes

Apr 27, 2022
Haojie Huang, Gongming Zhou, Xuejun Liu, Lei Deng, Chen Wu, Dachuan Zhang, Hui Liu

Digital pathological analysis is the main examination used for cancer diagnosis. Recently, deep learning-driven feature extraction from pathology images has been able to detect genetic variations and the tumor environment, but few studies focus on differential gene expression in tumor cells. In this paper, we propose a self-supervised contrastive learning framework, HistCode, to infer differential gene expression from whole slide images (WSIs). We leveraged contrastive learning on large-scale unannotated WSIs to derive slide-level histopathological features in latent space, and then transferred them to tumor diagnosis and the prediction of differentially expressed cancer driver genes. Our extensive experiments showed that our method outperformed other state-of-the-art models in tumor diagnosis tasks and also effectively predicted differential gene expression. Interestingly, we found that genes with higher fold changes can be predicted more precisely. To intuitively illustrate the ability to extract informative features from pathological images, we spatially visualized the WSIs colored by the attention scores of image tiles. We found that the tumor and necrosis areas were highly consistent with the annotations of experienced pathologists. Moreover, the spatial heatmap generated by lymphocyte-specific gene expression patterns was also consistent with the manually labeled WSIs.
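
A minimal sketch of the slide-level aggregation step (hypothetical shapes and layer sizes, not the released HistCode code): per-tile attention scores pool a bag of tile embeddings into one slide-level vector, and those same scores can be painted back onto the WSI as a spatial heatmap.

```python
# Hypothetical attention pooling over tile embeddings from a contrastively
# pretrained encoder: yields a slide-level feature plus per-tile scores.
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    def __init__(self, dim: int = 512, hidden: int = 128):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, tiles: torch.Tensor):
        # tiles: (num_tiles, dim) embeddings of image tiles from one WSI
        a = torch.softmax(self.score(tiles), dim=0)      # (num_tiles, 1) attention scores
        slide_feature = (a * tiles).sum(dim=0)           # slide-level representation
        return slide_feature, a.squeeze(-1)              # scores drive the spatial heatmap

tiles = torch.randn(1000, 512)                           # e.g. 1000 tiles from one slide
slide_vec, scores = AttentionPool()(tiles)
print(slide_vec.shape, scores.shape)                     # torch.Size([512]) torch.Size([1000])
```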

Spiking Neural Network Integrated Circuits: A Review of Trends and Future Directions

Mar 14, 2022
Arindam Basu, Charlotte Frenkel, Lei Deng, Xueyong Zhang

In this paper, we review spiking neural network (SNN) integrated circuit designs and analyze the trends among mixed-signal cores, fully digital cores, and large-scale multi-core designs. Recently reported SNN integrated circuits are compared under three broad categories: (a) large-scale multi-core designs that have a dedicated network-on-chip (NoC) for spike routing, (b) digital single-core designs, and (c) mixed-signal single-core designs. Finally, we conclude with some directions for future progress.

A Deep Learning Approach to Predicting Ventilator Parameters for Mechanically Ventilated Septic Patients

Feb 21, 2022
Zhijun Zeng, Zhen Hou, Ting Li, Lei Deng, Jianguo Hou, Xinran Huang, Jun Li, Meirou Sun, Yunhan Wang, Qiyu Wu, Wenhao Zheng, Hua Jiang, Qi Wang

We develop a deep learning approach to predicting a set of ventilator parameters for a mechanically ventilated septic patient using a long short-term memory (LSTM) recurrent neural network (RNN) model. We focus on short-term predictions of a set of ventilator parameters for septic patients in the emergency intensive care unit (EICU). The short-term predictability of the model provides attending physicians with early warnings so that they can make timely adjustments to the patient's treatment in the EICU. The patient-specific deep learning model can be trained on any given critically ill patient, making it an intelligent aide for physicians to use in emergent medical situations.
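
A minimal sketch of such a model (a hypothetical PyTorch toy with assumed feature and target counts, not the paper's exact architecture): an LSTM reads a window of past vitals and ventilator settings and predicts the ventilator parameters at the next time step.

```python
# Hypothetical LSTM regressor: feature and target dimensions are assumptions.
import torch
import torch.nn as nn

class VentilatorLSTM(nn.Module):
    def __init__(self, n_features: int = 12, n_targets: int = 4, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_targets)         # e.g. a few ventilator settings

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        # history: (batch, time_steps, n_features) of past vitals + settings
        out, _ = self.lstm(history)
        return self.head(out[:, -1])                     # parameters at the next time step

window = torch.randn(8, 24, 12)                          # 8 patients, 24 past time steps
print(VentilatorLSTM()(window).shape)                    # torch.Size([8, 4])
```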

Survey on Graph Neural Network Acceleration: An Algorithmic Perspective

Feb 10, 2022
Xin Liu, Mingyu Yan, Lei Deng, Guoqi Li, Xiaochun Ye, Dongrui Fan, Shirui Pan, Yuan Xie

Graph neural networks (GNNs) have been a hot topic of recent research and are widely utilized in diverse applications. However, with ever-larger data and deeper models, there is an urgent demand to accelerate GNNs for more efficient execution. In this paper, we provide a comprehensive survey of acceleration methods for GNNs from an algorithmic perspective. We first present a new taxonomy that classifies existing acceleration methods into five categories. Based on this classification, we systematically discuss these methods and highlight their correlations. Next, we compare these methods in terms of their efficiency and characteristics. Finally, we suggest some promising prospects for future research.

* 8 pages 

Advancing Residual Learning towards Powerful Deep Spiking Neural Networks

Dec 23, 2021
Yifan Hu, Yujie Wu, Lei Deng, Guoqi Li

Despite the rapid progress of neuromorphic computing, the inadequate capacity and insufficient representation power of spiking neural networks (SNNs) severely restrict their application scope in practice. Residual learning and shortcuts have proven to be an important approach for training deep neural networks, but previous work rarely assessed their applicability to the characteristics of spike-based communication and spatiotemporal dynamics. In this paper, we first identify that this negligence leads to impeded information flow and an accompanying degradation problem in previous residual SNNs. We then propose a novel SNN-oriented residual block, MS-ResNet, which is able to significantly extend the depth of directly trained SNNs, e.g., up to 482 layers on CIFAR-10 and 104 layers on ImageNet, without observing any degradation problem. We validate the effectiveness of MS-ResNet on both frame-based and neuromorphic datasets, and MS-ResNet104 achieves a superior result of 76.02% accuracy on ImageNet, a first in the domain of directly trained SNNs. Great energy efficiency is also observed: on average, only one spike per neuron is needed to classify an input sample. We believe our powerful and scalable models will provide strong support for further exploration of SNNs.
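
As a hedged illustration of the block design (a PyTorch toy in the spirit of the paper, not the authors' code), placing the spiking activation before each convolution lets the identity shortcut carry real-valued, membrane-like activations rather than spikes, which helps keep gradients flowing in very deep directly trained SNNs; the arctan-style surrogate gradient below is a common choice, not necessarily the authors' exact one.

```python
# Hypothetical sketch of a pre-activation spiking residual block.
import math
import torch
import torch.nn as nn

class SpikeFn(torch.autograd.Function):
    """Heaviside spike with an arctan-style surrogate gradient (a common choice)."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v >= 1.0).float()                        # fire when potential crosses threshold

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        surrogate = 1.0 / (1.0 + (math.pi * (v - 1.0)) ** 2)
        return grad_out * surrogate

class MSBlock(nn.Module):
    """Residual block with spiking activations placed before the convolutions."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = self.bn1(self.conv1(SpikeFn.apply(x)))     # spike -> conv -> BN
        out = self.bn2(self.conv2(SpikeFn.apply(out)))
        return x + out                                   # shortcut carries real-valued activations

print(MSBlock(32)(torch.randn(2, 32, 16, 16)).shape)     # torch.Size([2, 32, 16, 16])
```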

ES-ImageNet: A Million Event-Stream Classification Dataset for Spiking Neural Networks

Oct 23, 2021
Yihan Lin, Wei Ding, Shaohua Qiang, Lei Deng, Guoqi Li

With event-driven algorithms, especially spiking neural networks (SNNs), achieving continuous improvement in neuromorphic vision processing, a more challenging event-stream (ES) dataset is urgently needed. However, it is well known that creating an ES dataset with neuromorphic cameras such as dynamic vision sensors (DVS) is a time-consuming and costly task. In this work, we propose a fast and effective algorithm termed Omnidirectional Discrete Gradient (ODG) to convert the popular computer vision dataset ILSVRC2012 into its event-stream version, converting about 1,300,000 frame-based images into ES samples across 1,000 categories. The resulting ES dataset, called ES-ImageNet, is dozens of times larger than other current neuromorphic classification datasets and is generated entirely in software. The ODG algorithm simulates image motion to generate local value changes with discrete gradient information in different directions, providing a low-cost and high-speed way to convert frame-based images into event streams, along with an Edge-Integral algorithm to reconstruct high-quality images from the event streams. Furthermore, we analyze the statistics of ES-ImageNet in multiple ways and provide a performance benchmark for the dataset using both well-known deep neural network algorithms and spiking neural network algorithms. We believe this work will provide a new large-scale benchmark dataset for SNNs and neuromorphic vision.
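
As a rough toy illustration of the conversion idea (a hypothetical NumPy sketch, not the released ODG code; directions and threshold are assumptions), translating a static image along several directions, taking discrete intensity differences, and thresholding them into ON/OFF events turns a single frame into an event stream:

```python
# Hypothetical toy: frame-to-events conversion via simulated motion and
# thresholded discrete gradients along several directions.
import numpy as np

def toy_frame_to_events(img: np.ndarray, threshold: float = 0.05):
    """img: 2-D grayscale array in [0, 1]; returns a list of (t, y, x, polarity)."""
    shifts = [(0, 1), (1, 0), (1, 1), (-1, 1)]           # a few motion directions
    events = []
    for t, (dy, dx) in enumerate(shifts):
        moved = np.roll(img, shift=(dy, dx), axis=(0, 1))
        diff = moved - img                               # discrete gradient along the motion
        ys, xs = np.nonzero(np.abs(diff) > threshold)
        for y, x in zip(ys, xs):
            events.append((t, y, x, 1 if diff[y, x] > 0 else -1))
    return events

frame = np.random.rand(32, 32)
print(len(toy_frame_to_events(frame)), "events from one 32x32 frame")
```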

Graph2MDA: a multi-modal variational graph embedding model for predicting microbe-drug associations

Aug 14, 2021
Lei Deng, Yibiao Huang, Xuejun Liu, Hui Liu

Accumulated clinical studies show that microbes living in humans interact closely with their human hosts and are involved in modulating drug efficacy and drug toxicity. Microbes have become novel targets for the development of antibacterial agents. Therefore, screening of microbe-drug associations can greatly benefit drug research and development. With the growth of microbial genomic and pharmacological datasets, we are motivated to develop an effective computational method to identify new microbe-drug associations. In this paper, we propose a novel method, Graph2MDA, to predict microbe-drug associations using a variational graph autoencoder (VGAE). We constructed multi-modal attributed graphs based on multiple features of microbes and drugs, such as molecular structures, microbe genetic sequences, and function annotations. Taking the multi-modal attributed graphs as input, the VGAE was trained to learn informative and interpretable latent representations of each node and the whole graph, and a deep neural network classifier was then used to predict microbe-drug associations. Hyperparameter analysis and model ablation studies showed the sensitivity and robustness of our model. We evaluated our method on three independent datasets, and the experimental results showed that our proposed method outperformed six existing state-of-the-art methods. We also explored the meaning of the learned latent representations of drugs and found that the drugs show clear clustering patterns that are significantly consistent with the drug ATC classification. Moreover, we conducted case studies on two microbes and two drugs and found that 75%-95% of the predicted associations have been reported in the PubMed literature. Our extensive performance evaluations validate the effectiveness of our proposed method.
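
A hedged sketch of the pipeline (layer sizes, graph construction, and node indexing are illustrative assumptions, not the released Graph2MDA code): a variational graph autoencoder over a dense adjacency produces node embeddings, and a small classifier scores a (microbe, drug) pair from the concatenated embeddings.

```python
# Hypothetical VGAE-style encoder plus pair classifier over a toy graph whose
# nodes stand in for microbes and drugs.
import torch
import torch.nn as nn

class DenseGCN(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        return torch.relu(self.lin((adj @ x) / deg))     # mean aggregation over neighbors

class VGAEEncoder(nn.Module):
    def __init__(self, in_dim, hid, latent):
        super().__init__()
        self.gcn = DenseGCN(in_dim, hid)
        self.mu, self.logvar = nn.Linear(hid, latent), nn.Linear(hid, latent)

    def forward(self, x, adj):
        h = self.gcn(x, adj)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return z, mu, logvar

n_nodes, feat = 50, 32                                   # toy graph of microbe + drug nodes
x, adj = torch.randn(n_nodes, feat), (torch.rand(n_nodes, n_nodes) > 0.8).float()
z, mu, logvar = VGAEEncoder(feat, 64, 16)(x, adj)
pair_clf = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
score = pair_clf(torch.cat([z[0], z[10]]))               # score a (microbe, drug) node pair
print("association score:", score.item())
```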

H2Learn: High-Efficiency Learning Accelerator for High-Accuracy Spiking Neural Networks

Jul 25, 2021
Ling Liang, Zheng Qu, Zhaodong Chen, Fengbin Tu, Yujie Wu, Lei Deng, Guoqi Li, Peng Li, Yuan Xie

Although spiking neural networks (SNNs) benefit from bio-plausible neural modeling, their low accuracy under the common local synaptic plasticity learning rules limits their application in many practical tasks. Recently, an emerging SNN supervised learning algorithm inspired by backpropagation through time (BPTT) from the domain of artificial neural networks (ANNs) has successfully boosted the accuracy of SNNs and helped improve their practicability. However, current general-purpose processors suffer from low efficiency when performing BPTT for SNNs due to their ANN-tailored optimizations. On the other hand, current neuromorphic chips cannot support BPTT because they mainly adopt local synaptic plasticity rules for simplified implementation. In this work, we propose H2Learn, a novel architecture that achieves high efficiency for BPTT-based SNN learning while ensuring high accuracy. We begin by characterizing the behaviors of BPTT-based SNN learning. Benefiting from the binary spike-based computation in the forward pass and the weight update, we first design lookup table (LUT) based processing elements in the Forward Engine and Weight Update Engine to make accumulations implicit and to fuse the computations of multiple input points. Second, benefiting from the rich sparsity in the backward pass, we design a dual-sparsity-aware Backward Engine that exploits both input and output sparsity. Finally, we apply a pipeline optimization between the different engines to build an end-to-end solution for BPTT-based SNN learning. Compared with the modern NVIDIA V100 GPU, H2Learn achieves 7.38x area savings, a 5.74-10.20x speedup, and 5.25-7.12x energy savings on several benchmark datasets.
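
As a toy software analogy (hypothetical NumPy code, not H2Learn's hardware), binary spikes let a group of k inputs index a precomputed table of partial weight sums, so k multiply-accumulates collapse into a single table lookup per group:

```python
# Hypothetical illustration of LUT-based accumulation for binary spike inputs.
import itertools
import numpy as np

def build_lut(weights: np.ndarray) -> dict:
    """weights: (k,) synaptic weights of one input group feeding one output neuron."""
    k = len(weights)
    return {bits: float(np.dot(bits, weights))
            for bits in itertools.product((0, 1), repeat=k)}

def lut_dot(spikes: np.ndarray, weights: np.ndarray, k: int = 4) -> float:
    """Dot product of a binary spike vector with weights via per-group lookups."""
    total = 0.0
    for start in range(0, len(spikes), k):
        group_w = weights[start:start + k]
        lut = build_lut(group_w)                         # in hardware this table is precomputed
        total += lut[tuple(int(s) for s in spikes[start:start + k])]
    return total

spikes = np.random.randint(0, 2, size=16)
weights = np.random.randn(16)
assert np.isclose(lut_dot(spikes, weights), float(spikes @ weights))
print("LUT-based partial-sum accumulation matches the direct dot product")
```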
