Mo Zhang

BEFD: Boundary Enhancement and Feature Denoising for Vessel Segmentation

Apr 08, 2021
Mo Zhang, Fei Yu, Jie Zhao, Li Zhang, Quanzheng Li

Blood vessel segmentation is crucial for many diagnostic and research applications. In recent years, CNN-based models have led to breakthroughs in segmentation; however, such methods usually lose high-frequency information such as object boundaries and subtle structures, which are vital to vessel segmentation. To tackle this issue, we propose a Boundary Enhancement and Feature Denoising (BEFD) module that improves the network's ability to extract boundary information in semantic segmentation and can be integrated into an arbitrary encoder-decoder architecture in an end-to-end way. By introducing a Sobel edge detector, the network acquires an additional edge prior, enhancing boundaries in an unsupervised manner for medical image segmentation. In addition, we utilize a denoising block to reduce the noise hidden in the low-level features. Experimental results on a retinal vessel dataset and an angiocarpy dataset demonstrate the superior performance of the new BEFD module.
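
A minimal PyTorch sketch of the two components the abstract names, a fixed Sobel edge prior and a denoising block on low-level features; the internals of the published module are not given here, so the denoising block below is a generic residual stand-in and the feature map is assumed to share the input image's resolution:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BEFD(nn.Module):
    """Boundary enhancement + feature denoising (illustrative sketch)."""

    def __init__(self, channels):
        super().__init__()
        # Fixed (non-learnable) Sobel kernels for x- and y-gradients.
        gx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        self.register_buffer("sobel", torch.stack([gx, gx.t()]).unsqueeze(1))
        self.fuse = nn.Conv2d(channels + 1, channels, 3, padding=1)
        self.denoise = nn.Sequential(  # hypothetical stand-in for the denoising block
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, image, feats):
        # image: (B,1,H,W) grayscale input; feats: (B,C,H,W) low-level features
        g = F.conv2d(image, self.sobel, padding=1)                   # x/y gradients
        edge = torch.sqrt((g ** 2).sum(dim=1, keepdim=True) + 1e-6)  # edge magnitude
        fused = F.relu(self.fuse(torch.cat([feats, edge], dim=1)))   # inject edge prior
        return fused + self.denoise(fused)                           # residual denoising
```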

* MICCAI 2020 

MS-GWNN: Multi-Scale Graph Wavelet Neural Network for Breast Cancer Diagnosis

Dec 29, 2020
Mo Zhang, Quanzheng Li

Breast cancer is one of the most common cancers in women worldwide, and early detection can significantly reduce its mortality rate. It is crucial to take the multi-scale structure of tissue into account when detecting breast cancer; the key to an accurate computer-aided detection (CAD) system is therefore capturing multi-scale contextual features of cancerous tissue. In this work, we present a novel graph convolutional neural network for histopathological image classification of breast cancer. The new method, named multi-scale graph wavelet neural network (MS-GWNN), leverages the localization property of spectral graph wavelets to perform multi-scale analysis. By aggregating features at different scales, MS-GWNN can encode the multi-scale contextual interactions across a whole pathological slide. Experimental results on two public datasets demonstrate the superiority of the proposed method. Moreover, ablation studies show that multi-scale analysis has a significant impact on the accuracy of cancer diagnosis.
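
The abstract does not spell out how the graph wavelet transform is computed, so the sketch below approximates the heat-kernel wavelet operator exp(-sL) with a truncated Taylor series in the graph Laplacian and aggregates several scales by concatenation; the class and argument names are illustrative only:

```python
import torch
import torch.nn as nn

class GraphWaveletLayer(nn.Module):
    """Graph-wavelet convolution at one fixed scale s (sketch)."""

    def __init__(self, in_dim, out_dim, scale, K=4):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)
        self.scale = scale
        self.K = K  # number of Taylor terms for exp(-s * L)

    def forward(self, L, x):          # L: (N,N) Laplacian, x: (N,in_dim)
        out, term = x, x
        for k in range(1, self.K):    # exp(-sL) x  ~  sum_k (-s)^k L^k x / k!
            term = (L @ term) * (-self.scale / k)
            out = out + term
        return torch.relu(self.lin(out))

class MSGWNN(nn.Module):
    """Concatenate wavelet features from several scales (illustrative)."""

    def __init__(self, in_dim, hid, n_classes, scales=(0.5, 1.0, 2.0)):
        super().__init__()
        self.branches = nn.ModuleList(
            GraphWaveletLayer(in_dim, hid, s) for s in scales)
        self.cls = nn.Linear(hid * len(scales), n_classes)

    def forward(self, L, x):
        z = torch.cat([b(L, x) for b in self.branches], dim=-1)
        return self.cls(z.mean(dim=0))   # mean-pool nodes -> slide-level logits
```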

PGU-net+: Progressive Growing of U-net+ for Automated Cervical Nuclei Segmentation

Nov 12, 2019
Jie Zhao, Lei Dai, Mo Zhang, Fei Yu, Meng Li, Hongfeng Li, Wenjia Wang, Li Zhang

Automated cervical nucleus segmentation based on deep learning can effectively improve the quantitative analysis of cervical cancer. However, accurate nuclei segmentation is still challenging. The classic U-net has not achieved satisfactory results on this task because it mixes information from different scales that interfere with each other, which limits the segmentation accuracy of the model. To solve this problem, we propose a progressive growing U-net (PGU-net+) model, which uses two paradigms to extract image features at different scales in a more independent way. First, we add residual modules between different scales of the U-net, which force the model to learn the approximate shape of the annotation at the coarser scale and the residual between the annotation and that approximate shape at the finer scale. Second, we start training with the coarsest part of the model and progressively add finer parts until the full model is included. When a finer part is being trained, we reduce the learning rate of the previously trained coarser parts, which further ensures that the model extracts information from different scales independently. We conduct several comparative experiments on the Herlev dataset. The experimental results show that PGU-net+ achieves superior accuracy over previous state-of-the-art methods for cervical nuclei segmentation.
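
The progressive schedule from the abstract (grow coarse to fine, slow the already-trained parts) can be sketched with per-stage optimizer groups; `model(images, stage=i)` and `model.stage_parameters(j)` are hypothetical hooks, not part of any released PGU-net+ code:

```python
import torch
import torch.nn.functional as F

def train_progressively(model, loader, n_stages=4, epochs_per_stage=20,
                        base_lr=1e-3, decay=0.1, device="cuda"):
    """Progressive-growing training loop (sketch)."""
    for stage in range(n_stages):
        # Newest stage trains at base_lr; each older stage is slowed down.
        groups = [{"params": model.stage_parameters(j),
                   "lr": base_lr * decay ** (stage - j)}
                  for j in range(stage + 1)]
        opt = torch.optim.Adam(groups)
        for _ in range(epochs_per_stage):
            for images, masks in loader:
                images, masks = images.to(device), masks.to(device)
                logits = model(images, stage=stage)   # run coarse stages only
                # Supervise at the current stage's output resolution.
                target = F.interpolate(masks.float(),
                                       size=logits.shape[-2:]).squeeze(1).long()
                loss = F.cross_entropy(logits, target)
                opt.zero_grad()
                loss.backward()
                opt.step()
```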

* MICCAI workshop MMMI2019 Best Student Paper Award 

Multi-label Detection and Classification of Red Blood Cells in Microscopic Images

Oct 07, 2019
Wei Qiu, Jiaming Guo, Xiang Li, Mengjia Xu, Mo Zhang, Ning Guo, Quanzheng Li

Cell detection and cell type classification from biomedical images play an important role in high-throughput imaging and various clinical applications. While classification of single-cell samples can be performed with standard computer vision and machine learning methods, analysis of multi-label samples (regions containing congregating cells) is more challenging, as separating individual cells can be difficult (e.g. touching cells) or even impossible (e.g. overlapping cells). As multi-instance images are common when analyzing Red Blood Cells (RBCs) for Sickle Cell Disease (SCD) diagnosis, we develop and implement a multi-instance cell detection and classification framework to address this challenge. The framework first trains a region proposal model based on a Region-based Convolutional Network (RCNN) to obtain bounding boxes of regions potentially containing single or multiple cells in input microscopic images, which are extracted as image patches. High-level image features are then computed from the image patches by a pre-trained Convolutional Neural Network (CNN) with a ResNet-50 structure. Using these image features as inputs, six networks are then trained to make multi-label predictions of whether a given patch contains cells of a specific type. Because the six networks are trained on image patches containing both individual cells and touching/overlapping cells, they can effectively recognize the cell types present in multi-instance image samples. Finally, for the purpose of SCD testing, we train another machine learning classifier to predict whether a given image patch contains an abnormal cell type, based on the outputs of the six networks. Test results show that the proposed framework achieves good performance in automatic cell detection and classification.
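
The classification stage, six one-vs-rest heads on shared ResNet-50 features, is easy to sketch; the detector that produces the patches (an RCNN-style region proposal model) is omitted, and the 224x224 patch size is an assumption:

```python
import torch
import torch.nn as nn
from torchvision import models

class MultiLabelCellClassifier(nn.Module):
    """Six binary heads over shared ResNet-50 features (sketch)."""

    def __init__(self, n_types=6):
        super().__init__()
        backbone = models.resnet50(weights="IMAGENET1K_V1")   # pre-trained
        # Everything up to and including global average pooling.
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        self.heads = nn.ModuleList(nn.Linear(2048, 1) for _ in range(n_types))

    def forward(self, patches):                # patches: (B, 3, 224, 224)
        f = self.features(patches).flatten(1)  # (B, 2048)
        # One sigmoid per head: "does this patch contain cell type k?"
        return torch.cat([torch.sigmoid(h(f)) for h in self.heads], dim=1)
```

A downstream SCD classifier would then consume the resulting (B, 6) score vectors.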

* Wei Qiu, Jiaming Guo and Xiang Li contributed equally 

ASCNet: Adaptive-Scale Convolutional Neural Networks for Multi-Scale Feature Learning

Jul 07, 2019
Mo Zhang, Jie Zhao, Xiang Li, Li Zhang, Quanzheng Li

Extracting multi-scale information is key to semantic segmentation. However, classic convolutional neural networks (CNNs) have difficulty extracting multi-scale information: enlarging the convolutional kernel incurs a high computational cost, and max pooling sacrifices image information. The recently developed dilated convolution alleviates these problems, but its dilation rates are fixed, so the receptive field cannot fit all objects of different sizes in an image. We propose an adaptive-scale convolutional neural network (ASCNet), which introduces a 3-layer convolutional structure into end-to-end training to adaptively learn an appropriate dilation rate for each pixel in the image. Such pixel-level dilation rates produce optimal receptive fields, so that the information of objects of different sizes can be extracted at the corresponding scale. We compare the segmentation results of the classic CNN, the dilated CNN and the proposed ASCNet on two types of medical images (the Herlev dataset and the SCD RBC dataset). The experimental results show that ASCNet achieves the highest accuracy. Moreover, the automatically generated dilation rates are positively correlated with the sizes of the objects, confirming the effectiveness of the proposed method.
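
Per-pixel dilation rates can be emulated with deformable convolution: if the 3x3 kernel taps sit at positions p, scaling them to r*p amounts to an offset of (r-1)*p, so r=1 recovers an ordinary convolution. A sketch under that assumption (the paper's exact parameterization may differ):

```python
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d

class AdaptiveScaleConv(nn.Module):
    """3x3 conv with a per-pixel, learned dilation rate (sketch)."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.rate_net = nn.Sequential(       # 3-layer per-pixel rate predictor
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, 3, padding=1), nn.Softplus())  # keep rates > 0
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, 3, 3) * 0.05)
        # Fixed (dy, dx) positions of the nine 3x3 taps, flattened to 18 channels.
        ys, xs = torch.meshgrid(torch.arange(-1., 2.),
                                torch.arange(-1., 2.), indexing="ij")
        self.register_buffer("grid",
                             torch.stack([ys, xs], -1).reshape(1, 18, 1, 1))

    def forward(self, x):
        rate = self.rate_net(x)               # (B,1,H,W) dilation rate per pixel
        offset = (rate - 1.0) * self.grid     # displace taps to rate * position
        return deform_conv2d(x, offset, self.weight, padding=1)
```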

Modeling 4D fMRI Data via Spatio-Temporal Convolutional Neural Networks (ST-CNN)

Aug 06, 2018
Yu Zhao, Xiang Li, Wei Zhang, Shijie Zhao, Milad Makkie, Mo Zhang, Quanzheng Li, Tianming Liu

Simultaneously modeling the spatio-temporal variation patterns of brain functional networks from 4D fMRI data has been an important yet challenging problem for cognitive neuroscience and medical image analysis. Inspired by the recent success in applying deep learning to functional brain decoding and encoding, in this work we propose a spatio-temporal convolutional neural network (ST-CNN) that jointly learns the spatial and temporal patterns of a targeted network from training data and performs automatic, pinpoint functional network identification. The proposed ST-CNN is evaluated on the task of identifying the Default Mode Network (DMN) from fMRI data. Results show that although the framework is trained on only one fMRI dataset, it generalizes well enough to identify the DMN from different populations of data as well as different cognitive tasks. Further investigation of the results shows that the superior performance of ST-CNN is driven by the joint-learning scheme, which captures the intrinsic relationship between the spatial and temporal characteristics of the DMN and ensures accurate identification.
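
The abstract leaves the architecture details open; one way to realize joint spatial and temporal learning on a (time, depth, height, width) volume is to mix the time axis with 1x1x1 convolutions and then predict a volumetric network map with 3D convolutions. A speculative sketch:

```python
import torch
import torch.nn as nn

class STCNN(nn.Module):
    """Joint temporal (1x1x1) + spatial (3x3x3) modeling of 4D fMRI (sketch)."""

    def __init__(self, t_in, t_hidden=8):
        super().__init__()
        self.temporal = nn.Sequential(   # per-voxel mixing of the time axis
            nn.Conv3d(t_in, 32, 1), nn.ReLU(inplace=True),
            nn.Conv3d(32, t_hidden, 1), nn.ReLU(inplace=True))
        self.spatial = nn.Sequential(    # volumetric spatial context
            nn.Conv3d(t_hidden, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(16, 1, 3, padding=1))   # logits of DMN membership

    def forward(self, x):                # x: (B, T, D, H, W), time as channels
        return self.spatial(self.temporal(x))   # (B, 1, D, H, W) spatial map
```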

* Yu Zhao and Xiang Li contributed equally to this work 

Image Segmentation and Classification for Sickle Cell Disease using Deformable U-Net

Oct 29, 2017
Mo Zhang, Xiang Li, Mengjia Xu, Quanzheng Li

Reliable cell segmentation and classification from biomedical images is a crucial step for both scientific research and clinical practice. A major challenge for more robust segmentation and classification methods is the large variation in the size, shape and viewpoint of the cells, combined with low image quality caused by noise and artifacts. To address this issue, in this work we propose a learning-based, simultaneous cell segmentation and classification method based on the deep U-Net architecture with deformable convolution layers. The U-Net architecture has been shown to offer precise localization for semantic image segmentation. Moreover, the deformable convolution layers enable free-form deformation of the feature learning process, making the whole network more robust to various cell morphologies and imaging settings. The proposed method is tested on microscopic red blood cell images from patients with sickle cell disease. The results show that the U-Net with deformable convolution achieves the highest segmentation and classification accuracy, compared with the original U-Net structure.
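
The key ingredient, a convolution whose sampling grid deforms with the input, can be sketched with torchvision's deform_conv2d; in the described model such blocks would replace the standard convolutions inside a U-Net (offsets are initialized to zero, so training starts from a plain convolution):

```python
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d

class DeformBlock(nn.Module):
    """3x3 deformable convolution block (sketch of a U-Net building block)."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        # Predicts a (dy, dx) offset for each of the nine kernel taps.
        self.offset = nn.Conv2d(in_ch, 18, 3, padding=1)
        nn.init.zeros_(self.offset.weight)
        nn.init.zeros_(self.offset.bias)   # zero offsets = ordinary conv at init
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, 3, 3) * 0.05)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(deform_conv2d(x, self.offset(x), self.weight, padding=1))
```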
