Shilei Cao

Learning Shape Priors by Pairwise Comparison for Robust Semantic Segmentation

Apr 23, 2022
Cong Xie, Hualuo Liu, Shilei Cao, Dong Wei, Kai Ma, Liansheng Wang, Yefeng Zheng

Semantic segmentation is important in medical image analysis. Inspired by the strong ability of traditional image analysis techniques to capture shape priors and inter-subject similarity, many deep learning (DL) models have recently been proposed to exploit such prior information and have achieved robust performance. However, these two types of important prior information are usually studied separately in existing models. In this paper, we propose a novel DL model that captures both types of priors within a single framework. Specifically, we introduce an extra encoder into the classic encoder-decoder structure to form a Siamese structure for the encoders, where one takes a target image as input (the image-encoder), and the other concatenates a template image and its foreground regions as input (the template-encoder). The template-encoder encodes the shape priors and appearance characteristics of each foreground class in the template image. A cosine-similarity-based attention module is proposed to fuse the information from both encoders, utilizing both types of prior information encoded by the template-encoder and modeling the inter-subject similarity for each foreground class. Extensive experiments on two public datasets demonstrate that our proposed method produces superior performance compared to competing methods.
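
To make the fusion step concrete, below is a minimal PyTorch sketch of a cosine-similarity-based attention module; the tensor shapes, the sigmoid gating, and the additive fusion are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn.functional as F


class CosineSimilarityAttention(torch.nn.Module):
    """Fuse image-encoder and template-encoder features via cosine similarity."""

    def forward(self, img_feat, tmpl_feat):
        # img_feat, tmpl_feat: (B, C, H, W) feature maps from the two encoders
        # Per-location cosine similarity along channels -> (B, 1, H, W)
        sim = F.cosine_similarity(img_feat, tmpl_feat, dim=1, eps=1e-8).unsqueeze(1)
        attn = torch.sigmoid(sim)           # attention weight in (0, 1)
        return img_feat + attn * tmpl_feat  # inject template shape/appearance cues


fuse = CosineSimilarityAttention()
out = fuse(torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32))
print(out.shape)  # torch.Size([2, 64, 32, 32])
```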

* IEEE ISBI 2021 

Conquering Data Variations in Resolution: A Slice-Aware Multi-Branch Decoder Network

Mar 07, 2022
Shuxin Wang, Shilei Cao, Zhizhong Chai, Dong Wei, Kai Ma, Liansheng Wang, Yefeng Zheng

Fully convolutional neural networks have made promising progress in joint liver and liver tumor segmentation. Instead of following the debates over 2D versus 3D networks (for example, pursuing the balance between large-scale 2D pretraining and 3D context), in this paper, we identify the wide variation in the ratio between intra- and inter-slice resolutions as a crucial obstacle to performance. To tackle the mismatch between the intra- and inter-slice information, we propose a slice-aware 2.5D network that emphasizes extracting discriminative features utilizing not only in-plane semantics but also out-of-plane coherence for each separate slice. Specifically, we present a slice-wise multi-input multi-output architecture to instantiate such a design paradigm, which contains a Multi-Branch Decoder (MD) with a Slice-centric Attention Block (SAB) for learning slice-specific features and a Densely Connected Dice (DCD) loss to regularize the inter-slice predictions to be coherent and continuous. Based on the aforementioned innovations, we achieve state-of-the-art results on the MICCAI 2017 Liver Tumor Segmentation (LiTS) dataset. We also test our model on the ISBI 2019 Segmentation of THoracic Organs at Risk (SegTHOR) dataset, and the results demonstrate the robustness and generalizability of the proposed method on other segmentation tasks.
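
As a rough illustration of how a Densely Connected Dice loss could couple nearby slice predictions, here is a PyTorch sketch; the pairing scheme (per-slice Dice plus Dice on the overlap of slices up to max_gap apart) is an assumption, not the paper's exact formulation.

```python
import torch


def soft_dice(pred, target, eps=1e-6):
    # pred, target: (B, H, W) probability maps / binary masks
    inter = (pred * target).sum(dim=(1, 2))
    union = pred.sum(dim=(1, 2)) + target.sum(dim=(1, 2))
    return 1.0 - (2.0 * inter + eps) / (union + eps)


def densely_connected_dice(preds, labels, max_gap=2):
    """preds, labels: (B, S, H, W) with S output slices."""
    loss, n = 0.0, 0
    S = preds.shape[1]
    for i in range(S):  # ordinary per-slice Dice terms
        loss, n = loss + soft_dice(preds[:, i], labels[:, i]).mean(), n + 1
    # Densely connected pairwise terms: the overlap of two nearby slices in the
    # prediction should match the overlap in the labels (inter-slice coherence)
    for gap in range(1, max_gap + 1):
        for i in range(S - gap):
            p = preds[:, i] * preds[:, i + gap]
            t = labels[:, i] * labels[:, i + gap]
            loss, n = loss + soft_dice(p, t).mean(), n + 1
    return loss / n
```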

* Published in IEEE TMI 

RECIST-Net: Lesion detection via grouping keypoints on RECIST-based annotation

Jul 19, 2021
Cong Xie, Shilei Cao, Dong Wei, Hongyu Zhou, Kai Ma, Xianli Zhang, Buyue Qian, Liansheng Wang, Yefeng Zheng

Universal lesion detection in computed tomography (CT) images is an important yet challenging task due to the large variations in lesion type, size, shape, and appearance. Considering that data in clinical routine (such as the DeepLesion dataset) are usually annotated with a long and a short diameter following the Response Evaluation Criteria in Solid Tumors (RECIST) standard, we propose RECIST-Net, a new approach to lesion detection in which the four extreme points and the center point of the RECIST diameters are detected. By detecting a lesion as a set of keypoints, we provide a conceptually more straightforward formulation for detection and overcome several drawbacks of existing bounding-box-based methods (e.g., requiring extensive effort in designing data-appropriate anchors and losing shape information), while remaining a single-task, one-stage approach in contrast to other RECIST-based approaches. Experiments show that RECIST-Net achieves a sensitivity of 92.49% at four false positives per image, outperforming other recent methods including those using multi-task learning.
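
Below is a minimal sketch of how five detected keypoints (four extreme points plus a center) might be grouped into a lesion box; the single-peak decoding and the center-tolerance grouping rule are illustrative assumptions, not RECIST-Net's exact procedure.

```python
import torch


def decode_recist_keypoints(heatmaps, center_tol=0.3):
    """heatmaps: (5, H, W) for left/right/top/bottom extreme points and center."""
    H, W = heatmaps.shape[1:]
    pts = []
    for hm in heatmaps:                    # one peak per map, for brevity
        idx = int(torch.argmax(hm))
        pts.append((idx % W, idx // W))    # (x, y)
    (lx, _), (rx, _), (_, ty), (_, by), (cx, cy) = pts
    x0, x1 = min(lx, rx), max(lx, rx)
    y0, y1 = min(ty, by), max(ty, by)
    # Keep the grouping only if the detected center lies near the box center
    ok = (abs(cx - (x0 + x1) / 2) < center_tol * (x1 - x0 + 1)
          and abs(cy - (y0 + y1) / 2) < center_tol * (y1 - y0 + 1))
    return (x0, y0, x1, y1) if ok else None
```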

* 5 pages, 3 figures, IEEE ISBI 2021 

Generalized Organ Segmentation by Imitating One-shot Reasoning using Anatomical Correlation

Mar 30, 2021
Hong-Yu Zhou, Hualuo Liu, Shilei Cao, Dong Wei, Chixiang Lu, Yizhou Yu, Kai Ma, Yefeng Zheng

Learning by imitation is one of the most significant abilities of human beings and plays a vital role in the human computational neural system. In medical image analysis, given several exemplars (anchors), an experienced radiologist can delineate unfamiliar organs by imitating the reasoning process learned from existing types of organs. Inspired by this observation, we propose OrganNet, which learns a generalized organ concept from a set of annotated organ classes and then transfers this concept to unseen classes. In this paper, we show that such a process can be integrated into the one-shot segmentation task, a very challenging but meaningful topic. We propose pyramid reasoning modules (PRMs) to model the anatomical correlation between anchor and target volumes. In practice, the proposed module first computes a correlation matrix between the target and anchor computed tomography (CT) volumes. Then, this matrix is used to transform the feature representations of both the anchor volume and its segmentation mask. Finally, OrganNet learns to fuse the representations from various inputs and predicts segmentation results for the target volume. Extensive experiments show that OrganNet can effectively resist the wide variations in organ morphology and produces state-of-the-art results on the one-shot segmentation task. Moreover, even when compared with fully supervised segmentation models, OrganNet is still able to produce satisfying segmentation results.
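
The correlation-then-transform step can be sketched as follows (2D features for brevity, whereas OrganNet operates on CT volumes; the softmax-normalized correlation and the warping of anchor features are assumptions about the module's internals).

```python
import torch


def reasoning_step(target_feat, anchor_feat, anchor_mask_feat):
    """All inputs: (B, C, H, W) feature maps."""
    B, C, H, W = target_feat.shape
    t = target_feat.flatten(2)                     # (B, C, N), N = H*W
    a = anchor_feat.flatten(2)                     # (B, C, N)
    # Correlation between every target and anchor location: (B, N, N)
    corr = torch.softmax(t.transpose(1, 2) @ a / C ** 0.5, dim=-1)
    # Transform anchor features and anchor-mask features into target coordinates;
    # a later layer can then fuse them with target_feat to predict the mask.
    warped_anchor = (corr @ a.transpose(1, 2)).transpose(1, 2).reshape(B, C, H, W)
    m = anchor_mask_feat.flatten(2)
    warped_mask = (corr @ m.transpose(1, 2)).transpose(1, 2).reshape(B, -1, H, W)
    return warped_anchor, warped_mask
```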

* IPMI 2021 

Brain Atlas Guided Attention U-Net for White Matter Hyperintensity Segmentation

Oct 19, 2020
Zicong Zhang, Kimerly Powell, Changchang Yin, Shilei Cao, Dani Gonzalez, Yousef Hannawi, Ping Zhang

White Matter Hyperintensities (WMH) are the most common manifestation of cerebral small vessel disease (cSVD) on brain MRI. Accurate WMH segmentation algorithms are important to determine cSVD burden and its clinical consequences. Most existing WMH segmentation algorithms require both fluid-attenuated inversion recovery (FLAIR) images and T1-weighted images as inputs. However, T1-weighted images are typically not part of the standard clinical scans acquired for patients with acute stroke. In this paper, we propose a novel brain atlas guided attention U-Net (BAGAU-Net) that leverages only FLAIR images with a spatially registered white matter (WM) brain atlas to yield competitive WMH segmentation performance. Specifically, we design a dual-path segmentation model with two novel connecting mechanisms, namely a multi-input attention module (MAM) and an attention fusion module (AFM), to fuse the information from the two paths for accurate results. Experiments on two publicly available datasets show the effectiveness of the proposed BAGAU-Net. With only FLAIR images and the WM brain atlas, BAGAU-Net outperforms the state-of-the-art method that uses T1-weighted images, paving the way for the effective development of WMH segmentation. Availability: https://github.com/Ericzhang1/BAGAU-Net
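
A minimal sketch of attention-based fusion between the FLAIR path and the atlas path is given below; this generic gating design is assumed for illustration and is not the exact MAM/AFM.

```python
import torch
import torch.nn as nn


class AtlasAttentionFusion(nn.Module):
    """Re-weight FLAIR features using guidance from the registered WM-atlas path."""

    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, flair_feat, atlas_feat):
        # Predict a per-pixel, per-channel attention map from both paths,
        # then blend the two paths with it.
        attn = self.gate(torch.cat([flair_feat, atlas_feat], dim=1))
        return flair_feat * attn + atlas_feat * (1 - attn)


fuse = AtlasAttentionFusion(32)
y = fuse(torch.randn(1, 32, 64, 64), torch.randn(1, 32, 64, 64))  # (1, 32, 64, 64)
```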


Online Disease Self-diagnosis with Inductive Heterogeneous Graph Convolutional Networks

Sep 06, 2020
Zifeng Wang, Rui Wen, Xi Chen, Shilei Cao, Shao-Lun Huang, Buyue Qian, Yefeng Zheng

We propose a Healthcare Graph Convolutional Network (HealGCN) to offer a disease self-diagnosis service for online users based on Electronic Healthcare Records (EHRs). This paper focuses on two main challenges in online disease self-diagnosis: (1) serving cold-start users via graph convolutional networks and (2) handling scarce clinical descriptions via a symptom retrieval system. To this end, we first organize the EHR data into a heterogeneous graph that is capable of modeling complex interactions among users, symptoms, and diseases, and tailor the graph representation learning towards disease diagnosis with an inductive learning paradigm. Then, we build a disease self-diagnosis system with a corresponding EHR graph-based symptom retrieval system (GraphRet) that can search and provide a list of relevant alternative symptoms by tracing predefined meta-paths. GraphRet helps enrich the seed symptom set through the EHR graph, resulting in better reasoning ability of our HealGCN model when confronting users with scarce descriptions. Finally, we validate our model on a large-scale EHR dataset; the superior performance confirms our model's effectiveness in practice.
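
The inductive ingredient, embedding a cold-start user purely from the symptoms they report so that no user-specific parameters are needed, can be sketched as below; the mean aggregation and all names and shapes are assumptions for illustration.

```python
import torch
import torch.nn as nn


class InductiveDiagnosis(nn.Module):
    """Toy single-layer sketch over a user-symptom-disease graph."""

    def __init__(self, n_symptoms, n_diseases, dim=64):
        super().__init__()
        self.symptom_emb = nn.Embedding(n_symptoms, dim)
        self.disease_emb = nn.Embedding(n_diseases, dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, symptom_ids):
        # symptom_ids: indices of the user's reported symptoms; the user node is
        # represented by aggregating its symptom neighbours (inductive: works
        # for users never seen during training).
        user = self.proj(self.symptom_emb(symptom_ids).mean(dim=0))
        return (self.disease_emb.weight @ user).softmax(dim=0)  # disease scores


model = InductiveDiagnosis(n_symptoms=500, n_diseases=100)
probs = model(torch.tensor([3, 42, 7]))  # distribution over 100 diseases
```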


Superpixel-Guided Label Softening for Medical Image Segmentation

Jul 17, 2020
Hang Li, Dong Wei, Shilei Cao, Kai Ma, Liansheng Wang, Yefeng Zheng

Segmentation of objects of interest is one of the central tasks in medical image analysis and is indispensable for quantitative analysis. When developing machine-learning-based methods for automated segmentation, manual annotations are usually used as the ground truth that the models learn to mimic. While the bulky parts of the segmentation targets are relatively easy to label, the peripheral areas are often difficult to handle due to ambiguous boundaries, the partial volume effect, etc., and are likely to be labeled with uncertainty. This uncertainty in labeling may, in turn, result in unsatisfactory performance of the trained models. In this paper, we propose superpixel-based label softening to tackle the above issue. Generated by unsupervised over-segmentation, each superpixel is expected to represent a locally homogeneous area. If a superpixel intersects the annotation boundary, we consider a high probability of uncertain labeling within this area. Driven by this intuition, we soften labels in this area based on signed distances to the annotation boundary and assign probability values within [0, 1] to them, in contrast to the original "hard", binary labels of either 0 or 1. The softened labels are then used to train the segmentation models together with the hard labels. Experimental results on a brain MRI dataset and an optical coherence tomography dataset demonstrate that this conceptually simple and easy-to-implement method achieves overall superior segmentation performance compared with baseline and competing methods for both 3D and 2D medical images.
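
A sketch of the softening procedure under stated assumptions (2D images, scikit-image's SLIC for over-segmentation, and a sigmoid mapping of the signed distance with an illustrative temperature tau):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.segmentation import slic  # scikit-image >= 0.19 for channel_axis


def soften_labels(image, hard_label, n_segments=200, tau=3.0):
    """image: 2D grayscale array; hard_label: binary mask in {0, 1}."""
    # Signed distance to the annotation boundary: positive inside, negative outside
    signed = distance_transform_edt(hard_label) - distance_transform_edt(1 - hard_label)
    soft = 1.0 / (1.0 + np.exp(-signed / tau))  # map to a probability in [0, 1]
    # Soften only superpixels that straddle the annotation boundary
    superpixels = slic(image, n_segments=n_segments, channel_axis=None)
    out = hard_label.astype(float)
    for s in np.unique(superpixels):
        region = superpixels == s
        if 0 < hard_label[region].mean() < 1:   # mixed labels -> uncertain area
            out[region] = soft[region]
    return out
```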


Learning and Exploiting Interclass Visual Correlations for Medical Image Classification

Jul 13, 2020
Dong Wei, Shilei Cao, Kai Ma, Yefeng Zheng

Deep neural network-based medical image classifiers often use "hard" labels for training, where the probability of the correct category is 1 and those of the others are 0. However, these hard targets can make the networks over-confident about their predictions and prone to overfitting the training data, affecting model generalization and adaptation. Studies have shown that label smoothing and softening can improve classification performance. Nevertheless, existing approaches are either non-data-driven or limited in applicability. In this paper, we present the Class-Correlation Learning Network (CCL-Net), which learns interclass visual correlations from the given training data and produces soft labels to help with classification tasks. Instead of letting the network directly learn the desired correlations, we propose to learn them implicitly via distance metric learning of class-specific embeddings with a lightweight plugin CCL block. An intuitive loss based on a geometrical explanation of correlation is designed to bolster learning of the interclass correlations. We further present end-to-end training of the proposed CCL block as a plugin head together with the classification backbone while generating soft labels on the fly. Our experimental results on the International Skin Imaging Collaboration 2018 dataset demonstrate effective learning of the interclass correlations from training data, as well as consistent performance improvements upon several widely used modern network structures equipped with the CCL block.
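
One way to turn learned class embeddings into soft labels is sketched below; converting pairwise embedding distances into a softmax distribution is an assumed instantiation of the geometrical correlation idea, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F


def soft_labels_from_embeddings(class_emb, labels, temperature=1.0):
    """class_emb: (K, D) learned class embeddings; labels: (B,) class indices."""
    dist = torch.cdist(class_emb, class_emb)      # pairwise distances, (K, K)
    soft = F.softmax(-dist / temperature, dim=1)  # closer classes get more mass
    return soft[labels]                           # (B, K) soft targets


emb = F.normalize(torch.randn(10, 32), dim=1)     # 10 classes, 32-d embeddings
targets = soft_labels_from_embeddings(emb, torch.tensor([0, 3, 3, 7]))
logits = torch.randn(4, 10)                       # backbone outputs
loss = F.kl_div(F.log_softmax(logits, dim=1), targets, reduction="batchmean")
```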


LT-Net: Label Transfer by Learning Reversible Voxel-wise Correspondence for One-shot Medical Image Segmentation

Mar 20, 2020
Shuxin Wang, Shilei Cao, Dong Wei, Renzhen Wang, Kai Ma, Liansheng Wang, Deyu Meng, Yefeng Zheng

We introduce a one-shot segmentation method to alleviate the burden of manual annotation for medical images. The main idea is to treat one-shot segmentation as a classical atlas-based segmentation problem, where the voxel-wise correspondence from the atlas to the unlabelled data is learned. Subsequently, the segmentation label of the atlas can be transferred to the unlabelled data with the learned correspondence. However, since ground-truth correspondence between images is usually unavailable, the learning system must be well supervised to avoid mode collapse and convergence failure. To overcome this difficulty, we resort to forward-backward consistency, which is widely used in correspondence problems, and additionally learn the backward correspondences from the warped atlases back to the original atlas. This cycle-correspondence learning design enables a variety of extra, cycle-consistency-based supervision signals that stabilize the training process while also boosting performance. We demonstrate the superiority of our method over both deep learning-based one-shot segmentation methods and a classical multi-atlas segmentation method via thorough experiments.
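
The forward-backward consistency idea can be sketched as follows (2D for brevity, whereas LT-Net works on 3D volumes; the L1 reconstruction penalty is an illustrative choice among the paper's cycle-consistency-based signals).

```python
import torch
import torch.nn.functional as F


def warp(img, flow):
    """Warp img (B, C, H, W) with a dense flow field (B, 2, H, W) in pixels."""
    B, _, H, W = img.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().unsqueeze(0)  # (1, 2, H, W)
    coords = base + flow                                      # sample locations
    # Normalize to [-1, 1]; grid_sample expects (B, H, W, 2) ordered as (x, y)
    coords[:, 0] = 2 * coords[:, 0] / (W - 1) - 1
    coords[:, 1] = 2 * coords[:, 1] / (H - 1) - 1
    return F.grid_sample(img, coords.permute(0, 2, 3, 1), align_corners=True)


def cycle_consistency_loss(atlas, fwd_flow, bwd_flow):
    warped = warp(atlas, fwd_flow)  # atlas -> target space (forward correspondence)
    recon = warp(warped, bwd_flow)  # warped atlas -> back to atlas space
    return F.l1_loss(recon, atlas)  # the cycle should reproduce the atlas
```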

* Accepted to Proc. IEEE Conf. Computer Vision and Pattern Recognition 2020 