Zhenrong Shen

MeLo: Low-rank Adaptation is Better than Fine-tuning for Medical Image Diagnosis

Nov 14, 2023
Yitao Zhu, Zhenrong Shen, Zihao Zhao, Sheng Wang, Xin Wang, Xiangyu Zhao, Dinggang Shen, Qian Wang

Developing computer-aided diagnosis (CAD) models based on transformer architectures usually involves fine-tuning from ImageNet pre-trained weights. However, with recent advances in large-scale pre-training and the practice of scaling laws, Vision Transformers (ViT) have become much larger and less accessible to medical imaging communities. Additionally, in real-world scenarios, the deployment of multiple CAD models can be troublesome due to problems such as limited storage space and time-consuming model switching. To address these challenges, we propose a new method, MeLo (Medical image Low-rank adaptation), which enables the development of a single CAD model for multiple clinical tasks in a lightweight manner. It adopts low-rank adaptation instead of resource-demanding fine-tuning. By fixing the weights of ViT models and only adding small low-rank plug-ins, we achieve competitive results on various diagnosis tasks across different imaging modalities using only a few trainable parameters. Specifically, our proposed method achieves performance comparable to fully fine-tuned ViT models on four distinct medical imaging datasets using only about 0.17% of the trainable parameters. Moreover, MeLo adds only about 0.5MB of storage space and allows for extremely fast model switching in deployment and inference. Our source code and pre-trained weights are available on our website (https://absterzhu.github.io/melo.github.io/).

* 5 pages, 3 figures 
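
For readers unfamiliar with low-rank adaptation, here is a minimal sketch of the general LoRA recipe the abstract builds on: the pre-trained ViT weights stay frozen and only a small low-rank update per linear layer is trained. The rank, scaling factor, and the timm-style `attn.qkv` attribute are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """A frozen nn.Linear plus a trainable low-rank update: y = Wx + (alpha/r) * B A x."""

    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():          # keep the pre-trained weights fixed
            p.requires_grad = False
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # only lora_a and lora_b receive gradients during training
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scale


def add_lora(vit: nn.Module, rank: int = 4) -> nn.Module:
    """Freeze the backbone and wrap each attention qkv projection (assumes a timm-style ViT)."""
    for p in vit.parameters():
        p.requires_grad = False
    for blk in vit.blocks:
        blk.attn.qkv = LoRALinear(blk.attn.qkv, rank=rank)
    return vit                                    # a task-specific head would still be trained
```

Switching clinical tasks at deployment then amounts to swapping the small lora_a/lora_b tensors (plus the task head), which is consistent with the roughly 0.5MB per-task storage overhead reported above.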

Uni-COAL: A Unified Framework for Cross-Modality Synthesis and Super-Resolution of MR Images

Nov 14, 2023
Zhiyun Song, Zengxin Qi, Xin Wang, Xiangyu Zhao, Zhenrong Shen, Sheng Wang, Manman Fei, Zhe Wang, Di Zang, Dongdong Chen, Linlin Yao, Qian Wang, Xuehai Wu, Lichi Zhang

Cross-modality synthesis (CMS), super-resolution (SR), and their combination (CMSR) have been extensively studied for magnetic resonance imaging (MRI). Their primary goals are to enhance imaging quality by synthesizing the desired modality and reducing the slice thickness. Despite the promising synthetic results, these techniques are often tailored to specific tasks, thereby limiting their adaptability to complex clinical scenarios. It is therefore crucial to build a unified network that can handle various image synthesis tasks with arbitrary requirements on modality and resolution settings, so that the resources for training and deploying the models can be greatly reduced. However, none of the previous works is capable of performing CMS, SR, and CMSR with a single unified network. Moreover, these MRI reconstruction methods often treat alias frequencies improperly, resulting in suboptimal detail restoration. In this paper, we propose a Unified Co-Modulated Alias-free framework (Uni-COAL) to accomplish the aforementioned tasks with a single network. The co-modulation design of the image-conditioned and stochastic attribute representations ensures consistency between CMS and SR, while simultaneously accommodating arbitrary combinations of input/output modalities and thickness. The generator of Uni-COAL is also designed to be alias-free based on the Shannon-Nyquist signal processing framework, ensuring effective suppression of alias frequencies. Additionally, we leverage the semantic prior of the Segment Anything Model (SAM) to guide Uni-COAL, ensuring a more authentic preservation of anatomical structures during synthesis. Experiments on three datasets demonstrate that Uni-COAL outperforms the alternatives in CMS, SR, and CMSR tasks for MR images, highlighting its generalizability to a wide range of applications.
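
As a rough illustration of the co-modulation design mentioned above (in the spirit of CoModGAN-style co-modulation; the layer sizes and names below are made-up assumptions), the style vector that modulates the generator is computed jointly from the encoded conditioning image and a stochastic latent code:

```python
import torch
import torch.nn as nn


class CoModulation(nn.Module):
    """Produces a modulation style vector from image features and a random latent code."""

    def __init__(self, feat_dim: int = 512, z_dim: int = 512, style_dim: int = 512):
        super().__init__()
        self.mapping = nn.Sequential(             # maps the stochastic code z to a latent w
            nn.Linear(z_dim, style_dim), nn.LeakyReLU(0.2),
            nn.Linear(style_dim, style_dim),
        )
        self.affine = nn.Linear(feat_dim + style_dim, style_dim)

    def forward(self, enc_feat: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        # enc_feat: globally pooled features of the conditioning image, shape (B, feat_dim)
        # z:        stochastic attribute code, shape (B, z_dim)
        w = self.mapping(z)
        return self.affine(torch.cat([enc_feat, w], dim=1))   # joint modulation vector


# e.g. style = CoModulation()(pooled_encoder_features, torch.randn(batch_size, 512))
```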

AdLER: Adversarial Training with Label Error Rectification for One-Shot Medical Image Segmentation

Sep 02, 2023
Xiangyu Zhao, Sheng Wang, Zhiyun Song, Zhenrong Shen, Linlin Yao, Haolei Yuan, Qian Wang, Lichi Zhang

Accurate automatic segmentation of medical images typically requires large datasets with high-quality annotations, making it less applicable in clinical settings where training data is limited. One-shot segmentation based on learned transformations (OSSLT) has shown promise when labeled data is extremely limited; it typically comprises unsupervised deformable registration, data augmentation with the learned registration, and segmentation learned from the augmented data. However, current one-shot segmentation methods are challenged by limited data diversity during augmentation and by potential label errors caused by imperfect registration. To address these issues, we propose a novel one-shot medical image segmentation method with adversarial training and label error rectification (AdLER), with the aim of improving the diversity of generated data and correcting label errors to enhance segmentation performance. Specifically, we implement a novel dual consistency constraint to ensure anatomy-aligned registration that reduces registration errors. Furthermore, we develop an adversarial training strategy to augment the atlas image, which ensures both generation diversity and segmentation robustness. We also propose to rectify potential label errors in the augmented atlas images by estimating segmentation uncertainty, which compensates for the imperfect nature of deformable registration and improves segmentation authenticity. Experiments on the CANDI and ABIDE datasets demonstrate that the proposed AdLER outperforms previous state-of-the-art methods by 0.7% (CANDI), 3.6% (ABIDE "seen"), and 4.9% (ABIDE "unseen") in Dice score. The source code will be available at https://github.com/hsiangyuzhao/AdLER.
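
One plausible reading of the uncertainty-based label rectification step is sketched below: where the segmentation network is confident, its prediction is kept, and where it is uncertain, the registration-propagated atlas label is retained. The entropy criterion and the threshold value are illustrative assumptions, not necessarily the paper's exact rule.

```python
import torch
import torch.nn.functional as F


def rectify_labels(logits: torch.Tensor, warped_labels: torch.Tensor,
                   entropy_thresh: float = 0.5) -> torch.Tensor:
    """logits: (B, C, ...) segmentation logits; warped_labels: (B, ...) integer labels
    propagated from the atlas by deformable registration."""
    probs = F.softmax(logits, dim=1)
    # per-voxel predictive entropy as an uncertainty estimate
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
    confident = entropy < entropy_thresh
    pred = probs.argmax(dim=1)
    # trust the confident network prediction, otherwise keep the warped atlas label
    return torch.where(confident, pred, warped_labels)
```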

CellGAN: Conditional Cervical Cell Synthesis for Augmenting Cytopathological Image Classification

Jul 12, 2023
Zhenrong Shen, Maosong Cao, Sheng Wang, Lichi Zhang, Qian Wang

Automatic examination of thin-prep cytologic test (TCT) slides can assist pathologists in finding cervical abnormality for accurate and efficient cancer screening. Because whole slide images of TCT are extremely large, current solutions mostly need to localize suspicious cells and classify abnormality based on local patches. This requires many annotations of normal and abnormal cervical cells to supervise the training of the patch-level classifier for promising performance. In this paper, we propose CellGAN to synthesize cytopathological images of various cervical cell types for augmenting patch-level cell classification. Built upon a lightweight backbone, CellGAN is equipped with a non-linear class mapping network to effectively incorporate cell type information into image generation. We also propose the Skip-layer Global Context module to model the complex spatial relationship of the cells, and attain high fidelity of the synthesized images through adversarial learning. Our experiments demonstrate that CellGAN can produce visually plausible TCT cytopathological images for different cell types. We also validate the effectiveness of using CellGAN to greatly boost patch-level cell classification performance.

* 26th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2023)  
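
The non-linear class mapping network can be pictured as a small embedding-plus-MLP module that turns a cell-type label into a conditioning vector injected into the generator; the five-class setting and the dimensions below are illustrative assumptions only.

```python
import torch
import torch.nn as nn


class ClassMapping(nn.Module):
    """Maps an integer cell-type label to a non-linear conditioning vector."""

    def __init__(self, num_classes: int = 5, embed_dim: int = 128, style_dim: int = 256):
        super().__init__()
        self.embed = nn.Embedding(num_classes, embed_dim)
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim, style_dim), nn.LeakyReLU(0.2),
            nn.Linear(style_dim, style_dim), nn.LeakyReLU(0.2),
        )

    def forward(self, cell_type: torch.Tensor) -> torch.Tensor:
        # cell_type: (B,) integer labels -> (B, style_dim) condition vector for the generator
        return self.mlp(self.embed(cell_type))
```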

Arbitrary Reduction of MRI Inter-slice Spacing Using Hierarchical Feature Conditional Diffusion

Apr 18, 2023
Xin Wang, Zhenrong Shen, Zhiyun Song, Sheng Wang, Mengjun Liu, Lichi Zhang, Kai Xuan, Qian Wang

Magnetic resonance (MR) images collected in 2D scanning protocols typically have large inter-slice spacing, resulting in high in-plane resolution but reduced through-plane resolution. Super-resolution techniques can reduce the inter-slice spacing of 2D scanned MR images, facilitating the downstream visual experience and computer-aided diagnosis. However, most existing super-resolution methods are trained at a fixed scaling ratio, which is inconvenient in clinical settings where MR scanning may have varying inter-slice spacings. To solve this issue, we propose Hierarchical Feature Conditional Diffusion (HiFi-Diff) for arbitrary reduction of MR inter-slice spacing. Given two adjacent MR slices and the relative positional offset, HiFi-Diff can iteratively convert a Gaussian noise map into any desired in-between MR slice. Furthermore, to enable fine-grained conditioning, the Hierarchical Feature Extraction (HiFE) module is proposed to hierarchically extract conditional features and conduct element-wise modulation. Our experimental results on the publicly available HCP-1200 dataset demonstrate the high-fidelity super-resolution capability of HiFi-Diff and its efficacy in enhancing downstream segmentation performance.
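
The iterative conversion of a Gaussian noise map into the in-between slice follows the usual conditional DDPM recipe; the generic ancestral sampling loop below illustrates it, with a hypothetical denoiser signature that takes the two neighbouring slices and their relative offset as conditions (this is not the released HiFi-Diff code).

```python
import torch


@torch.no_grad()
def sample_in_between_slice(denoiser, slice_a, slice_b, offset, betas):
    """slice_a/slice_b: (B, 1, H, W) neighbouring slices; offset in (0, 1); betas: (T,) schedule."""
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    x = torch.randn_like(slice_a)                       # start from pure Gaussian noise
    for t in reversed(range(len(betas))):
        eps = denoiser(x, slice_a, slice_b, offset, t)  # noise predicted under the conditions
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * eps) / torch.sqrt(alphas[t])    # posterior mean of x_{t-1}
        if t > 0:
            x = x + torch.sqrt(betas[t]) * torch.randn_like(x)
    return x                                            # the synthesized in-between slice
```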

TBI-GAN: An Adversarial Learning Approach for Data Synthesis on Traumatic Brain Segmentation

Aug 12, 2022
Xiangyu Zhao, Di Zang, Sheng Wang, Zhenrong Shen, Kai Xuan, Zeyu Wei, Zhe Wang, Ruizhe Zheng, Xuehai Wu, Zheren Li, Qian Wang, Zengxin Qi, Lichi Zhang

Brain network analysis for traumatic brain injury (TBI) patients is critical for assessing their consciousness level and evaluating prognosis, which requires the segmentation of certain consciousness-related brain regions. However, it is difficult to construct a TBI segmentation model, as manually annotated MR scans of TBI patients are hard to collect. Data augmentation techniques can be applied to alleviate the issue of data scarcity. However, conventional data augmentation strategies such as spatial and intensity transformations are unable to mimic the deformation and lesions in traumatic brains, which limits the performance of the subsequent segmentation task. To address these issues, we propose a novel medical image inpainting model named TBI-GAN to synthesize TBI MR scans with paired brain label maps. The main strength of our TBI-GAN method is that it can generate TBI images and corresponding label maps simultaneously, which has not been achieved by previous inpainting methods for medical images. We first generate the inpainted image under the guidance of edge information in a coarse-to-fine manner, and then the synthesized intensity image is used as the prior for label inpainting. Furthermore, we introduce a registration-based template augmentation pipeline to increase the diversity of the synthesized image pairs and enhance the capacity of data augmentation. Experimental results show that the proposed TBI-GAN method can produce abundant high-quality synthesized TBI images with valid label maps, which can greatly improve 2D and 3D traumatic brain segmentation performance compared with the alternatives.
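
At a high level, the two-stage synthesis described above can be read as the pipeline sketched below: intensity inpainting under edge guidance first, then label inpainting conditioned on the synthesized intensities. Both generator callables and their signatures are placeholders introduced purely for illustration.

```python
import torch


def synthesize_tbi_pair(image, label, lesion_mask, edge_map,
                        image_inpainter, label_inpainter):
    """image/label/lesion_mask/edge_map: (B, 1, H, W); lesion_mask marks the region to rewrite."""
    masked_image = image * (1 - lesion_mask)
    # stage 1: coarse-to-fine intensity inpainting guided by the edge map
    fake_image = image_inpainter(masked_image, edge_map, lesion_mask)
    # stage 2: the synthesized intensities act as a prior for label-map inpainting
    masked_label = label * (1 - lesion_mask)
    fake_label = label_inpainter(masked_label, fake_image, lesion_mask)
    return fake_image, fake_label       # a paired image and label map for augmentation
```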

Image Synthesis with Disentangled Attributes for Chest X-Ray Nodule Augmentation and Detection

Jul 19, 2022
Zhenrong Shen, Xi Ouyang, Bin Xiao, Jie-Zhi Cheng, Qian Wang, Dinggang Shen

Lung nodule detection in chest X-ray (CXR) images is common in the early screening of lung cancer. Deep-learning-based Computer-Assisted Diagnosis (CAD) systems can support radiologists in nodule screening on CXR. However, training such robust and accurate CAD systems requires large-scale and diverse medical data with high-quality annotations. To alleviate the limited availability of such datasets, lung nodule synthesis methods have been proposed for data augmentation. Nevertheless, previous methods lack the ability to generate nodules that are both realistic and of the size desired by the detector. To address this issue, we introduce a novel lung nodule synthesis framework in this paper, which decomposes nodule attributes into three main aspects: shape, size, and texture. A GAN-based Shape Generator first models nodule shapes by generating diverse shape masks. The following Size Modulation then enables quantitative control over the diameters of the generated nodule shapes at pixel-level granularity. A coarse-to-fine gated convolutional Texture Generator finally synthesizes visually plausible nodule textures conditioned on the modulated shape masks. Moreover, we propose to synthesize nodule CXR images by controlling the disentangled nodule attributes for data augmentation, in order to better compensate for the nodules that are easily missed in the detection task. Our experiments demonstrate the enhanced image quality, diversity, and controllability of the proposed lung nodule synthesis framework. We also validate the effectiveness of our data augmentation in greatly improving nodule detection performance.
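
The Size Modulation step can be thought of as rescaling a generated shape mask so that its diameter matches a requested pixel value, as in the sketch below; the bounding-box diameter measurement is a deliberate simplification for illustration, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F


def modulate_size(shape_mask: torch.Tensor, target_diameter: int) -> torch.Tensor:
    """shape_mask: (1, 1, H, W) binary (0/1) float mask of a generated, non-empty nodule shape."""
    ys, xs = torch.nonzero(shape_mask[0, 0] > 0.5, as_tuple=True)
    # approximate the current diameter by the longer side of the bounding box
    current = max(ys.max() - ys.min(), xs.max() - xs.min()).item() + 1
    scale = target_diameter / current
    new_size = (int(shape_mask.shape[2] * scale), int(shape_mask.shape[3] * scale))
    # nearest-neighbour resampling keeps the mask binary after rescaling
    return F.interpolate(shape_mask, size=new_size, mode="nearest")
```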
