Jiancheng Yang

Efficient Anatomical Labeling of Pulmonary Tree Structures via Implicit Point-Graph Networks

Sep 29, 2023
Kangxian Xie, Jiancheng Yang, Donglai Wei, Ziqiao Weng, Pascal Fua

Pulmonary diseases rank prominently among the principal causes of death worldwide. Curing them will require, among other things, a better understanding of the many complex 3D tree-shaped structures within the pulmonary system, such as airways, arteries, and veins. In theory, they can be modeled using high-resolution image stacks. Unfortunately, standard CNN approaches operating on dense voxel grids are prohibitively expensive. To remedy this, we introduce a point-based approach that preserves the graph connectivity of the tree skeleton and incorporates an implicit surface representation. It delivers state-of-the-art accuracy at a low computational cost, and the resulting models have usable surfaces. Due to the scarcity of publicly accessible data, we have also curated an extensive dataset to evaluate our approach and will make it public.
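
Below is a minimal sketch of the point-plus-graph idea: per-point features on the tree skeleton are refined by message passing along the skeleton's edges. It is an illustration under assumed shapes and module names (PointGraphBlock is hypothetical), not the paper's implementation.

```python
# Hypothetical sketch: refine skeleton-point features via message passing
# along the tree's graph connectivity (PyTorch; names are illustrative).
import torch
import torch.nn as nn

class PointGraphBlock(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.edge_mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())
        self.point_mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x: torch.Tensor, edges: torch.Tensor) -> torch.Tensor:
        # x: (N, dim) features of skeleton points; edges: (E, 2) index pairs
        src, dst = edges[:, 0], edges[:, 1]
        messages = self.edge_mlp(torch.cat([x[src], x[dst]], dim=-1))
        agg = torch.zeros_like(x).index_add_(0, dst, messages)  # sum incoming messages
        return self.point_mlp(x + agg)

# Toy usage: five skeleton points connected as a simple path.
feats = torch.randn(5, 16)
edges = torch.tensor([[0, 1], [1, 2], [2, 3], [3, 4]])
print(PointGraphBlock(16)(feats, edges).shape)  # torch.Size([5, 16])
```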

Scale-aware Test-time Click Adaptation for Pulmonary Nodule and Mass Segmentation

Jul 28, 2023
Zhihao Li, Jiancheng Yang, Yongchao Xu, Li Zhang, Wenhui Dong, Bo Du

Pulmonary nodules and masses are crucial imaging features in lung cancer screening that require careful management in clinical diagnosis. Despite the success of deep learning-based medical image segmentation, robust performance across the wide range of lesion sizes, from nodules to masses, remains challenging. In this paper, we propose a multi-scale neural network with scale-aware test-time adaptation to address this challenge. Specifically, we introduce a Scale-aware Test-time Click Adaptation method that uses effortlessly obtainable lesion clicks as test-time cues to enhance segmentation performance, particularly for large lesions. The proposed method can be seamlessly integrated into existing networks. Extensive experiments on both open-source and in-house datasets consistently demonstrate the effectiveness of the proposed method over several CNN- and Transformer-based segmentation methods. Our code is available at https://github.com/SplinterLi/SaTTCA
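
As a rough illustration of click-guided test-time adaptation, the sketch below fine-tunes a segmentation network for a few gradient steps so that its prediction at a user-provided click becomes foreground. All names (click_adapt, the loss, the step count) are assumptions for illustration, not the SaTTCA implementation.

```python
# Hypothetical sketch: adapt a 3D segmentation net at test time using a
# single lesion click as supervision (PyTorch; illustrative only).
import torch
import torch.nn.functional as F

def click_adapt(model, image, click_zyx, steps=5, lr=1e-4):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    z, y, x = click_zyx
    for _ in range(steps):
        logits = model(image)                        # (1, 1, D, H, W)
        prob = torch.sigmoid(logits[0, 0, z, y, x])  # probability at the click
        loss = F.binary_cross_entropy(prob, torch.ones_like(prob))
        opt.zero_grad(); loss.backward(); opt.step()
    return model

# Toy usage with a stand-in "network".
net = torch.nn.Conv3d(1, 1, kernel_size=3, padding=1)
vol = torch.randn(1, 1, 16, 32, 32)
click_adapt(net, vol, (8, 16, 16))
```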

* 11 pages, 3 figures, MICCAI 2023 

Enforcing Topological Interaction between Implicit Surfaces via Uniform Sampling

Jul 16, 2023
Hieu Le, Nicolas Talabot, Jiancheng Yang, Pascal Fua

Objects interact with each other in various ways, including containment, contact, or maintaining fixed distances. Ensuring these topological interactions is crucial for accurate modeling in many scenarios. In this paper, we propose a novel method to refine 3D object representations, ensuring that their surfaces adhere to a topological prior. Our key observation is that object interaction can be captured via a stochastic approximation: the statistics of the signed distances from a large number of random points to the object surfaces reflect the interaction between the objects. Thus, the interaction can be manipulated indirectly by choosing a set of points as anchors with which to refine the object surfaces. In particular, we show that our method can enforce a specific contact ratio between two objects while preventing any surface intersection. The conducted experiments show that our proposed method enables accurate 3D reconstruction of human hearts, ensuring proper topological connectivity between components. Further, we show that it can be used to simulate various ways a hand can interact with an arbitrary object.
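
The key statistic is easy to emulate: sample uniform points and read off the signs of each object's signed distance function (SDF). Below is a minimal sketch using analytic sphere SDFs as stand-ins for learned implicit surfaces; it demonstrates only the estimation step, not the refinement.

```python
# Minimal sketch of the stochastic approximation: the fraction of random
# points inside both surfaces estimates their overlap (NumPy; spheres
# stand in for learned implicit surfaces).
import numpy as np

def sphere_sdf(points, center, radius):
    return np.linalg.norm(points - center, axis=1) - radius

rng = np.random.default_rng(0)
pts = rng.uniform(-2.0, 2.0, size=(100_000, 3))   # uniform anchor points

d_a = sphere_sdf(pts, np.array([0.0, 0.0, 0.0]), 1.0)
d_b = sphere_sdf(pts, np.array([0.8, 0.0, 0.0]), 1.0)

overlap = np.mean((d_a < 0) & (d_b < 0))          # inside both objects
a_only = np.mean((d_a < 0) & (d_b >= 0))          # inside A but not B
print(f"overlap fraction: {overlap:.4f}, A-only fraction: {a_only:.4f}")
```

Points flagged by such statistics can then serve as the anchors at which the surfaces are refined.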

Topology Repairing of Disconnected Pulmonary Airways and Vessels: Baselines and a Dataset

Jun 28, 2023
Ziqiao Weng, Jiancheng Yang, Dongnan Liu, Weidong Cai

Accurate segmentation of pulmonary airways and vessels is crucial for the diagnosis and treatment of pulmonary diseases. However, current deep learning approaches suffer from disconnectivity issues that hinder their clinical usefulness. To address this challenge, we propose a data-driven post-processing approach that repairs the topology of disconnected pulmonary tubular structures. Our approach formulates the problem as a keypoint detection task, in which a neural network is trained to predict keypoints that can bridge disconnected components. We use a training data synthesis pipeline that generates disconnected data from complete pulmonary structures. Moreover, we introduce the Pulmonary Tree Repairing (PTR) dataset, which comprises 800 complete 3D models of pulmonary airways, arteries, and veins, along with the synthetic disconnected data, and is publicly available. Our code and data are available at https://github.com/M3DV/pulmonary-tree-repairing.
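
A toy version of the synthesis idea: cut a gap out of a complete tubular mask and keep the two cut endpoints as the keypoints a network should learn to predict. The function below is a hypothetical illustration, not the paper's pipeline.

```python
# Hypothetical sketch: synthesize a "disconnected" training sample from a
# complete structure by removing a segment along z (NumPy toy example).
import numpy as np

def synthesize_break(mask, gap_start, gap_len, axis_xy=(8, 8)):
    broken = mask.copy()
    broken[gap_start:gap_start + gap_len] = 0        # remove a segment
    kp_a = (gap_start - 1, *axis_xy)                 # last voxel before the gap
    kp_b = (gap_start + gap_len, *axis_xy)           # first voxel after the gap
    return broken, (kp_a, kp_b)

tube = np.zeros((32, 16, 16), dtype=np.uint8)
tube[:, 8, 8] = 1                                    # a complete toy "vessel"
broken, keypoints = synthesize_break(tube, gap_start=12, gap_len=4)
print(int(broken.sum()), keypoints)                  # 28 voxels left, two endpoints
```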

* MICCAI 2023 Early Accepted 

The Impact of ChatGPT and LLMs on Medical Imaging Stakeholders: Perspectives and Use Cases

Jun 11, 2023
Jiancheng Yang, Hongwei Bran Li, Donglai Wei

This study investigates the transformative potential of Large Language Models (LLMs), such as OpenAI's ChatGPT, in medical imaging. With the aid of public data, these models, which possess remarkable language understanding and generation capabilities, are augmenting the interpretive skills of radiologists, enhancing patient-physician communication, and streamlining clinical workflows. The paper introduces an analytic framework for characterizing the complex interactions between LLMs and the broader ecosystem of medical imaging stakeholders, including businesses, insurance entities, governments, research institutions, and hospitals (nicknamed BIGR-H). Through detailed analyses, illustrative use cases, and discussions of the broader implications and future directions, this perspective seeks to stimulate discussion on strategic planning and decision-making in the era of AI-enabled healthcare.

Multi-site, Multi-domain Airway Tree Modeling (ATM'22): A Public Benchmark for Pulmonary Airway Segmentation

Mar 10, 2023
Minghui Zhang, Yangqian Wu, Hanxiao Zhang, Yulei Qin, Hao Zheng, Wen Tang, Corey Arnold, Chenhao Pei, Pengxin Yu, Yang Nan, Guang Yang, Simon Walsh, Dominic C. Marshall, Matthieu Komorowski, Puyang Wang, Dazhou Guo, Dakai Jin, Ya'nan Wu, Shuiqing Zhao, Runsheng Chang, Boyu Zhang, Xing Lv, Abdul Qayyum, Moona Mazher, Qi Su, Yonghuang Wu, Ying'ao Liu, Yufei Zhu, Jiancheng Yang, Ashkan Pakzad, Bojidar Rangelov, Raul San Jose Estepar, Carlos Cano Espinosa, Jiayuan Sun, Guang-Zhong Yang, Yun Gu

Open international challenges are becoming the de facto standard for assessing computer vision and image analysis algorithms. In recent years, new methods have pushed pulmonary airway segmentation closer to the limit of image resolution. Yet since the EXACT'09 pulmonary airway segmentation challenge, limited effort has been directed to the quantitative comparison of newly emerged algorithms, despite the maturity of deep learning-based approaches and the clinical drive to resolve finer details of distal airways for early intervention in pulmonary diseases. Thus far, publicly available annotated datasets are extremely limited, hindering the development of data-driven methods and the detailed performance evaluation of new algorithms. To provide a benchmark for the medical imaging community, we organized the Multi-site, Multi-domain Airway Tree Modeling challenge (ATM'22), held as an official challenge event during the MICCAI 2022 conference. ATM'22 provides large-scale CT scans with detailed pulmonary airway annotations, comprising 500 CT scans (300 for training, 50 for validation, and 150 for testing). The dataset was collected from different sites and further includes a portion of noisy COVID-19 CT scans with ground-glass opacity and consolidation. Twenty-three teams participated in the entire phase of the challenge, and the algorithms of the top ten teams are reviewed in this paper. Quantitative and qualitative results revealed that deep learning models embedded with topological continuity enhancement achieved superior performance in general. ATM'22 follows an open-call design: the training data and the gold-standard evaluation are available upon successful registration via its homepage.

* 32 pages, 16 figures. Homepage: https://atm22.grand-challenge.org/. Submitted 

SGDA: Towards 3D Universal Pulmonary Nodule Detection via Slice Grouped Domain Attention

Mar 07, 2023
Rui Xu, Zhi Liu, Yong Luo, Han Hu, Li Shen, Bo Du, Kaiming Kuang, Jiancheng Yang

Lung cancer is the leading cause of cancer death worldwide. The best strategy against lung cancer is to diagnose pulmonary nodules at an early stage, which is usually accomplished with the aid of thoracic computed tomography (CT). As deep learning thrives, convolutional neural networks (CNNs) have been introduced into pulmonary nodule detection to assist doctors in this labor-intensive task and have proven very effective. However, current pulmonary nodule detection methods are usually domain-specific and cannot meet the demands of diverse real-world scenarios. To address this issue, we propose a slice grouped domain attention (SGDA) module to enhance the generalization capability of pulmonary nodule detection networks. This attention module works in the axial, coronal, and sagittal directions. In each direction, we divide the input feature into groups, and for each group we use a universal adapter bank to capture the feature subspaces of the domains spanned by all pulmonary nodule datasets. The bank outputs are then combined domain-wise to modulate the input group. Extensive experiments demonstrate that SGDA enables substantially better multi-domain pulmonary nodule detection performance than state-of-the-art multi-domain learning methods.
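
To make the grouping-plus-adapter-bank idea concrete, here is a simplified single-direction sketch: channels are split into groups, each group is passed through a small bank of per-domain adapters, and the bank outputs are mixed with learned domain weights to modulate the group. This is an assumption-laden illustration, not the published SGDA module.

```python
# Simplified sketch of a grouped domain-adapter bank (PyTorch; the class
# name, gating, and mixing scheme are illustrative assumptions).
import torch
import torch.nn as nn

class GroupedAdapterBank(nn.Module):
    def __init__(self, channels: int, groups: int, num_domains: int):
        super().__init__()
        assert channels % groups == 0
        g = channels // groups
        self.groups = groups
        self.bank = nn.ModuleList([nn.Conv3d(g, g, kernel_size=1) for _ in range(num_domains)])
        self.domain_logits = nn.Parameter(torch.zeros(num_domains))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = torch.softmax(self.domain_logits, dim=0)      # domain mixture weights
        outs = []
        for chunk in torch.chunk(x, self.groups, dim=1):  # split channels into groups
            mixed = sum(wi * adapter(chunk) for wi, adapter in zip(w, self.bank))
            outs.append(chunk * torch.sigmoid(mixed))     # modulate the group
        return torch.cat(outs, dim=1)

feat = torch.randn(2, 32, 8, 16, 16)                      # (batch, C, D, H, W)
print(GroupedAdapterBank(32, groups=4, num_domains=3)(feat).shape)
```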

* Accepted by IEEE/ACM Transactions on Computational Biology and Bioinformatics 

ViT-AE++: Improving Vision Transformer Autoencoder for Self-supervised Medical Image Representations

Jan 18, 2023
Chinmay Prabhakar, Hongwei Bran Li, Jiancheng Yang, Suprosana Shit, Benedikt Wiestler, Bjoern Menze

Self-supervised learning has attracted increasing attention because it learns data-driven representations without annotations. The vision transformer-based autoencoder (ViT-AE) of He et al. (2021) is a recent self-supervised technique that employs a patch-masking strategy to learn a meaningful latent space. In this paper, we focus on improving ViT-AE (nicknamed ViT-AE++) for a more effective representation of both 2D and 3D medical images. We propose two new loss functions to enhance the representation during training. The first loss term improves self-reconstruction by considering structured dependencies, thereby indirectly improving the representation. The second loss term leverages a contrastive loss to directly optimize the representation from two randomly masked views. As an independent contribution, we extend ViT-AE++ to 3D for volumetric medical images. We extensively evaluate ViT-AE++ on both natural and medical images, demonstrating consistent improvement over vanilla ViT-AE and its superiority over other contrastive learning approaches.
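
The second loss term is in the spirit of standard InfoNCE between paired views. A minimal, generic version (not necessarily the paper's exact formulation) looks like this:

```python
# Minimal InfoNCE-style contrastive loss between embeddings of two
# randomly masked views of the same images (PyTorch; illustrative).
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1):
    # z1, z2: (B, dim) view embeddings, paired row by row
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature       # (B, B) cosine similarities
    targets = torch.arange(z1.size(0))       # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

z_a, z_b = torch.randn(8, 128), torch.randn(8, 128)
print(info_nce(z_a, z_b).item())
```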

* under review. C. Prabhakar and H. B. Li contribute equally. Codes will be available soon 

RibSeg v2: A Large-scale Benchmark for Rib Labeling and Anatomical Centerline Extraction

Oct 18, 2022
Liang Jin, Shixuan Gu, Donglai Wei, Kaiming Kuang, Hanspeter Pfister, Bingbing Ni, Jiancheng Yang, Ming Li

Automatic rib labeling and anatomical centerline extraction are common prerequisites for various clinical applications. Prior studies either use in-house datasets that are inaccessible to the community or focus on rib segmentation while neglecting the clinical significance of rib labeling. To address these issues, we extend our prior dataset (RibSeg) on the binary rib segmentation task to a comprehensive benchmark, named RibSeg v2, with 660 CT scans (15,466 individual ribs in total) and expert-inspected annotations for rib labeling and anatomical centerline extraction. Based on RibSeg v2, we develop a pipeline comprising deep learning-based methods for rib labeling and a skeletonization-based method for centerline extraction. To improve computational efficiency, we propose a sparse point cloud representation of CT scans and compare it with standard dense voxel grids. Moreover, we design and analyze evaluation metrics to address the key challenges of each task. Our dataset, code, and model are available online to facilitate open research at https://github.com/M3DV/RibSeg
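
The sparse representation can be sketched in a few lines: keep only the coordinates of foreground voxels (optionally subsampled) instead of the full dense grid. The helper below is an illustrative assumption, not the repository's code.

```python
# Hypothetical sketch: convert a binary CT mask into a normalized sparse
# point cloud (NumPy; function name and normalization are illustrative).
import numpy as np

def volume_to_points(volume: np.ndarray, max_points: int = 30_000) -> np.ndarray:
    coords = np.argwhere(volume > 0).astype(np.float32)     # (N, 3) voxel indices
    if len(coords) > max_points:                            # subsample if too dense
        idx = np.random.default_rng(0).choice(len(coords), max_points, replace=False)
        coords = coords[idx]
    return coords / np.array(volume.shape, dtype=np.float32)  # scale to [0, 1)

vol = np.zeros((64, 64, 64), dtype=np.uint8)
vol[20:30, 30:34, 30:34] = 1                                 # toy rib-like blob
print(volume_to_points(vol).shape)                           # (160, 3)
```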

* 10 pages, 6 figures, journal 