Boah Kim

C-DARL: Contrastive diffusion adversarial representation learning for label-free blood vessel segmentation

Jul 31, 2023
Boah Kim, Yujin Oh, Bradford J. Wood, Ronald M. Summers, Jong Chul Ye

Blood vessel segmentation in medical imaging is an essential step for vascular disease diagnosis and interventional planning across a broad spectrum of clinical scenarios in image-based and interventional medicine. Unfortunately, manual annotation of vessel masks is challenging and resource-intensive due to subtle branches and complex structures. To overcome this issue, this paper presents a self-supervised vessel segmentation method, dubbed the contrastive diffusion adversarial representation learning (C-DARL) model. Our model is composed of a diffusion module and a generation module that learn the distribution of multi-domain blood vessel data by generating synthetic vessel images from diffusion latents. Moreover, we employ contrastive learning through a mask-based contrastive loss so that the model can learn more realistic vessel representations. To validate its efficacy, C-DARL is trained using various vessel datasets, including coronary angiograms, abdominal digital subtraction angiograms, and retinal imaging. Experimental results confirm that our model improves on baseline methods while remaining robust to noise, suggesting the effectiveness of C-DARL for vessel segmentation.
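
The abstract does not spell out the exact form of the mask-based contrastive loss, so the following is only a minimal sketch of one plausible variant: an InfoNCE-style objective over features pooled under the predicted vessel mask, where the vessel embeddings of two augmented views of the same image form the positive pair and background embeddings serve as extra negatives. All interfaces and coefficients here are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def mask_contrastive_loss(f1, f2, mask1, mask2, tau=0.1):
    """Hypothetical mask-based contrastive loss (not the paper's exact form).
    f1, f2: (B, C, H, W) features of two augmented views; mask1, mask2:
    (B, 1, H, W) predicted vessel masks in [0, 1]."""
    def pool(f, m):
        # Average features under a soft mask, then L2-normalize.
        return F.normalize((f * m).sum((2, 3)) / (m.sum((2, 3)) + 1e-6), dim=1)

    v1, v2 = pool(f1, mask1), pool(f2, mask2)  # vessel embeddings, (B, C)
    b1 = pool(f1, 1 - mask1)                   # background embeddings as extra negatives
    logits = torch.cat([v1 @ v2.t(), v1 @ b1.t()], dim=1) / tau  # (B, 2B)
    labels = torch.arange(v1.size(0), device=v1.device)  # positive: the other view of the same image
    return F.cross_entropy(logits, labels)
```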

Diffusion Adversarial Representation Learning for Self-supervised Vessel Segmentation

Sep 29, 2022
Boah Kim, Yujin Oh, Jong Chul Ye

Vessel segmentation in medical images is an important task for the diagnosis of vascular diseases and therapy planning. Although learning-based segmentation approaches have been extensively studied, supervised methods require a large number of ground-truth labels, while confusing background structures make it hard for neural networks to segment vessels in an unsupervised manner. To address this, here we introduce a novel diffusion adversarial representation learning (DARL) model that leverages a denoising diffusion probabilistic model with adversarial learning, and apply it to vessel segmentation. In particular, for self-supervised vessel segmentation, DARL learns the background image distribution using a diffusion module, which lets a generation module effectively provide vessel representations. Also, through adversarial learning based on the proposed switchable spatially-adaptive denormalization, our model estimates synthetic vessel images as well as vessel segmentation masks, which further helps the model capture vessel-relevant semantic information. Once trained, the model generates segmentation masks in a single step and can be applied to general vascular structure segmentation in coronary angiography and retinal images. Experimental results on various datasets show that our method significantly outperforms existing unsupervised and self-supervised vessel segmentation methods.
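
Spatially-adaptive denormalization (SPADE, Park et al. 2019) modulates normalized activations with per-pixel scale and shift predicted from a semantic layout; the "switchable" variant named in the abstract suggests a block that applies this modulation only on the path that receives a vessel mask. The sketch below follows the generic SPADE formulation with a simple switch; it is an illustration, not the paper's implementation.

```python
import torch.nn as nn
import torch.nn.functional as F

class SwitchableSPADE(nn.Module):
    """SPADE-style normalization with a switch: given a label map, normalized
    activations are modulated by per-pixel scale/shift predicted from it;
    without one, the block falls back to plain unconditional normalization.
    A sketch after Park et al. (2019), not the paper's exact block."""

    def __init__(self, channels, label_channels, hidden=128):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels, affine=False)
        self.shared = nn.Sequential(
            nn.Conv2d(label_channels, hidden, 3, padding=1), nn.ReLU())
        self.gamma = nn.Conv2d(hidden, channels, 3, padding=1)
        self.beta = nn.Conv2d(hidden, channels, 3, padding=1)

    def forward(self, x, label_map=None):
        h = self.norm(x)
        if label_map is None:              # segmentation path: no modulation
            return h
        label_map = F.interpolate(label_map, size=x.shape[2:], mode="nearest")
        s = self.shared(label_map)         # vessel-mask path: spatial modulation
        return h * (1 + self.gamma(s)) + self.beta(s)
```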

Diffusion Deformable Model for 4D Temporal Medical Image Generation

Jun 27, 2022
Boah Kim, Jong Chul Ye

Temporal volume images with 3D+t (4D) information are often used in medical imaging to statistically analyze temporal dynamics or capture disease progression. Although deep-learning-based generative models for natural images have been extensively studied, approaches for temporal medical image generation, such as 4D cardiac volume data, remain limited. In this work, we present a novel deep learning model that generates intermediate temporal volumes between source and target volumes. Specifically, we propose a diffusion deformable model (DDM) by adapting the denoising diffusion probabilistic model that has recently been widely investigated for realistic image generation. Our DDM is composed of a diffusion module and a deformation module, so that it can learn the spatial deformation between the source and target volumes and provide a latent code for generating intermediate frames along a geodesic path. Once the model is trained, the latent code estimated by the diffusion module is simply interpolated and fed into the deformation module, which enables DDM to generate temporal frames along a continuous trajectory while preserving the topology of the source image. We demonstrate the proposed method on 4D cardiac MR image generation between the diastolic and systolic phases of each subject. Compared to existing deformation methods, DDM achieves high performance on temporal volume generation.
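
The interpolation step the abstract describes can be sketched in a few lines: estimate the latent code for the full source-to-target deformation once, then scale it to produce partial deformations along the trajectory. The module interfaces (`diffusion_module`, `deform_module`) below are assumed for illustration and do not reflect the paper's actual API.

```python
import torch

@torch.no_grad()
def generate_intermediate_frames(diffusion_module, deform_module, source, target, num_frames=8):
    """Sketch of latent interpolation between source and target volumes.
    Both module interfaces are hypothetical stand-ins for the paper's modules."""
    z = diffusion_module(source, target)  # latent code for the full deformation (assumed API)
    frames = []
    for alpha in torch.linspace(0.0, 1.0, num_frames):
        # A scaled latent yields a partial deformation; the deformation module
        # warps the source accordingly, preserving its topology.
        warped = deform_module(source, alpha * z)  # assumed API
        frames.append(warped)
    return frames
```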

* Accepted for MICCAI 2022 

DiffuseMorph: Unsupervised Deformable Image Registration Along Continuous Trajectory Using Diffusion Models

Dec 09, 2021
Boah Kim, Inhwa Han, Jong Chul Ye

Deformable image registration is one of the fundamental tasks in medical imaging and computer vision. Classical registration algorithms usually rely on iterative optimization to provide accurate deformation, which incurs a high computational cost. Although many deep-learning-based methods have been developed for fast image registration, it is still challenging to estimate a deformation field with less topological folding. Furthermore, these approaches only enable registration to a single fixed image, so continuously varying registration results between the moving and fixed images cannot be obtained. To address this, here we present a novel diffusion-model-based probabilistic image registration approach, called DiffuseMorph. Specifically, our model learns the score function of the deformation between moving and fixed images. Similar to existing diffusion models, DiffuseMorph not only provides synthetic deformed images through a reverse diffusion process, but also enables various levels of deformation of the moving image along the latent space. Experimental results on 2D facial expression and 3D brain image registration tasks demonstrate that our method provides flexible and accurate deformation with topology preservation.
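
Unsupervised deformable registration networks of this kind are typically trained with an image-similarity term plus a smoothness penalty on the predicted flow; the paper pairs such a registration objective with a diffusion (score-matching) loss. The sketch below shows only a generic registration term with illustrative weighting, not the paper's full objective.

```python
import torch
import torch.nn.functional as F

def registration_loss(warped, fixed, flow, lam=0.01):
    """Generic unsupervised registration objective (illustrative weights).
    warped, fixed: (B, 1, H, W) images; flow: (B, 2, H, W) displacement field."""
    sim = F.mse_loss(warped, fixed)  # similarity between warped moving and fixed images
    # Smoothness: penalize spatial gradients of the flow to discourage folding.
    dx = flow[:, :, :, 1:] - flow[:, :, :, :-1]
    dy = flow[:, :, 1:, :] - flow[:, :, :-1, :]
    smooth = dx.pow(2).mean() + dy.pow(2).mean()
    return sim + lam * smooth
```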

Federated Split Vision Transformer for COVID-19 CXR Diagnosis using Task-Agnostic Training

Nov 03, 2021
Sangjoon Park, Gwanghyun Kim, Jeongsol Kim, Boah Kim, Jong Chul Ye

Federated learning, which shares the weights of a neural network across clients, is gaining attention in the healthcare sector as it enables training on a large corpus of decentralized data while maintaining data privacy. For example, it enables neural network training for COVID-19 diagnosis on chest X-ray (CXR) images without collecting patient CXR data from multiple hospitals. Unfortunately, the exchange of weights quickly consumes network bandwidth if a highly expressive network architecture is employed. So-called split learning partially solves this problem by dividing a neural network into client and server parts, so that the client part requires less computation and bandwidth. However, it is not clear how to find the optimal split without sacrificing overall network performance. To amalgamate these methods and thereby maximize their distinct strengths, here we show that the Vision Transformer, a recently developed deep learning architecture with a straightforwardly decomposable configuration, is ideally suited to split learning without sacrificing performance. Even under non-independent and identically distributed data, which emulates a real collaboration between hospitals using CXR datasets from multiple sources, the proposed framework attained performance comparable to data-centralized training. In addition, the proposed framework with heterogeneous multi-task clients also improves individual task performance, including COVID-19 diagnosis, eliminating the need to share large weights with innumerable parameters. Our results affirm the suitability of the Transformer for collaborative learning in medical imaging and pave the way for future real-world implementations.
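
The Transformer's token-in, token-out body makes the split straightforward: the client keeps the patch embedding and task head locally, while the heavy, task-agnostic Transformer body runs on the server, so only intermediate token features cross the network. The sketch below illustrates this split in PyTorch with illustrative layer sizes; it is a schematic, not the paper's architecture.

```python
import torch.nn as nn

class ClientPart(nn.Module):
    """Client side: patch embedding and task head stay local, so only token
    features (not raw CXR images or the full weights) cross the network."""
    def __init__(self, patch=16, dim=768, num_classes=2):
        super().__init__()
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)  # patchify
        self.head = nn.Linear(dim, num_classes)

    def embed_tokens(self, x):
        return self.embed(x).flatten(2).transpose(1, 2)  # (B, N, dim) tokens to send

    def classify(self, server_tokens):
        return self.head(server_tokens.mean(dim=1))      # pooled tokens returned by the server

class ServerPart(nn.Module):
    """Server side: the heavy, task-agnostic Transformer body shared by all clients."""
    def __init__(self, dim=768, depth=12, heads=12):
        super().__init__()
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.body = nn.TransformerEncoder(layer, depth)

    def forward(self, tokens):
        return self.body(tokens)
```

In a training step, the client sends `embed_tokens(x)` to the server, receives the processed tokens, computes the task loss locally, and backpropagates gradients through the same cut.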

* Accepted for NeurIPS 2021 

CycleMorph: Cycle Consistent Unsupervised Deformable Image Registration

Aug 13, 2020
Boah Kim, Dong Hwan Kim, Seong Ho Park, Jieun Kim, June-Goo Lee, Jong Chul Ye

Image registration is a fundamental task in medical image analysis. Recently, deep-learning-based image registration methods have been extensively investigated due to their excellent performance and ultra-fast computation time. However, the existing deep learning methods still have limitations in preserving the original topology during deformation with registration vector fields. To address this issue, here we present a cycle-consistent deformable image registration method. The cycle consistency enhances registration performance by providing an implicit regularization that preserves topology during deformation. The proposed method is flexible enough to be applied to both 2D and 3D registration problems for various applications, and can easily be extended to a multi-scale implementation to deal with memory issues in large-volume registration. Experimental results on various datasets from medical and non-medical applications demonstrate that the proposed method provides effective and accurate registration on diverse image pairs within a few seconds. Qualitative and quantitative evaluations of the deformation fields also verify the effectiveness of the cycle consistency.
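
The cycle constraint itself is compact: deform A toward B, deform the result back toward A, and penalize the difference from the original image, which implicitly discourages folding. The sketch below assumes a network `net` that predicts a flow from a (moving, fixed) pair and a `warp` operator that applies it; both interfaces are illustrative.

```python
import torch.nn.functional as F

def cycle_consistency_loss(net, warp, img_a, img_b):
    """Cycle term only; the similarity and smoothness registration losses
    are added separately. `net` and `warp` are assumed interfaces."""
    flow_ab = net(img_a, img_b)
    a_to_b = warp(img_a, flow_ab)   # A deformed toward B
    flow_ba = net(a_to_b, img_a)
    a_back = warp(a_to_b, flow_ba)  # ...and deformed back toward A
    return F.l1_loss(a_back, img_a)
```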

Unsupervised Deformable Image Registration Using Cycle-Consistent CNN

Jul 02, 2019
Boah Kim, Jieun Kim, June-Goo Lee, Dong Hwan Kim, Seong Ho Park, Jong Chul Ye

Medical image registration is one of the key processing steps for biomedical image analysis, such as cancer diagnosis. Recently, deep-learning-based supervised and unsupervised image registration methods have been extensively studied due to their excellent performance and ultra-fast computation time compared to classical approaches. In this paper, we present a novel unsupervised medical image registration method that trains a deep neural network for deformable registration of 3D volumes using cycle consistency. Thanks to the cycle consistency, the proposed network can take diverse pairs of image data with severe deformation and still register them accurately. Experimental results using multiphase liver CT images demonstrate that our method provides very precise 3D image registration within a few seconds, resulting in more accurate cancer size estimation.
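
The differentiable warping layer at the core of such unsupervised 3D registration networks is a spatial transformer: it resamples the moving volume at coordinates displaced by the predicted flow. A minimal sketch using `torch.nn.functional.grid_sample` follows; tensor layouts are the usual PyTorch conventions, and the function is a generic building block rather than the paper's code.

```python
import torch
import torch.nn.functional as F

def warp_volume(volume, flow):
    """Warp a 3D volume with a dense displacement field (a spatial transformer).
    volume: (B, 1, D, H, W); flow: (B, 3, D, H, W) displacements in voxels,
    ordered (dz, dy, dx) along the channel dimension."""
    B, _, D, H, W = volume.shape
    # Identity grid of voxel coordinates, (3, D, H, W) ordered (z, y, x).
    zz, yy, xx = torch.meshgrid(
        torch.arange(D), torch.arange(H), torch.arange(W), indexing="ij")
    grid = torch.stack((zz, yy, xx)).float().to(volume.device)
    coords = grid.unsqueeze(0) + flow  # displaced sampling coordinates
    # Normalize each axis to [-1, 1] and reorder to (x, y, z) as grid_sample expects.
    norm = [2 * coords[:, i] / (s - 1) - 1 for i, s in enumerate((D, H, W))]
    coords = torch.stack(norm, dim=-1)[..., [2, 1, 0]]  # (B, D, H, W, 3)
    return F.grid_sample(volume, coords, align_corners=True)
```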

* Accepted for MICCAI 2019 

Multiphase Level-Set Loss for Semi-Supervised and Unsupervised Segmentation with Deep Learning

Apr 05, 2019
Boah Kim, Jong Chul Ye

Recent state-of-the-art image segmentation algorithms are mostly based on deep neural networks, thanks to their high performance and fast computation time. However, these methods are usually trained in a supervised manner, which requires a large number of high-quality ground-truth segmentation masks. On the other hand, classical image segmentation approaches such as level-set methods are still useful for generating segmentation masks without labels, but they are usually computationally expensive and often limited in semantic segmentation. In this paper, we propose a novel multiphase level-set loss function for deep-learning-based semantic image segmentation with little or no labeled data. This loss function is based on the observation that the softmax layer of a deep neural network bears a striking similarity to the characteristic function in classical multiphase level-set algorithms. We show that the multiphase level-set loss function enables semi-supervised or even unsupervised semantic segmentation. In addition, our loss function can also be used as a regularization term to enhance supervised semantic segmentation algorithms. Experimental results on multiple datasets demonstrate the effectiveness of the proposed method.
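
Reading the softmax output as a soft characteristic function leads directly to a Chan-Vese-style multiphase loss: for each class, a data term penalizes deviation from the softmax-weighted region mean, and a total-variation term on the softmax maps plays the role of the contour-length penalty. The sketch below is a minimal rendering of this idea with illustrative coefficients; the paper's exact loss may differ.

```python
import torch

def multiphase_levelset_loss(probs, image, mu=1e-4):
    """Chan-Vese-style multiphase level-set loss on softmax outputs.
    probs: (B, K, H, W) softmax maps; image: (B, 1, H, W). Illustrative only."""
    eps = 1e-6
    loss = 0.0
    for k in range(probs.size(1)):
        y_k = probs[:, k:k + 1]  # soft characteristic function of region k
        # Softmax-weighted mean intensity of region k.
        c_k = (image * y_k).sum((2, 3), keepdim=True) / (y_k.sum((2, 3), keepdim=True) + eps)
        loss = loss + ((image - c_k) ** 2 * y_k).mean()  # region data term
        # Total-variation term approximating the contour length.
        dx = y_k[:, :, :, 1:] - y_k[:, :, :, :-1]
        dy = y_k[:, :, 1:, :] - y_k[:, :, :-1, :]
        loss = loss + mu * (dx.abs().mean() + dy.abs().mean())
    return loss
```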

* 9 pages, 6 figures, and 3 tables 