
Soochahn Lee


Generation of Structurally Realistic Retinal Fundus Images with Diffusion Models

May 11, 2023
Sojung Go, Younghoon Ji, Sang Jun Park, Soochahn Lee


We introduce a new technique for generating retinal fundus images with anatomically accurate vascular structures using diffusion models. We first generate artery/vein masks to define the vascular structure, on which we then condition the generation of the retinal fundus images. The proposed method produces high-quality images with more realistic vascular structures and, leveraging the strengths of diffusion models, can generate a diverse range of images. We present quantitative evaluations demonstrating the performance improvement from using our method for data augmentation in vessel segmentation and artery/vein classification. We also present Turing test results from clinical experts, showing that our generated images are difficult to distinguish from real images. We believe our method can be applied to construct stand-alone datasets that are free of patient privacy concerns.

* 9 pages, 6 figures 
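The abstract describes a two-stage pipeline: sample an artery/vein mask, then condition a diffusion model on it to synthesize the fundus image. The sketch below illustrates that conditioning idea with a toy PyTorch denoiser and a simplified DDPM reverse loop; the channel-concatenation conditioning, the tiny network, and the noise schedule are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class CondDenoiser(nn.Module):
    """Toy conditional denoiser: predicts noise from a noisy image and an A/V mask."""
    def __init__(self, img_ch=3, mask_ch=2, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(img_ch + mask_ch, width, 3, padding=1), nn.SiLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.SiLU(),
            nn.Conv2d(width, img_ch, 3, padding=1),
        )

    def forward(self, x_t, mask):
        # condition on the vessel mask by channel concatenation (assumption)
        return self.net(torch.cat([x_t, mask], dim=1))

@torch.no_grad()
def sample_fundus(denoiser, mask, steps=50):
    """Simplified DDPM-style reverse process conditioned on an artery/vein mask."""
    x = torch.randn(mask.size(0), 3, mask.size(2), mask.size(3))
    betas = torch.linspace(1e-4, 2e-2, steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    for t in reversed(range(steps)):
        eps = denoiser(x, mask)                                    # predicted noise
        x = (x - betas[t] / (1 - alpha_bars[t]).sqrt() * eps) / alphas[t].sqrt()
        if t > 0:
            x = x + betas[t].sqrt() * torch.randn_like(x)          # add noise except at the last step
    return x

mask = (torch.rand(1, 2, 64, 64) > 0.9).float()                    # stand-in artery/vein mask
image = sample_fundus(CondDenoiser(), mask)
```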

Extraction of Coronary Vessels in Fluoroscopic X-Ray Sequences Using Vessel Correspondence Optimization

Jul 28, 2022
Seung Yeon Shin, Soochahn Lee, Kyoung Jin Noh, Il Dong Yun, Kyoung Mu Lee

We present a method to extract coronary vessels from fluoroscopic X-ray sequences. Given the vessel structure for the source frame, vessel correspondence candidates in the subsequent frame are generated by a novel hierarchical search scheme to overcome the aperture problem. Optimal correspondences are determined within a Markov random field optimization framework. Post-processing is performed to extract vessel branches newly visible due to the inflow of contrast agent. Quantitative and qualitative evaluation conducted on a dataset of 18 sequences demonstrates the effectiveness of the proposed method.

* MICCAI 2016 
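As a rough illustration of the Markov random field optimization step, the sketch below treats the vessel centerline as a chain-structured MRF over per-point correspondence candidates and solves it exactly with dynamic programming (Viterbi); the unary and pairwise cost definitions are placeholder assumptions, not the paper's energy.

```python
import numpy as np

def match_vessel_chain(unary, candidates, smooth_weight=1.0):
    """unary: (N, K) appearance cost of assigning centerline point i to candidate k.
    candidates: (N, K, 2) candidate 2-D positions in the next frame.
    Returns one candidate index per point minimizing the chain MRF energy."""
    N, K = unary.shape
    cost = unary[0].copy()
    back = np.zeros((N, K), dtype=int)
    for i in range(1, N):
        # pairwise term (assumption): neighboring correspondences should stay close
        disp = candidates[i][None, :, :] - candidates[i - 1][:, None, :]
        pairwise = smooth_weight * np.linalg.norm(disp, axis=-1)   # (K, K)
        total = cost[:, None] + pairwise
        back[i] = total.argmin(axis=0)
        cost = total.min(axis=0) + unary[i]
    labels = np.empty(N, dtype=int)
    labels[-1] = int(cost.argmin())
    for i in range(N - 1, 0, -1):
        labels[i - 1] = back[i, labels[i]]
    return labels

# toy example: 5 centerline points, 3 candidates each
rng = np.random.default_rng(0)
labels = match_vessel_chain(rng.random((5, 3)), rng.random((5, 3, 2)))
```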

Generative Residual Attention Network for Disease Detection

Oct 25, 2021
Euyoung Kim, Soochahn Lee, Kyoung Mu Lee


Accurate identification and localization of abnormalities in radiology images play a critical role in computer-aided diagnosis (CAD) systems. Building a highly generalizable system usually requires a large amount of data with high-quality annotations, including disease-specific global and localization information. However, in medical imaging, only a limited number of high-quality images and annotations are available due to the expense of annotation. In this paper, we address this problem by presenting a novel approach for disease generation in X-rays using conditional generative adversarial learning. Specifically, given a chest X-ray image from a source domain, we generate a corresponding radiology image in a target domain while preserving the identity of the patient. We then use the generated X-ray image in the target domain to augment our training and improve detection performance. We also present a unified framework that simultaneously performs disease generation and localization. We evaluate the proposed approach on the X-ray image dataset provided by the Radiological Society of North America (RSNA), surpassing state-of-the-art baseline detection algorithms.

* The paper addresses pneumonia detection using generative modeling. It proposes a novel approach to construct pseudo-paired images and a GAN to generate radio-realistic chest X-ray images. It then leverages the differences between the input and the generated X-ray images as an additional attention map to boost pneumonia detection performance 
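A minimal sketch of the residual-attention idea described above: the difference between an input X-ray and its GAN-translated counterpart is turned into an attention map that re-weights detector features. The generator, backbone, and fusion modules here are stand-ins (assumptions), not the paper's networks.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualAttentionDetector(nn.Module):
    def __init__(self, generator, backbone, feat_ch=16, num_classes=2):
        super().__init__()
        self.generator = generator      # source-to-target domain translator (placeholder)
        self.backbone = backbone        # detector feature extractor (placeholder)
        self.head = nn.Conv2d(feat_ch, num_classes, kernel_size=1)

    def forward(self, x):
        with torch.no_grad():
            x_gen = self.generator(x)                        # GAN-translated X-ray
        residual = (x - x_gen).abs().mean(dim=1, keepdim=True)
        attn = torch.sigmoid(residual)                       # attention map in [0, 1]
        feats = self.backbone(x)
        attn = F.interpolate(attn, size=feats.shape[-2:], mode="bilinear",
                             align_corners=False)
        feats = feats * (1.0 + attn)                         # residual attention re-weighting
        return self.head(feats)

# toy usage with stand-in modules
gen = nn.Identity()
backbone = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU())
model = ResidualAttentionDetector(gen, backbone)
logits = model(torch.randn(1, 1, 256, 256))
```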

Scale Space Approximation in Convolutional Neural Networks for Retinal Vessel Segmentation

Oct 18, 2018
Kyoung Jin Noh, Sang Jun Park, Soochahn Lee


Retinal images have the highest resolution and clarity among medical images. Thus, vessel analysis in retinal images may facilitate early diagnosis and treatment of many chronic diseases. In this paper, we propose a novel multi-scale residual convolutional neural network structure based on a scale-space approximation (SSA) block of layers, comprising subsampling and subsequent upsampling, for multi-scale representation. Through analysis in the frequency domain, we show that this block structure is a close approximation of Gaussian filtering, the operation to achieve scale variations in scale-space theory. Experimental evaluations demonstrate that the proposed network outperforms current state-of-the-art methods. Ablative analysis shows that the SSA is indeed an important factor in performance improvement.

* 10 pages, 7 figures 
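The sketch below shows one plausible form of an SSA block, following the abstract's description of subsampling followed by upsampling as an approximation of Gaussian filtering; the specific layer choices and the residual combination are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SSABlock(nn.Module):
    """Subsample then upsample to approximate Gaussian smoothing at one scale."""
    def __init__(self, channels, factor=2):
        super().__init__()
        self.factor = factor
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        h, w = x.shape[-2:]
        y = F.avg_pool2d(x, self.factor)                      # subsample
        y = self.conv(y)                                      # process at the coarse scale
        y = F.interpolate(y, size=(h, w), mode="bilinear",
                          align_corners=False)                # upsample back
        return x + y                                          # residual combination (assumption)

# stack SSA blocks with different factors for a multi-scale representation
x = torch.randn(1, 32, 128, 128)
multi_scale = [SSABlock(32, f)(x) for f in (2, 4, 8)]
```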

Deep Vessel Segmentation By Learning Graphical Connectivity

Jun 06, 2018
Seung Yeon Shin, Soochahn Lee, Il Dong Yun, Kyoung Mu Lee


We propose a novel deep-learning-based system for vessel segmentation. Existing methods using CNNs have mostly relied on local appearances learned on the regular image grid, without considering the graphical structure of vessel shape. To address this, we incorporate a graph convolutional network into a unified CNN architecture, where the final segmentation is inferred by combining the different types of features. The proposed method can be applied to expand any type of CNN-based vessel segmentation method to enhance the performance. Experiments show that the proposed method outperforms the current state-of-the-art methods on two retinal image datasets as well as a coronary artery X-ray angiography dataset.
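A minimal sketch of the described combination of a CNN with a graph convolutional network: pixel features are sampled at node locations, propagated through a simple GCN layer over a vessel-like graph, and fused back with the CNN features for the final segmentation. The node sampling, adjacency construction, and fusion here are simplified assumptions.

```python
import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, node_feats, adj):
        # symmetric normalization: D^{-1/2} (A + I) D^{-1/2}
        a_hat = adj + torch.eye(adj.size(0))
        d_inv_sqrt = a_hat.sum(dim=1).rsqrt()
        a_norm = d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :]
        return torch.relu(self.lin(a_norm @ node_feats))

cnn = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
gcn = SimpleGCNLayer(16, 16)
fuse = nn.Conv2d(32, 1, 1)                                  # combine CNN and graph features

x = torch.randn(1, 3, 64, 64)
feats = cnn(x)                                              # (1, 16, 64, 64)
coords = torch.randint(0, 64, (50, 2))                      # 50 sampled node locations
node_feats = feats[0][:, coords[:, 0], coords[:, 1]].t()    # (50, 16) features at node pixels
adj = (torch.cdist(coords.float(), coords.float()) < 8).float()  # neighbors within 8 px
node_out = gcn(node_feats, adj)                             # (50, 16)

# scatter graph features back onto the grid and fuse with CNN features
graph_map = torch.zeros_like(feats)
graph_map[0][:, coords[:, 0], coords[:, 1]] = node_out.t()
segmentation = torch.sigmoid(fuse(torch.cat([feats, graph_map], dim=1)))
```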


Joint Weakly and Semi-Supervised Deep Learning for Localization and Classification of Masses in Breast Ultrasound Images

Oct 10, 2017
Seung Yeon Shin, Soochahn Lee, Il Dong Yun, Kyoung Mu Lee


We propose a framework for localization and classification of masses in breast ultrasound (BUS) images. In particular, we simultaneously use a weakly annotated dataset and a relatively small strongly annotated dataset to train a convolutional neural network detector. We have experimentally found that mass detectors trained with small, strongly annotated datasets are easily overfitted, whereas those trained with large, weakly annotated datasets present a non-trivial problem. To overcome these problems, we jointly use datasets with different characteristics in a hybrid manner. Consequently, a sophisticated weakly and semi-supervised training scenario is introduced with appropriate training loss selection. Experimental results show that the proposed method successfully localizes and classifies masses while requiring less effort in annotation work. The influences of each component in the proposed framework are also validated by conducting an ablative analysis. Although the proposed method is intended for masses in BUS images, it can also be applied as a general framework to train computer-aided detection and diagnosis systems for a wide variety of image modalities, target organs, and diseases.
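A minimal sketch of the hybrid loss selection suggested by the abstract: strongly annotated samples contribute both localization and classification losses, while weakly annotated samples contribute only an image-level classification loss. The loss forms and masking are simplified assumptions, not the paper's exact training scenario.

```python
import torch
import torch.nn.functional as F

def hybrid_loss(box_pred, box_gt, cls_logits, cls_gt, is_strong):
    """Select the per-sample loss according to the annotation type."""
    cls_loss = F.cross_entropy(cls_logits, cls_gt, reduction="none")
    loc_loss = F.smooth_l1_loss(box_pred, box_gt, reduction="none").mean(dim=1)
    # weak samples: classification only; strong samples: classification + localization
    per_sample = cls_loss + is_strong.float() * loc_loss
    return per_sample.mean()

# toy batch: 2 strongly annotated and 2 weakly annotated samples
box_pred = torch.randn(4, 4)
box_gt = torch.randn(4, 4)                 # masked out for weak samples
cls_logits = torch.randn(4, 2)
cls_gt = torch.randint(0, 2, (4,))
is_strong = torch.tensor([1, 1, 0, 0])
loss = hybrid_loss(box_pred, box_gt, cls_logits, cls_gt, is_strong)
```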
