Jae Young Lee

SimCol3D -- 3D Reconstruction during Colonoscopy Challenge

Jul 20, 2023
Anita Rau, Sophia Bano, Yueming Jin, Pablo Azagra, Javier Morlana, Edward Sanderson, Bogdan J. Matuszewski, Jae Young Lee, Dong-Jae Lee, Erez Posner, Netanel Frank, Varshini Elangovan, Sista Raviteja, Zhengwen Li, Jiquan Liu, Seenivasan Lalithkumar, Mobarakol Islam, Hongliang Ren, José M. M. Montiel, Danail Stoyanov

Colorectal cancer is one of the most common cancers in the world. While colonoscopy is an effective screening technique, navigating an endoscope through the colon to detect polyps is challenging. A 3D map of the observed surfaces could enhance the identification of unscreened colon tissue and serve as a training platform. However, reconstructing the colon from video footage remains unsolved due to numerous factors such as self-occlusion, reflective surfaces, lack of texture, and tissue deformation that limit feature-based methods. Learning-based approaches hold promise as robust alternatives, but they require extensive datasets. By establishing a benchmark, the 2022 EndoVis sub-challenge SimCol3D aimed to facilitate data-driven depth and pose prediction during colonoscopy. The challenge was hosted as part of MICCAI 2022 in Singapore. Six teams from around the world, representing both academia and industry, participated in the three sub-challenges: synthetic depth prediction, synthetic pose prediction, and real pose prediction. This paper describes the challenge, the submitted methods, and their results. We show that depth prediction in virtual colonoscopy is robustly solvable, while pose estimation remains an open research question.
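
As a rough illustration of how depth predictions in such a benchmark are typically scored, the sketch below computes two standard monocular-depth error metrics (RMSE and absolute relative error) on synthetic depth maps. It is a generic example under stated assumptions, not the official SimCol3D evaluation code; the array shapes and value ranges are hypothetical.

```python
# Hypothetical sketch of common depth-error metrics of the kind used in
# depth-prediction benchmarks; not the SimCol3D evaluation protocol.
import numpy as np

def depth_metrics(pred, gt, mask=None):
    """Compute RMSE and absolute relative error between predicted and ground-truth depth."""
    if mask is None:
        mask = gt > 0  # ignore invalid (zero) depth pixels
    pred, gt = pred[mask], gt[mask]
    rmse = np.sqrt(np.mean((pred - gt) ** 2))
    abs_rel = np.mean(np.abs(pred - gt) / gt)
    return {"rmse": float(rmse), "abs_rel": float(abs_rel)}

if __name__ == "__main__":
    gt = np.random.uniform(0.01, 0.2, size=(256, 256))     # synthetic depths (assumed metres)
    pred = gt + np.random.normal(0, 0.005, size=gt.shape)   # a noisy stand-in "prediction"
    print(depth_metrics(pred, gt))
```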

Lightweight Monocular Depth Estimation via Token-Sharing Transformer

Jun 09, 2023
Dong-Jae Lee, Jae Young Lee, Hyounguk Shon, Eojindl Yi, Yeong-Hun Park, Sung-Sik Cho, Junmo Kim

Depth estimation is an important task in various robotics systems and applications. In mobile robotics systems, monocular depth estimation is desirable since a single RGB camera can be deployed at low cost and in a compact size. Because of this significant and growing need, many lightweight monocular depth estimation networks have been proposed for mobile robotics systems. While most lightweight monocular depth estimation methods have been developed using convolutional neural networks, the Transformer has recently been adopted for monocular depth estimation as well. However, the Transformer's large parameter count and computational cost hinder deployment on embedded devices. In this paper, we present the Token-Sharing Transformer (TST), an architecture that uses the Transformer for monocular depth estimation and is optimized especially for embedded devices. The proposed TST utilizes global token sharing, which enables the model to obtain accurate depth predictions with high throughput on embedded devices. Experimental results show that TST outperforms existing lightweight monocular depth estimation methods. On the NYU Depth v2 dataset, TST delivers depth maps at up to 63.4 FPS on the NVIDIA Jetson Nano and 142.6 FPS on the NVIDIA Jetson TX2, with lower errors than existing methods. Furthermore, TST achieves real-time depth estimation of high-resolution images on the Jetson TX2 with competitive results.

* ICRA 2023 
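
The sketch below illustrates one plausible reading of the global-token-sharing idea: a small set of learnable tokens summarizes the whole image once, and every patch feature then cross-attends to those shared tokens before a depth head. It is a minimal PyTorch sketch of the concept only, not the authors' TST architecture; all module names and sizes are assumptions.

```python
# Minimal, hypothetical sketch of "global token sharing" for monocular depth:
# a few learnable global tokens summarize the image, then all patch features
# read those shared tokens. Not the authors' TST architecture.
import torch
import torch.nn as nn

class SharedGlobalTokenDepth(nn.Module):
    def __init__(self, dim=64, num_tokens=8, num_heads=4):
        super().__init__()
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=8, stride=8)   # cheap patch embedding
        self.global_tokens = nn.Parameter(torch.randn(1, num_tokens, dim))
        self.summarize = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.broadcast = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.head = nn.Linear(dim, 1)                                    # per-patch depth

    def forward(self, x):
        b = x.size(0)
        feat = self.patch_embed(x)                      # (B, C, H/8, W/8)
        h, w = feat.shape[-2:]
        feat = feat.flatten(2).transpose(1, 2)          # (B, N, C) patch tokens
        g = self.global_tokens.expand(b, -1, -1)        # shared global tokens
        g, _ = self.summarize(g, feat, feat)            # tokens attend to all patches once
        out, _ = self.broadcast(feat, g, g)             # every patch reads the shared tokens
        return self.head(out).transpose(1, 2).reshape(b, 1, h, w)

if __name__ == "__main__":
    print(SharedGlobalTokenDepth()(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 1, 28, 28])
```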

Fix the Noise: Disentangling Source Feature for Controllable Domain Translation

Mar 21, 2023
Dongyeun Lee, Jae Young Lee, Doyeon Kim, Jaehyun Choi, Jaejun Yoo, Junmo Kim

Recent studies show strong generative performance in domain translation, especially when transfer learning techniques are applied to an unconditional generator. However, controlling the balance between different domain features with a single model is still challenging. Existing methods often require additional models, which is computationally demanding and leads to unsatisfactory visual quality. In addition, they have restricted control steps, which prevents a smooth transition. In this paper, we propose a new approach for high-quality domain translation with better controllability. The key idea is to preserve source features within a disentangled subspace of a target feature space. This allows our method to smoothly control the degree to which it preserves source features while generating images from an entirely new domain using only a single model. Our extensive experiments show that the proposed method can produce more consistent and realistic images than previous works and maintain precise controllability over different levels of transformation. The code is available at https://github.com/LeeDongYeun/FixNoise.

* Accepted by CVPR 2023. The code is available at https://github.com/LeeDongYeun/FixNoise. Extended from arXiv:2204.14079 (AICC workshop at CVPR 2022) 
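
One way to picture the "smooth control" claim is as a scalar knob that interpolates the per-layer noise inputs between a fixed anchor (whose subspace preserves source features) and freshly sampled noise. The snippet below is a hedged sketch of that idea only; the commented-out generator call and the noise shapes are hypothetical stand-ins, not the released FixNoise/StyleGAN interface (see the linked repository for the actual code).

```python
# Hedged sketch: a scalar alpha interpolates each per-layer noise map between a
# fixed "anchor" noise (source-preserving subspace) and fresh random noise.
# The generator interface and shapes are hypothetical.
import torch

def interpolate_noise(anchor_noise, alpha):
    """alpha=1.0 -> anchor (source-preserving) noise, alpha=0.0 -> fully random noise."""
    return [alpha * n + (1.0 - alpha) * torch.randn_like(n) for n in anchor_noise]

# Toy usage with stand-in shapes (one noise map per synthesis layer).
anchor = [torch.randn(1, 1, 2 ** k, 2 ** k) for k in range(2, 8)]
for alpha in (0.0, 0.5, 1.0):
    noise = interpolate_noise(anchor, alpha)
    # images = generator(latent, noise=noise)  # hypothetical noise-conditioned generator call
    print(alpha, [tuple(n.shape) for n in noise][:2])
```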

I See-Through You: A Framework for Removing Foreground Occlusion in Both Sparse and Dense Light Field Images

Jan 16, 2023
Jiwan Hur, Jae Young Lee, Jaehyun Choi, Junmo Kim

A light field (LF) camera captures rich information about a scene. Using this information, the LF de-occlusion (LF-DeOcc) task aims to reconstruct an occlusion-free center-view image. Existing LF-DeOcc studies mainly focus on sparsely sampled (sparse) LF images, where most occluded regions are visible in other views thanks to the large disparity. In this paper, we extend LF-DeOcc to a more challenging setting: densely sampled (dense) LF images taken by a micro-lens-based portable LF camera. Due to the small disparity range of dense LF images, most of the background regions are invisible in any view. To apply LF-DeOcc to both kinds of LF data, we propose a framework, ISTY, which is divided into three roles: (1) extract LF features, (2) define the occlusion, and (3) inpaint occluded regions. Dividing the framework into three specialized components according to these roles makes development and analysis easier. Furthermore, the proposed framework yields an explainable intermediate representation, an occlusion mask, which is useful for comprehensive analysis of the model and, by manipulating the mask, for other applications. In experiments, qualitative and quantitative results show that the proposed framework outperforms state-of-the-art LF-DeOcc methods on both sparse and dense LF datasets.

* WACV 2023 
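
To make the three-role decomposition concrete, the sketch below wires up placeholder modules for (1) LF feature extraction, (2) occlusion-mask prediction, and (3) inpainting of the center view. The module internals and tensor shapes are assumptions for illustration, not the ISTY architecture.

```python
# Hypothetical sketch of the three-role decomposition: feature extraction,
# occlusion-mask prediction, and inpainting of the center view. Placeholder
# modules only, not the ISTY architecture.
import torch
import torch.nn as nn

class LFDeOcc(nn.Module):
    def __init__(self, num_views=25, dim=32):
        super().__init__()
        # (1) LF feature extractor: stack all sub-aperture views along channels.
        self.extract = nn.Conv2d(3 * num_views, dim, 3, padding=1)
        # (2) Occlusion definer: per-pixel probability that the center view is occluded.
        self.occlusion = nn.Sequential(nn.Conv2d(dim, 1, 3, padding=1), nn.Sigmoid())
        # (3) Inpainter: fills masked regions of the center view using LF features.
        self.inpaint = nn.Conv2d(dim + 3 + 1, 3, 3, padding=1)

    def forward(self, views):                      # views: (B, V, 3, H, W)
        b, v, c, h, w = views.shape
        center = views[:, v // 2]                  # center sub-aperture image
        feat = self.extract(views.reshape(b, v * c, h, w))
        mask = self.occlusion(feat)                # explainable intermediate occlusion mask
        masked = center * (1.0 - mask)             # drop (likely) occluded pixels
        out = self.inpaint(torch.cat([feat, masked, mask], dim=1))
        return out, mask

if __name__ == "__main__":
    img, m = LFDeOcc()(torch.randn(1, 25, 3, 64, 64))
    print(img.shape, m.shape)   # (1, 3, 64, 64) (1, 1, 64, 64)
```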

Fix the Noise: Disentangling Source Feature for Transfer Learning of StyleGAN

May 02, 2022
Dongyeun Lee, Jae Young Lee, Doyeon Kim, Jaehyun Choi, Junmo Kim

Transfer learning of StyleGAN has recently shown great potential to solve diverse tasks, especially in domain translation. Previous methods utilized a source model by swapping or freezing weights during transfer learning; however, they are limited in visual quality and in controlling source features. In other words, they require additional models that are computationally demanding and have restricted control steps that prevent a smooth transition. In this paper, we propose a new approach to overcome these limitations. Instead of swapping or freezing, we introduce a simple feature matching loss to improve generation quality. In addition, to control the degree of source features, we train a target model with the proposed strategy, FixNoise, to preserve the source features only in a disentangled subspace of the target feature space. Owing to the disentangled feature space, our method can smoothly control the degree of source features within a single model. Extensive experiments demonstrate that the proposed method can generate more consistent and realistic images than previous works.

* Accepted at CVPR 2022 Workshop on AI for Content Creation (AICC 2022) 
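
The feature matching loss mentioned in the abstract can be sketched as an L2 distance between corresponding intermediate feature maps of the frozen source generator and the adapted target generator, computed with the noise inputs fixed to a shared anchor. The stand-in feature tensors below are hypothetical; in practice they would come from the two generators run on the same latent and the same fixed noise.

```python
# Hedged sketch of a simple feature matching loss between intermediate feature
# maps of a frozen source generator and an adapted target generator. The toy
# tensors are stand-ins, not outputs of the official StyleGAN/FixNoise code.
import torch
import torch.nn.functional as F

def feature_matching_loss(source_feats, target_feats):
    """L2 distance between corresponding intermediate feature maps."""
    return sum(F.mse_loss(t, s.detach()) for s, t in zip(source_feats, target_feats))

# Toy usage with stand-in feature maps (same latent and same fixed anchor noise assumed).
src = [torch.randn(2, 64, 16, 16), torch.randn(2, 32, 32, 32)]
tgt = [f + 0.1 * torch.randn_like(f) for f in src]
print(feature_matching_loss(src, tgt).item())
```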

Integral Policy Iterations for Reinforcement Learning Problems in Continuous Time and Space

May 09, 2017
Jae Young Lee, Richard S. Sutton

Policy iteration (PI) is a recursive process of policy evaluation and improvement for solving an optimal decision-making problem, e.g., a reinforcement learning (RL) or optimal control problem, and has served as a foundation for developing RL methods. Motivated by integral PI (IPI) schemes in optimal control and by RL methods in continuous time and space (CTS), this paper proposes on-policy IPI to solve the general RL problem in CTS, with the environment modeled by an ordinary differential equation (ODE). In this continuous domain, we also propose four off-policy IPI methods---two are the ideal PI forms that use advantage and Q-functions, respectively, and the other two are natural extensions of existing off-policy IPI schemes to our general RL framework. Compared to the IPI methods in optimal control, the proposed IPI schemes can be applied to more general situations and do not require an initial stabilizing policy to run; they are also strongly relevant to RL algorithms in CTS such as advantage updating, Q-learning, and value-gradient-based (VGB) greedy policy improvement. Our on-policy IPI is basically model-based but can be made partially model-free; each off-policy method is also either partially or completely model-free. The mathematical properties of the IPI methods---admissibility, monotone improvement, and convergence towards the optimal solution---are all rigorously proven, together with the equivalence of on- and off-policy IPI. Finally, the IPI methods are simulated with an inverted-pendulum model to support the theory and verify the performance.

* 20 pages, 2 figures (i.e., 6 sub-figures), 2 tables, 5 main ideal algorithms, and 1 algorithm for implementation. For a summary, a short simplified RLDM-conf. version is available at <http://web.yonsei.ac.kr/jyounglee/Conference/RLDM2017.PDF> 
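
For intuition about the evaluation/improvement loop that the paper generalizes, the toy sketch below runs classical model-based policy iteration (Kleinman iteration) on a continuous-time linear-quadratic problem. Unlike the paper's integral, partially model-free schemes, this sketch needs the model (A, B) and an initial stabilizing gain; it illustrates continuous-time PI in general, not the paper's algorithms.

```python
# Toy model-based continuous-time policy iteration (Kleinman iteration) on a
# linear-quadratic problem, as an illustration of the evaluation/improvement
# loop that integral PI generalizes. Requires the model and a stabilizing gain.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [0.0, -0.1]])   # toy linear dynamics  x' = A x + B u
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                              # state cost
R = np.array([[1.0]])                      # control cost
K = np.array([[1.0, 1.0]])                 # initial stabilizing feedback u = -K x

for _ in range(20):
    A_K = A - B @ K
    # Policy evaluation: solve A_K^T P + P A_K + Q + K^T R K = 0 for P.
    P = solve_continuous_lyapunov(A_K.T, -(Q + K.T @ R @ K))
    # Policy improvement: greedy gain with respect to the evaluated value function.
    K_new = np.linalg.solve(R, B.T @ P)
    if np.linalg.norm(K_new - K) < 1e-9:
        break
    K = K_new

print("converged gain K:", K)
```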