
Il Yong Chun


End-to-End Driving via Self-Supervised Imitation Learning Using Camera and LiDAR Data

Aug 28, 2023
Jin Bok Park, Jinkyu Lee, Muhyun Back, Hyunmin Han, David T. Ma, Sang Min Won, Sung Soo Hwang, Il Yong Chun


In autonomous driving, the end-to-end (E2E) approach, which predicts vehicle control signals directly from sensor data, is rapidly gaining attention. Learning a safe E2E driving system requires an extensive amount of driving data and human intervention. Vehicle control data is constructed from many hours of human driving, making large vehicle control datasets challenging to build: publicly available driving datasets are often collected with limited driving scenes, and vehicle control data is typically accessible only to vehicle manufacturers. To address these challenges, this paper proposes the first self-supervised learning framework, self-supervised imitation learning (SSIL), that can learn E2E driving networks without using driving command data. To construct pseudo steering angle data, the proposed SSIL predicts a pseudo target from the vehicle's poses at the current and previous time points, estimated with light detection and ranging (LiDAR) sensors. Our numerical experiments demonstrate that the proposed SSIL framework achieves E2E driving accuracy comparable to its supervised learning counterpart. In addition, our qualitative analyses using a conventional visual explanation tool show that NNs trained by the proposed SSIL and its supervised counterpart attend to similar objects when making predictions.
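The pseudo-label idea can be sketched as follows. This is a minimal, hypothetical example, assuming 2D poses (x, y, yaw) estimated by LiDAR odometry and taking the direction of travel between consecutive poses, relative to the previous heading, as the pseudo steering target; the paper's exact pseudo-target construction may differ.

```python
import math

def pseudo_steering_angle(prev_pose, curr_pose):
    """Pseudo steering target from two estimated vehicle poses.

    Each pose is (x, y, yaw) in a global frame, e.g. from LiDAR odometry.
    The direction of travel between consecutive poses, expressed relative
    to the previous heading, serves as a self-supervised steering label.
    """
    dx = curr_pose[0] - prev_pose[0]
    dy = curr_pose[1] - prev_pose[1]
    travel_dir = math.atan2(dy, dx) - prev_pose[2]
    # Wrap the angle to (-pi, pi].
    return math.atan2(math.sin(travel_dir), math.cos(travel_dir))

# A vehicle moving straight along its heading yields a zero pseudo label.
print(pseudo_steering_angle((0.0, 0.0, 0.0), (1.0, 0.0, 0.0)))  # 0.0
```

Such labels can then replace human driving commands in an ordinary imitation-learning loss.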

* 20 pages, 8 figures 

Self-supervised regression learning using domain knowledge: Applications to improving self-supervised denoising in imaging

May 10, 2022
Il Yong Chun, Dongwon Park, Xuehang Zheng, Se Young Chun, Yong Long


Regression, which predicts continuous quantities, is central to applications using computational imaging and computer vision technologies. Yet, the study and understanding of self-supervised learning for regression tasks, with the exception of one particular task, image denoising, have lagged behind. This paper proposes a general self-supervised regression learning (SSRL) framework that enables learning regression neural networks with only input data (without ground-truth target data), using a designable pseudo-predictor that encapsulates domain knowledge of a specific application. The paper underlines the importance of using domain knowledge by showing that, under different settings, a better pseudo-predictor can bring the properties of SSRL closer to those of ordinary supervised learning. Numerical experiments on low-dose computed tomography denoising and camera image denoising demonstrate that the proposed SSRL significantly improves denoising quality over several existing self-supervised denoising methods.
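The core training setup can be sketched as follows: the network is fit to the output of a designable pseudo-predictor rather than to ground-truth targets. The local-average pseudo-predictor below is a hypothetical stand-in; the paper's pseudo-predictor is application-specific.

```python
import numpy as np

rng = np.random.default_rng(0)

def pseudo_predictor(x):
    """Designable pseudo-predictor encoding domain knowledge.

    For denoising, one simple (illustrative) choice is a local average:
    a weak denoiser whose output replaces clean target data.
    """
    kernel = np.ones(3) / 3.0
    return np.convolve(x, kernel, mode="same")

def ssrl_loss(f, x):
    """Self-supervised regression loss: fit f(x) to the pseudo-target g(x)."""
    return float(np.mean((f(x) - pseudo_predictor(x)) ** 2))

# A trivial stand-in "network" (the identity) evaluated on a noisy signal.
x = rng.standard_normal(128)
print(ssrl_loss(lambda z: z, x))
```

In practice `f` would be a regression NN trained by minimizing this loss over many inputs, with no ground-truth targets involved.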

* 17 pages, 16 figures, 2 tables, submitted to IEEE T-IP 

Accelerated MRI With Deep Linear Convolutional Transform Learning

Apr 17, 2022
Hongyi Gu, Burhaneddin Yaman, Steen Moeller, Il Yong Chun, Mehmet Akçakaya


Recent studies show that deep learning (DL) based MRI reconstruction outperforms conventional methods, such as parallel imaging and compressed sensing (CS), in multiple applications. Unlike CS, which is typically implemented with pre-determined linear representations for regularization, DL inherently uses a non-linear representation learned from a large database. Another line of work uses transform learning (TL) to bridge the gap between these two approaches by learning linear representations from data. In this work, we combine ideas from CS, TL, and DL reconstruction to learn deep linear convolutional transforms as part of an algorithm unrolling approach. Using end-to-end training, our results show that the proposed technique can reconstruct MR images at a level comparable to DL methods, while supporting uniform undersampling patterns, unlike conventional CS methods. At inference time, our proposed method relies on convex sparse image reconstruction with a linear representation, which may be beneficial for characterizing robustness, stability, and generalizability.
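One unrolled iteration of this kind of reconstruction can be sketched in 1D. Here the filter `h` is a fixed stand-in for the deep linear convolutional transform that would be learned end-to-end, and each iteration alternates transform-domain sparse coding (soft-thresholding) with a gradient step on the data-fidelity and transform-consistency terms; all parameter names and values are illustrative.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of the l1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def unrolled_tl_recon(y, mask, h, n_iters=10, step=0.5, lam=0.1, thresh=0.05):
    """Sketch of an unrolled reconstruction with a linear convolutional
    transform (1D analogue).

    y    : zero-filled undersampled measurements
    mask : binary sampling mask
    h    : convolutional transform filter (fixed here; learned in the paper)
    """
    x = y.copy()
    for _ in range(n_iters):
        # Sparse code in the transform domain via soft-thresholding.
        z = soft_threshold(np.convolve(x, h, mode="same"), thresh)
        # Gradient of 0.5||mask*x - y||^2 + 0.5*lam*||h*x - z||^2
        # (the flipped filter approximates the transform's adjoint).
        grad_data = mask * (mask * x - y)
        grad_reg = np.convolve(np.convolve(x, h, mode="same") - z,
                               h[::-1], mode="same")
        x = x - step * (grad_data + lam * grad_reg)
    return x
```

End-to-end training would backpropagate through these unrolled iterations to learn `h` and the step/threshold parameters.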

* To be published in 2022 44th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) 

Improved Real-Time Monocular SLAM Using Semantic Segmentation on Selective Frames

Apr 30, 2021
Jinkyu Lee, Muhyun Back, Sung Soo Hwang, Il Yong Chun


Monocular simultaneous localization and mapping (SLAM) is emerging in advanced driver assistance systems and autonomous driving because a single camera is cheap and easy to install. Conventional monocular SLAM faces two major challenges that lead to inaccurate localization and mapping. First, it is challenging to estimate scale in localization and mapping. Second, conventional monocular SLAM uses inappropriate mapping factors, such as dynamic objects and low-parallax areas, in mapping. This paper proposes an improved real-time monocular SLAM method that resolves these challenges by efficiently using deep learning-based semantic segmentation. To achieve real-time execution, the proposed method applies semantic segmentation only to downsampled keyframes, in parallel with the mapping processes. In addition, the proposed method corrects the scales of camera poses and three-dimensional (3D) points using the ground plane estimated from road-labeled 3D points and the real camera height. The proposed method also removes inappropriate corner features labeled as moving objects or lying in low-parallax areas. Experiments with six video sequences demonstrate that the proposed monocular SLAM system achieves significantly more accurate trajectory tracking than state-of-the-art monocular SLAM, and trajectory tracking accuracy comparable to state-of-the-art stereo SLAM.
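The scale-correction step can be sketched as follows: once the ground plane gives an estimated camera height in the (arbitrary) monocular scale, comparing it with the known physical mounting height yields a global scale factor applied to all translations and map points. Function and argument names are hypothetical.

```python
import numpy as np

def correct_scale(pose_translations, points3d, est_cam_height, real_cam_height):
    """Correct the global scale of monocular SLAM output.

    est_cam_height : camera height above the ground plane estimated from
                     road-labeled 3D map points (in the arbitrary SLAM scale)
    real_cam_height: known physical camera mounting height in meters
    """
    s = real_cam_height / est_cam_height
    # Scale every camera translation and every 3D map point uniformly.
    return [t * s for t in pose_translations], points3d * s

# Estimated height 0.5 (arbitrary scale) vs. real 1.5 m -> scale factor 3.
ts, pts = correct_scale([np.array([1.0, 0.0, 2.0])],
                        np.array([[0.0, 0.5, 4.0]]), 0.5, 1.5)
```

Rotations are unaffected by a global scale change, so only translations and points are rescaled.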


Improved and efficient inter-vehicle distance estimation using road gradients of both ego and target vehicles

Apr 01, 2021
Muhyun Back, Jinkyu Lee, Kyuho Bae, Sung Soo Hwang, Il Yong Chun


In advanced driver assistance systems and autonomous driving, it is crucial to estimate distances between an ego vehicle and target vehicles. Existing inter-vehicle distance estimation methods assume that the ego and target vehicles drive on the same ground plane. In practical driving environments, however, they may drive on different ground planes. This paper proposes an inter-vehicle distance estimation framework that can account for slope changes of the road ahead, by estimating the road gradients of both the ego vehicle and the target vehicles and using a 2D object detection deep net. Numerical experiments demonstrate that the proposed method significantly improves distance estimation accuracy and time complexity compared to deep learning-based depth estimation methods.
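The flat-ground baseline that such methods generalize can be sketched with a pinhole model: the image row of a detection's bottom edge fixes a ray, whose intersection with the ground gives the distance. The `ego_pitch` term below only illustrates how the ego road gradient shifts that ray; the paper additionally estimates the target vehicle's road gradient, which this simplified sketch omits.

```python
import math

def inter_vehicle_distance(v_bottom, f, cy, cam_height, ego_pitch=0.0):
    """Pinhole ground-plane distance from a 2D detection's bottom edge.

    v_bottom  : image row of the target's bounding-box bottom (pixels)
    f, cy     : focal length and principal-point row (pixels)
    cam_height: camera height above the road (meters)
    ego_pitch : ego road gradient (radians); tilts the ray before it
                intersects the ground (simplified illustration only)
    """
    # Angle of the ray through the bbox bottom, measured below the optical axis.
    ray_angle = math.atan2(v_bottom - cy, f) + ego_pitch
    return cam_height / math.tan(ray_angle)

# Camera 1.5 m high, f = 1000 px, bbox bottom 100 px below the principal row.
print(inter_vehicle_distance(600.0, 1000.0, 500.0, 1.5))  # 15.0 m
```

When the target sits on a differently sloped plane, the ground-intersection geometry changes, which is exactly the case the paper's framework handles.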

* 5 pages, 3 figures, 2 tables, submitted to IEEE ICAS 2021 

An Improved Iterative Neural Network for High-Quality Image-Domain Material Decomposition in Dual-Energy CT

Dec 02, 2020
Zhipeng Li, Yong Long, Il Yong Chun


Dual-energy computed tomography (DECT) has been widely used in many applications that need material decomposition. Image-domain methods directly decompose material images from high- and low-energy attenuation images and are thus susceptible to noise and artifacts in the attenuation images. To obtain high-quality material images, various data-driven methods have been proposed. Iterative neural network (INN) methods combine regression NNs with a model-based image reconstruction algorithm. INNs reduce the generalization error of (noniterative) deep regression NNs and have achieved high-quality reconstruction in diverse medical imaging applications. BCD-Net is a recent INN architecture that incorporates image refining NNs into the block coordinate descent (BCD) model-based image reconstruction algorithm. We propose a new INN architecture, distinct cross-material BCD-Net, for DECT material decomposition. The proposed architecture uses distinct cross-material convolutional neural networks (CNNs) in its image refining modules and image decomposition physics in its image reconstruction modules. The distinct cross-material CNN refiners incorporate distinct encoding-decoding filters and a cross-material model that captures correlations between different materials. We interpret the distinct cross-material CNN refiner from a patch perspective. Numerical experiments with the extended cardiac-torso (XCAT) phantom and clinical data show that the proposed distinct cross-material BCD-Net significantly improves image quality over several image-domain material decomposition methods, including a conventional model-based image decomposition (MBID) method using an edge-preserving regularizer, a state-of-the-art MBID method using pre-learned material-wise sparsifying transforms, and a noniterative deep CNN denoiser.
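The BCD-Net layer structure can be sketched as an alternation between a refining network and a model-based update. This is a simplified single-image sketch with a small linear model; `refiner` stands in for the paper's learned distinct cross-material CNN, and the decomposition physics is abstracted into the matrix `A`.

```python
import numpy as np

def bcd_net(y, A, refiner, n_layers=3, mu=1.0):
    """Skeleton of a BCD-Net-style iterative NN (simplified sketch).

    Each layer alternates
      (1) an image refining module:  z = refiner(x), and
      (2) a model-based module solving
              argmin_x ||A x - y||^2 + mu ||x - z||^2,
          which for this linear model has the closed form below.
    """
    n = A.shape[1]
    x = np.linalg.lstsq(A, y, rcond=None)[0]      # initial reconstruction
    M = np.linalg.inv(A.T @ A + mu * np.eye(n))   # normal-equation inverse
    for _ in range(n_layers):
        z = refiner(x)                            # image refining module
        x = M @ (A.T @ y + mu * z)                # model-based module
    return x
```

In the paper's setting, the refiner operates jointly on multiple material images so that cross-material correlations can be exploited.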


Momentum-Net for Low-Dose CT Image Reconstruction

Mar 06, 2020
Siqi Ye, Yong Long, Il Yong Chun


This paper applies a recent fast iterative neural network framework, Momentum-Net, with appropriate models to low-dose X-ray computed tomography (LDCT) image reconstruction. At each layer of the proposed Momentum-Net, the model-based image reconstruction module solves a majorized penalized weighted least-squares problem, and the image refining module uses a four-layer convolutional autoencoder. Experimental results with the NIH AAPM-Mayo Clinic Low Dose CT Grand Challenge dataset show that the proposed Momentum-Net architecture significantly improves image reconstruction accuracy compared to a state-of-the-art noniterative image denoising deep neural network (NN) for LDCT, WavResNet. We also investigated applying the spectral normalization technique to image refining NN learning to satisfy the nonexpansive NN property; however, experimental results show that this does not improve the image reconstruction performance of Momentum-Net.

* Five pages author-submitted paper to ICIP 2020 

Momentum-Net: Fast and convergent iterative neural network for inverse problems

Sep 11, 2019
Il Yong Chun, Zhengyu Huang, Hongki Lim, Jeffrey A. Fessler


Iterative neural networks (INNs) are rapidly gaining attention for solving inverse problems in imaging, image processing, and computer vision. INNs combine regression NNs with an iterative model-based image reconstruction (MBIR) algorithm, leading to both good generalization capability and reconstruction quality that outperforms existing MBIR optimization models. This paper proposes the first fast and convergent INN architecture, Momentum-Net, by generalizing a block-wise MBIR algorithm that uses momentum and majorizers with regression NNs. For fast MBIR, Momentum-Net uses momentum terms in extrapolation modules and, via majorizers, noniterative MBIR modules at each layer; each layer of Momentum-Net consists of three core modules: image refining, extrapolation, and MBIR. Momentum-Net guarantees convergence to a fixed point for general differentiable (non)convex MBIR functions (or data-fit terms) and convex feasible sets, under two asymptotic conditions. To account for data-fit variations across training and testing samples, we also propose a regularization parameter selection scheme based on the spectral radius of majorization matrices. Numerical experiments for light-field photography using a focal stack and for sparse-view computed tomography demonstrate that, given identical regression NN architectures, Momentum-Net significantly improves MBIR speed and accuracy over several existing INNs; it significantly improves reconstruction quality compared to a state-of-the-art MBIR method in each application.
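The three-module layer structure can be sketched as follows. This is a simplified linear-model sketch: `refiner` stands in for the paper's regression NN, the FISTA-style momentum schedule is one common choice of extrapolation, and the exact inverse below plays the role of the noniterative majorized MBIR update.

```python
import numpy as np

def momentum_net(y, A, refiner, n_layers=10, mu=1.0):
    """Skeleton of Momentum-Net's per-layer structure (simplified).

    Each layer runs three modules:
      1) extrapolation : momentum step mixing current and previous iterates
      2) image refining: z = refiner(x_bar)
      3) MBIR          : noniterative (majorized) update of
                         ||A x - y||^2 + mu ||x - z||^2.
    """
    n = A.shape[1]
    M = np.linalg.inv(A.T @ A + mu * np.eye(n))   # exact here; majorized in general
    x_prev = x = np.zeros(n)
    t = 1.0
    for _ in range(n_layers):
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        x_bar = x + ((t - 1.0) / t_next) * (x - x_prev)  # extrapolation module
        z = refiner(x_bar)                               # image refining module
        x_prev, t = x, t_next
        x = M @ (A.T @ y + mu * z)                       # MBIR module
    return x
```

In the full method, majorizers make the MBIR module cheap for general data-fit terms, and the refiners are trained layer-wise or end-to-end.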

* 26 pages, 7 figures, 3 algorithms, 3 tables, fixed incorrect ref 

BCD-Net for Low-dose CT Reconstruction: Acceleration, Convergence, and Generalization

Aug 04, 2019
Il Yong Chun, Xuehang Zheng, Yong Long, Jeffrey A. Fessler


Obtaining accurate and reliable images from low-dose computed tomography (CT) is challenging. Regression convolutional neural network (CNN) models learned from training data are increasingly gaining attention in low-dose CT reconstruction. This paper modifies the architecture of an iterative regression CNN, BCD-Net, for fast, stable, and accurate low-dose CT reconstruction, and presents the convergence property of the modified BCD-Net. Numerical results with phantom data show that applying faster numerical solvers to the model-based image reconstruction (MBIR) modules of BCD-Net leads to a faster and more accurate BCD-Net; BCD-Net significantly improves reconstruction accuracy compared to the state-of-the-art MBIR method using learned transforms; and BCD-Net achieves better image quality than a state-of-the-art iterative NN architecture, ADMM-Net. Numerical results with clinical data show that BCD-Net generalizes significantly better than a state-of-the-art deep (noniterative) regression NN, FBPConvNet, which lacks MBIR modules.

* Accepted to MICCAI 2019, and the authors indicated by asterisks (*) equally contributed to this work 