
Hui Ji


Synthesis of realistic fetal MRI with conditional Generative Adversarial Networks

Sep 20, 2022
Marina Fernandez Garcia, Rodrigo Gonzalez Laiz, Hui Ji, Kelly Payette, Andras Jakab

Fetal brain magnetic resonance imaging serves as an emerging modality for prenatal counseling and diagnosis in disorders affecting the brain. Machine learning based segmentation plays an important role in the quantification of brain development. However, a limiting factor is the lack of sufficiently large, labeled training data. Our study explored the application of SPADE, a conditional generative adversarial network (cGAN), which learns the mapping from the label space to the image space. The input to the network was super-resolution T2-weighted cerebral MRI data of 120 fetuses (gestational age range: 20-35 weeks, normal and pathological), annotated for 7 different tissue categories. SPADE networks were trained on 256×256 2D slices of the reconstructed volumes (image and label pairs) in each orthogonal orientation. To combine the generated volumes from each orientation into one image, a simple mean of the outputs of the three networks was taken. Based on the label maps only, we synthesized highly realistic images; however, some finer details, such as small vessels, were not synthesized. A structural similarity index (SSIM) of 0.972±0.016 and a correlation coefficient of 0.974±0.008 were achieved. To demonstrate the capacity of the cGAN to create new anatomical variants, we artificially dilated the ventricles in the segmentation map and created synthetic MRI of different degrees of fetal hydrocephalus. cGANs, such as the SPADE algorithm, allow the generation of hypothetically unseen scenarios and anatomical configurations in the label space, and these data can in turn be utilized for training various machine learning algorithms. In the future, this algorithm could be used to generate large, synthetic datasets representing fetal brain development, which could potentially improve the performance of currently available segmentation networks.
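
The orientation-fusion step described in the abstract is simple enough to sketch directly. Below is a minimal illustration of combining the per-orientation outputs by a voxel-wise mean, where `gen_ax`, `gen_cor`, and `gen_sag` are hypothetical stand-ins for the three trained SPADE slice-to-image generators:

```python
import numpy as np

def synthesize_volume(labels, gen_ax, gen_cor, gen_sag):
    """labels: (D, H, W) label map; each gen_* maps a 2D slice to a 2D image."""
    d, h, w = labels.shape
    vol_ax  = np.stack([gen_ax(labels[i]) for i in range(d)], axis=0)
    vol_cor = np.stack([gen_cor(labels[:, j]) for j in range(h)], axis=1)
    vol_sag = np.stack([gen_sag(labels[:, :, k]) for k in range(w)], axis=2)
    # Simple voxel-wise mean of the three orientation-specific volumes,
    # as described in the abstract.
    return (vol_ax + vol_cor + vol_sag) / 3.0

# Sanity check with identity "generators": the fused volume is the label map.
labels = np.random.randint(0, 7, size=(8, 8, 8))
ident = lambda s: s.astype(float)
assert np.allclose(synthesize_volume(labels, ident, ident, ident), labels)
```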


Dataset-free Deep learning Method for Low-Dose CT Image Reconstruction

May 01, 2022
Qiaoqiao Ding, Hui Ji, Yuhui Quan, Xiaoqun Zhang

Low-dose CT (LDCT) imaging has attracted considerable interest as a means of reducing the patient's exposure to X-ray radiation. In recent years, supervised deep learning has been extensively studied for LDCT image reconstruction; such methods train a network over a dataset containing many pairs of normal-dose and low-dose images. However, the difficulty of collecting many such pairs in a clinical setting limits the practical application of supervised-learning-based methods for LDCT image reconstruction. To address this difficulty, this paper proposes an unsupervised deep learning method for LDCT image reconstruction that does not require any external training data. The proposed method is built on a re-parametrization technique for Bayesian inference via a deep network with random weights, combined with an additional total variation (TV) regularization. Experiments show that the proposed method noticeably outperforms existing dataset-free image reconstruction methods on the test data.
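
As a rough illustration of the dataset-free idea, here is a deep-image-prior-style sketch in PyTorch: a randomly initialized network is fitted to a single measurement under a data-fidelity loss plus TV regularization. The forward operator `A`, measurement `y`, network `net`, and input code `z` are hypothetical placeholders, and the paper's Bayesian re-parametrization of the weights is not reproduced here:

```python
import torch

def tv(x):
    # Anisotropic total variation of a 2D image tensor.
    return (x[1:, :] - x[:-1, :]).abs().sum() + (x[:, 1:] - x[:, :-1]).abs().sum()

def reconstruct(A, y, net, z, lam=1e-3, steps=2000, lr=1e-4):
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x = net(z).squeeze()                          # image from random code z
        loss = ((A(x) - y) ** 2).sum() + lam * tv(x)  # fidelity + TV penalty
        loss.backward()
        opt.step()
    with torch.no_grad():
        return net(z).squeeze()
```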


Fetal Brain Tissue Annotation and Segmentation Challenge Results

Apr 20, 2022
Kelly Payette, Hongwei Li, Priscille de Dumast, Roxane Licandro, Hui Ji, Md Mahfuzur Rahman Siddiquee, Daguang Xu, Andriy Myronenko, Hao Liu, Yuchen Pei, Lisheng Wang, Ying Peng, Juanying Xie, Huiquan Zhang, Guiming Dong, Hao Fu, Guotai Wang, ZunHyan Rieu, Donghyeon Kim, Hyun Gi Kim, Davood Karimi, Ali Gholipour, Helena R. Torres, Bruno Oliveira, João L. Vilaça, Yang Lin, Netanell Avisdris, Ori Ben-Zvi, Dafna Ben Bashat, Lucas Fidon, Michael Aertsen, Tom Vercauteren, Daniel Sobotka, Georg Langs, Mireia Alenyà, Maria Inmaculada Villanueva, Oscar Camara, Bella Specktor Fadida, Leo Joskowicz, Liao Weibin, Lv Yi, Li Xuesong, Moona Mazher, Abdul Qayyum, Domenec Puig, Hamza Kebiri, Zelin Zhang, Xinyi Xu, Dan Wu, KuanLun Liao, YiXuan Wu, JinTai Chen, Yunzhi Xu, Li Zhao, Lana Vasung, Bjoern Menze, Meritxell Bach Cuadra, Andras Jakab

In-utero fetal MRI is emerging as an important tool in the diagnosis and analysis of the developing human brain. Automatic segmentation of the developing fetal brain is a vital step in the quantitative analysis of prenatal neurodevelopment in both research and clinical contexts. However, manual segmentation of cerebral structures is time-consuming and prone to error and inter-observer variability. Therefore, we organized the Fetal Tissue Annotation (FeTA) Challenge in 2021 to encourage the development of automatic segmentation algorithms on an international level. The challenge utilized the FeTA Dataset, an open dataset of fetal brain MRI reconstructions segmented into seven different tissues (external cerebrospinal fluid, grey matter, white matter, ventricles, cerebellum, brainstem, deep grey matter). Twenty international teams participated in this challenge, submitting a total of 21 algorithms for evaluation. In this paper, we provide a detailed analysis of the results from both a technical and a clinical perspective. All participants relied on deep learning methods, mainly U-Nets, with some variability in network architecture, optimization, and image pre- and post-processing. The majority of teams used existing medical imaging deep learning frameworks. The main differences between the submissions were the fine-tuning done during training and the specific pre- and post-processing steps performed. The challenge results showed that almost all submissions performed similarly. Four of the top five teams used ensemble learning methods. However, one team's algorithm, built on an asymmetric U-Net architecture, performed significantly better than the other submissions. This paper provides a first-of-its-kind benchmark for future automatic multi-tissue segmentation algorithms for the developing human brain in utero.

* Results from FeTA Challenge 2021, held at MICCAI; Manuscript submitted 

Gaussian Kernel Mixture Network for Single Image Defocus Deblurring

Oct 31, 2021
Yuhui Quan, Zicong Wu, Hui Ji

Defocus blur is a kind of blur often seen in images; it is challenging to remove because its amount varies spatially. This paper presents an end-to-end deep learning approach for removing defocus blur from a single image, so as to obtain an all-in-focus image for subsequent vision tasks. First, a pixel-wise Gaussian kernel mixture (GKM) model is proposed for representing spatially variant defocus blur kernels in an efficient linear parametric form, with higher accuracy than existing models. Then, a deep neural network called GKMNet is developed by unrolling a fixed-point iteration of GKM-based deblurring. GKMNet is built on a lightweight scale-recurrent architecture, with a scale-recurrent attention module for estimating the mixing coefficients in the GKM for defocus deblurring. Extensive experiments show that GKMNet not only noticeably outperforms existing defocus deblurring methods, but also has advantages in terms of model complexity and computational efficiency.
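
To make the GKM idea concrete, here is a minimal sketch of the forward blur model: the blurred value at each pixel is a pixel-wise convex combination of the image convolved with a small bank of Gaussian kernels. The kernel sigmas and mixing maps below are illustrative assumptions, not the paper's learned values:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gkm_blur(img, mix, sigmas=(1.0, 2.0, 4.0)):
    """img: (H, W); mix: (K, H, W), nonnegative and summing to 1 over K."""
    bank = np.stack([gaussian_filter(img, s) for s in sigmas])  # (K, H, W)
    return (mix * bank).sum(axis=0)  # pixel-wise mixture of blurred images
```

GKMNet's scale-recurrent attention module can be read as predicting the `mix` maps above, after which deblurring amounts to inverting this linear parametric model.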

* Accepted by NeurIPS 2021 

A comparison of automatic multi-tissue segmentation methods of the human fetal brain using the FeTA Dataset

Oct 29, 2020
Kelly Payette, Priscille de Dumast, Hamza Kebiri, Ivan Ezhov, Johannes C. Paetzold, Suprosanna Shit, Asim Iqbal, Romesa Khan, Raimund Kottke, Patrice Grehten, Hui Ji, Levente Lanczi, Marianna Nagy, Monika Beresova, Thi Dao Nguyen, Giancarlo Natalucci, Theofanis Karayannis, Bjoern Menze, Meritxell Bach Cuadra, Andras Jakab

It is critical to quantitatively analyse the developing human fetal brain in order to fully understand neurodevelopment in both normal fetuses and those with congenital disorders. To facilitate this analysis, automatic multi-tissue fetal brain segmentation algorithms are needed, which in turn require open databases of segmented fetal brains. Here we introduce a publicly available database of 50 manually segmented pathological and non-pathological fetal magnetic resonance brain volume reconstructions, spanning a range of gestational ages (20 to 33 weeks) and segmented into 7 different tissue categories (external cerebrospinal fluid, grey matter, white matter, ventricles, cerebellum, deep grey matter, brainstem/spinal cord). In addition, we quantitatively evaluate the accuracy of several automatic multi-tissue segmentation algorithms for the developing human fetal brain. Four research groups participated, submitting a total of 10 algorithms, demonstrating the benefits of the database for the development of automatic algorithms.

* Paper currently under review 

Deep Bilateral Retinex for Low-Light Image Enhancement

Jul 04, 2020
Jinxiu Liang, Yong Xu, Yuhui Quan, Jingwen Wang, Haibin Ling, Hui Ji

Low-light images, i.e. images captured in low-light conditions, suffer from very poor visibility caused by low contrast, color distortion, and significant measurement noise. Low-light image enhancement aims to improve the visibility of such images. As the measurement noise in low-light images is usually significant yet complex, with spatially varying characteristics, handling the noise effectively is an important yet challenging problem in low-light image enhancement. Based on the Retinex decomposition of natural images, this paper proposes a deep learning method for low-light image enhancement with a particular focus on handling the measurement noise. The basic idea is to train a neural network to generate a set of pixel-wise operators, defined in the bilateral space, for simultaneously predicting the noise and the illumination layer. Such an integrated approach allows an accurate prediction of the reflectance layer in the presence of significant spatially varying measurement noise. Extensive experiments on several benchmark datasets show that the proposed method is very competitive with state-of-the-art methods and has a significant advantage over them when processing images captured in extremely low lighting conditions.
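
A minimal sketch of the Retinex decomposition underlying the method: an observed low-light image is modeled as reflectance times illumination plus noise, so given predictions of the illumination and noise layers, the enhanced reflectance follows by element-wise division. `pred_illum` and `pred_noise` are hypothetical stand-ins for the bilateral-space operators the paper learns:

```python
import numpy as np

def enhance(img, pred_illum, pred_noise, eps=1e-4):
    """img: (H, W, 3) in [0, 1]; both predictors map img -> same-shape array."""
    L = np.clip(pred_illum(img), eps, 1.0)   # illumination layer
    N = pred_noise(img)                      # measurement-noise layer
    return np.clip((img - N) / L, 0.0, 1.0)  # reflectance = enhanced image
```

Predicting the noise jointly with the illumination, rather than denoising as a separate step, is what lets the reflectance estimate stay accurate under heavy, spatially varying noise.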

* 15 pages 

Rethinking Medical Image Reconstruction via Shape Prior, Going Deeper and Faster: Deep Joint Indirect Registration and Reconstruction

Dec 16, 2019
Jiulong Liu, Angelica I. Aviles-Rivero, Hui Ji, Carola-Bibiane Schönlieb

Indirect image registration is a promising technique for improving image reconstruction quality by providing a shape prior for the reconstruction task. In this paper, we propose a novel hybrid method that seeks to reconstruct high quality images from few measurements whilst requiring low computational cost. To this end, our framework intertwines the indirect registration and reconstruction tasks in a single functional. It is based on two major novelties. Firstly, we introduce a model based on deep nets to solve the indirect registration problem, in which the inversion and registration mappings are recurrently connected through a fixed-point iteration based on sparse optimisation. Secondly, we introduce specific inversion blocks, which use the explicit physical forward operator, to map the acquired measurements to the image reconstruction. We also introduce registration blocks based on deep nets to predict the registration parameters and warp transformation accurately and efficiently. We demonstrate, through extensive numerical and visual experiments, that our framework significantly outperforms classic reconstruction schemes and other bi-task methods, in terms of both image quality and computational time. Finally, we show the generalisation capabilities of our approach by demonstrating its performance on fast Magnetic Resonance Imaging (MRI), sparse-view computed tomography (CT), and low-dose CT with measurements far below the Nyquist limit.
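
Schematically, the recurrent coupling described above can be sketched as an alternation between an inversion step and a registration step. All callables below are hypothetical placeholders, not the paper's trained blocks:

```python
def joint_reconstruct(y, template, invert, register, warp, n_iter=10):
    """y: measurements; template: shape prior; invert/register/warp are
    placeholder callables standing in for the paper's network blocks."""
    x = invert(y, template)                   # initial reconstruction
    for _ in range(n_iter):
        theta = register(template, x)         # estimate registration parameters
        x = invert(y, warp(template, theta))  # refine with the warped prior
    return x
```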


Convolutional Neural Network on Semi-Regular Triangulated Meshes and its Application to Brain Image Data

Apr 15, 2019
Caoqiang Liu, Hui Ji, Anqi Qiu

We developed a convolutional neural network (CNN) on semi-regular triangulated meshes whose vertices have 6 neighbours. The key blocks of the proposed CNN, including convolution and down-sampling, are defined directly in the vertex domain. By exploiting the ordering property of semi-regular meshes, the convolution is defined on the vertex domain with strong motivation from the spatial definition of classic convolution. Moreover, the down-sampling of a semi-regular mesh embedded in 3D Euclidean space can achieve a down-sampling rate of 4, 16, 64, etc. We demonstrated the use of this vertex-based graph CNN for the classification of mild cognitive impairment (MCI) and Alzheimer's disease (AD) based on 3169 MRI scans from the Alzheimer's Disease Neuroimaging Initiative (ADNI), and compared its performance with that of the spectral graph CNN.
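
One plausible reading of a vertex-domain convolution with ordered 6-neighbourhoods is sketched below: as in classic 2D convolution, one shared weight is applied per neighbour position (plus a centre weight) across all vertices. The neighbour index array and weight shapes are illustrative assumptions:

```python
import numpy as np

def vertex_conv(feat, nbr_idx, w_center, w_nbr):
    """feat: (V, C); nbr_idx: (V, 6) ordered neighbour indices;
    w_center: (C, C_out); w_nbr: (6, C, C_out) -> (V, C_out)."""
    out = feat @ w_center                    # centre-vertex contribution
    for k in range(6):                       # one shared weight per slot
        out += feat[nbr_idx[:, k]] @ w_nbr[k]
    return out
```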

* conference 

Removing out-of-focus blur from a single image

Aug 28, 2018
Guodong Xu, Chaoqiang Liu, Hui Ji

Reproducing an all-in-focus image from an image with defocused regions is of practical value in many applications, e.g., digital photography and robotics. Using the output of an existing defocus map estimator, existing approaches first segment a defocused image into multiple regions, each blurred by a Gaussian kernel with a different variance, and then deblur each region using the corresponding Gaussian kernel. In this paper, we propose a blind deconvolution method specifically designed for removing defocus blur from a single image, by providing effective solutions to two critical problems: 1) suppressing the artifacts caused by segmentation error, by introducing an additional variable regularized by a weighted $\ell_0$-norm; and 2) more accurate defocus kernel estimation, using non-parametric symmetry and low-rank based constraints on the kernel. Experiments on real datasets show the advantages of the proposed method over existing ones, thanks to its effective treatment of the two issues above during deconvolution.
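
As a small illustration of the symmetry constraint named in the abstract: out-of-focus point spread functions are approximately centrosymmetric, so a kernel estimate can be projected onto symmetric, unit-sum kernels by averaging with its 180-degree rotation. This sketch covers only that one constraint; the low-rank constraint is omitted:

```python
import numpy as np

def symmetrize_kernel(k):
    """Project a kernel estimate onto centrosymmetric, unit-sum kernels."""
    k_sym = np.clip(0.5 * (k + np.rot90(k, 2)), 0, None)  # avg with 180° flip
    return k_sym / k_sym.sum()                            # renormalize
```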


Weighted total variation based convex clustering

Aug 28, 2018
Guodong Xu, Yu Xia, Hui Ji

Data clustering is a fundamental problem with a wide range of applications. Standard methods, e.g. the $k$-means method, usually require solving a non-convex optimization problem. Recently, total variation based convex relaxations of the $k$-means model have emerged as an attractive alternative for data clustering. However, the existing results on their exact clustering property, i.e., the condition imposed on the data so that the method provably identifies all cluster memberships correctly, are only applicable to very specific data and are much more restrictive than those of some other methods. This paper revisits total variation based convex clustering by proposing a related convex model based on a weighted sum of $\ell_1$ norms. Its exact clustering property, established in this paper in both deterministic and probabilistic settings, is applicable to general data and is much sharper than existing results. These results provide good insights for advancing research on convex clustering. Moreover, experiments demonstrate that the proposed convex model has better empirical performance than standard clustering methods, showing its potential in practice.
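
For intuition, here is a sketch of a weighted sum-of-$\ell_1$-norms convex clustering objective: each data point gets its own centroid, and weighted pairwise $\ell_1$ penalties pull centroids together so that points in the same cluster end up sharing one centroid. The plain subgradient descent and the given weights `W` are illustrative assumptions, not the paper's solver or weighting scheme:

```python
import numpy as np

def convex_cluster(X, W, lam=0.1, steps=500, lr=1e-2):
    """X: (n, d) data; W: (n, n) symmetric nonnegative weights -> (n, d) centroids.
    Minimizes ||U - X||_F^2 + lam * sum_{i,j} W[i,j] * ||u_i - u_j||_1."""
    U = X.copy()
    for _ in range(steps):
        g = 2.0 * (U - X)                                  # fidelity gradient
        diff_sign = np.sign(U[:, None, :] - U[None, :, :])  # (n, n, d)
        g += lam * (W[:, :, None] * diff_sign).sum(axis=1)  # pairwise l1 subgradient
        U -= lr * g
    return U
```

Points whose returned centroids (nearly) coincide are then assigned to the same cluster.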
