Joseph V. Hajnal

An automated pipeline for quantitative T2* fetal body MRI and segmentation at low field

Aug 09, 2023
Kelly Payette, Alena Uus, Jordina Aviles Verdera, Carla Avena Zampieri, Megan Hall, Lisa Story, Maria Deprez, Mary A. Rutherford, Joseph V. Hajnal, Sebastien Ourselin, Raphael Tomi-Tricot, Jana Hutter

Fetal Magnetic Resonance Imaging at low field strengths is emerging as an exciting direction in perinatal health. Clinical low-field (0.55T) scanners are beneficial for fetal imaging due to their reduced susceptibility-induced artefacts, increased T2* values, and wider bore (widening access for the increasingly obese pregnant population). However, the lack of standard automated image processing tools, such as segmentation and reconstruction, hampers wider clinical use. In this study, we introduce a semi-automatic pipeline using quantitative MRI of the fetal body at low field strength, enabling fast and detailed quantitative T2* relaxometry analysis of all major fetal body organs. Multi-echo dynamic sequences of the fetal body were acquired and reconstructed into a single high-resolution volume using deformable slice-to-volume reconstruction, generating both structural and quantitative T2* 3D volumes. A neural network trained with a semi-supervised approach automatically segments these fetal body 3D volumes into ten different organs (Dice scores > 0.74 for 8 out of 10 organs). The T2* values revealed a strong relationship with gestational age (GA) in the lungs, liver, and kidney parenchyma (R^2 > 0.5). The pipeline was used successfully across a wide range of GAs (17-40 weeks) and is robust to motion artefacts. Low-field fetal MRI can be used to perform advanced MRI analysis and is a viable option for clinical scanning.

* Accepted by MICCAI 2023 
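
The core quantitative step of such a pipeline, fitting a mono-exponential decay S(TE) = S0·exp(-TE/T2*) to the multi-echo signal in each voxel, can be sketched with a log-linear least-squares fit. This is a generic illustration, not the authors' code; the function name and the log-linear (rather than non-linear) fitting choice are assumptions.

```python
import numpy as np

def fit_t2star(signals, echo_times):
    """Voxel-wise mono-exponential T2* fit via log-linear least squares.

    signals:    (n_echoes, n_voxels) magnitude signals
    echo_times: (n_echoes,) echo times in ms
    Returns (s0, t2star), each of shape (n_voxels,); T2* in ms.
    """
    te = np.asarray(echo_times, dtype=float)
    log_s = np.log(np.clip(np.asarray(signals, dtype=float), 1e-12, None))
    # log S = log S0 - TE / T2*  ->  ordinary linear regression in TE
    design = np.stack([np.ones_like(te), -te], axis=1)
    coeffs, *_ = np.linalg.lstsq(design, log_s, rcond=None)
    s0 = np.exp(coeffs[0])
    # clip the slope so noisy voxels do not produce negative T2*
    t2star = 1.0 / np.clip(coeffs[1], 1e-12, None)
    return s0, t2star
```

At low SNR a non-linear fit or noise-floor correction is often preferred; the log-linear version is shown only for brevity.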

Placenta Segmentation in Ultrasound Imaging: Addressing Sources of Uncertainty and Limited Field-of-View

Jun 29, 2022
Veronika A. Zimmer, Alberto Gomez, Emily Skelton, Robert Wright, Gavin Wheeler, Shujie Deng, Nooshin Ghavami, Karen Lloyd, Jacqueline Matthew, Bernhard Kainz, Daniel Rueckert, Joseph V. Hajnal, Julia A. Schnabel

Automatic segmentation of the placenta in fetal ultrasound (US) is challenging due to (i) the high diversity of placental appearance, (ii) the restricted image quality of US, which results in highly variable reference annotations, and (iii) the limited field-of-view of US, which prohibits whole-placenta assessment at late gestation. In this work, we address these three challenges with a multi-task learning approach that combines the classification of placental location (e.g., anterior, posterior) and semantic placenta segmentation in a single convolutional neural network. Through the classification task, the model can learn from larger and more diverse datasets while improving the accuracy of the segmentation task, particularly in limited training-set conditions. With this approach, we investigate the variability in annotations from multiple raters and show that our automatic segmentations (Dice of 0.86 for anterior and 0.83 for posterior placentas) achieve human-level performance compared to intra- and inter-observer variability. Lastly, our approach can deliver whole-placenta segmentation using a multi-view US acquisition pipeline consisting of three stages: multi-probe image acquisition, image fusion, and image segmentation. This results in high-quality segmentation of larger structures, such as the placenta, in US with reduced image artifacts, extending beyond the field-of-view of single probes.

* 21 pages (18 + appendix), 13 figures (9 + appendix) 

Fetal MRI by robust deep generative prior reconstruction and diffeomorphic registration: application to gestational age prediction

Oct 29, 2021
Lucilio Cordero-Grande, Juan Enrique Ortuño-Fisac, Alena Uus, Maria Deprez, Andrés Santos, Joseph V. Hajnal, María Jesús Ledesma-Carbayo

Magnetic resonance imaging of the whole fetal body and placenta is limited by different sources of motion affecting the womb. Usual scanning techniques employ single-shot multi-slice sequences, where anatomical information in different slices may be subject to different deformations, contrast variations or artifacts. Volumetric reconstruction formulations have been proposed to correct for these factors, but they must accommodate non-homogeneous and non-isotropic sampling, so regularization becomes necessary. Thus, in this paper we propose a deep generative prior for robust volumetric reconstruction, integrated with a diffeomorphic volume-to-slice registration method. Experiments are performed to validate our contributions and compare with a state-of-the-art method on a cohort of $72$ fetal datasets in the range of $20-36$ weeks gestational age. Results suggest improved image resolution and more accurate prediction of gestational age at scan when compared with a state-of-the-art reconstruction method. In addition, gestational age prediction results from our volumetric reconstructions compare favourably with existing brain-based approaches, with boosted accuracy when integrating information from organs other than the brain. Namely, a mean absolute error of $0.618$ weeks ($R^2=0.958$) is achieved when combining fetal brain and trunk information.

* 23 pages, 15 figures, 1 table 

Magnetization Transfer-Mediated MR Fingerprinting

Apr 06, 2021
Daniel J. West, Gastao Cruz, Rui P. A. G. Teixeira, Torben Schneider, Jacques-Donald Tournier, Joseph V. Hajnal, Claudia Prieto, Shaihan J. Malik

Purpose: Magnetization transfer (MT) and inhomogeneous MT (ihMT) contrasts are used in MRI to provide information about macromolecular tissue content. In particular, MT is sensitive to macromolecules and ihMT appears to be specific to myelinated tissue. This study proposes a technique to characterize MT and ihMT properties from a single acquisition, producing both semiquantitative contrast ratios and quantitative parameter maps. Theory and Methods: Building upon previous work that uses multiband radiofrequency (RF) pulses to efficiently generate ihMT contrast, we propose a cyclic-steady-state approach that cycles between multiband and single-band pulses to boost the achieved contrast. The resulting time-variable signals are reminiscent of a magnetic resonance fingerprinting (MRF) acquisition, except that the signal fluctuations are entirely mediated by magnetization transfer effects. A dictionary-based low-rank inversion method is used to reconstruct the resulting images and to produce both semiquantitative MT ratio (MTR) and ihMT ratio (ihMTR) maps, as well as quantitative parameter estimates corresponding to an ihMT tissue model. Results: Phantom and in vivo brain data acquired at 1.5T demonstrate the expected contrast trends, with ihMTR maps showing contrast more specific to white matter (WM), as has been reported by others. Quantitative estimation of the semisolid fraction and dipolar T1 was also possible and yielded measurements consistent with literature values in the brain. Conclusions: By cycling between multiband and single-band pulses, an entirely magnetization-transfer-mediated 'fingerprinting' method was demonstrated. This proof-of-concept approach can be used to generate semiquantitative maps and to quantitatively estimate some macromolecular-specific tissue parameters.

* 34 Pages and 15 Figures (Including Supporting Information), Submitted to Magnetic Resonance in Medicine (MRM) 
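
The dictionary-matching step at the heart of fingerprinting approaches can be sketched as a normalized inner-product search over simulated signal evolutions. This is a generic MRF-style illustration under assumed names, not the authors' low-rank inversion reconstruction.

```python
import numpy as np

def dictionary_match(signal, dictionary, params):
    """Return the tissue parameter of the dictionary atom whose simulated
    signal evolution best matches the measured fingerprint.

    signal:     (n_timepoints,) measured signal evolution
    dictionary: (n_atoms, n_timepoints) simulated evolutions
    params:     (n_atoms,) tissue parameter per atom (e.g. dipolar T1)
    """
    # normalize atoms and signal so matching is insensitive to overall scale
    d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    s = signal / np.linalg.norm(signal)
    corr = np.abs(d @ np.conj(s))      # magnitudes of inner products
    best = int(np.argmax(corr))
    return params[best], float(corr[best])
```

In a full reconstruction this match is performed per voxel on images recovered from undersampled data; only the matching itself is shown here.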

Complementary Time-Frequency Domain Networks for Dynamic Parallel MR Image Reconstruction

Dec 22, 2020
Chen Qin, Jinming Duan, Kerstin Hammernik, Jo Schlemper, Thomas Küstner, René Botnar, Claudia Prieto, Anthony N. Price, Joseph V. Hajnal, Daniel Rueckert

Purpose: To introduce a novel deep-learning-based approach for fast and high-quality dynamic multi-coil MR reconstruction by learning a complementary time-frequency domain network that exploits spatio-temporal correlations simultaneously from complementary domains. Theory and Methods: Dynamic parallel MR image reconstruction is formulated as a multi-variable minimisation problem, where the data is regularised both in the combined temporal-Fourier and spatial (x-f) domain and in the spatio-temporal image (x-t) domain. An iterative algorithm based on the variable splitting technique is derived, which alternates among signal de-aliasing steps in the x-f and x-t spaces, a closed-form point-wise data consistency step, and a weighted coupling step. The iterative model is embedded into a deep recurrent neural network which learns to recover the image by exploiting spatio-temporal redundancies in the complementary domains. Results: Experiments were performed on two datasets of highly undersampled multi-coil short-axis cardiac cine MRI scans. Results demonstrate that the proposed method outperforms current state-of-the-art approaches both quantitatively and qualitatively. The proposed model also generalises well to data acquired from a different scanner and to data with pathologies not seen in the training set. Conclusion: This work shows the benefit of reconstructing dynamic parallel MRI in complementary time-frequency domains with deep neural networks. The method can effectively and robustly reconstruct high-quality images from highly undersampled dynamic multi-coil data ($16 \times$ and $24 \times$ acceleration, yielding 15 s and 10 s scan times respectively) with fast reconstruction speed (2.8 s). This could potentially facilitate fast single-breath-hold clinical 2D cardiac cine imaging.

* In submission 
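
The two complementary domains are linked by a voxel-wise temporal Fourier transform; a minimal numpy sketch follows (function names are illustrative, not the paper's network code):

```python
import numpy as np

def xt_to_xf(x_t):
    """Temporal FFT: (nt, ny, nx) dynamic image series -> x-f domain,
    where temporally correlated signals concentrate into few frequencies."""
    return np.fft.fftshift(np.fft.fft(x_t, axis=0), axes=0)

def xf_to_xt(x_f):
    """Inverse transform back to the spatio-temporal (x-t) domain."""
    return np.fft.ifft(np.fft.ifftshift(x_f, axes=0), axis=0)
```

The de-aliasing sub-networks of such a method operate in each domain in turn; only the lossless change of domain is shown here.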

Data consistency networks for (calibration-less) accelerated parallel MR image reconstruction

Sep 25, 2019
Jo Schlemper, Jinming Duan, Cheng Ouyang, Chen Qin, Jose Caballero, Joseph V. Hajnal, Daniel Rueckert

We present simple reconstruction networks for multi-coil data by extending deep cascades of CNNs and exploiting the data consistency layer. In particular, we propose two variants, one inspired by POCSENSE and the other calibration-less. We show that the proposed approaches are competitive with the state of the art, both quantitatively and qualitatively.

* Presented at ISMRM 27th Annual Meeting & Exhibition (Abstract #4663) 
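
A hard data-consistency layer of the kind these networks build on can be sketched for the single-coil case (the POCSENSE-inspired multi-coil variant additionally applies coil sensitivity maps, omitted here; names are illustrative):

```python
import numpy as np

def data_consistency(recon, kspace_acq, mask):
    """Hard data-consistency step: keep the network's k-space estimate at
    unsampled locations and reinsert the acquired samples where mask == 1.

    recon:      (ny, nx) complex image estimate (e.g. a CNN output)
    kspace_acq: (ny, nx) acquired k-space, zero-filled where unsampled
    mask:       (ny, nx) binary sampling mask
    """
    k_est = np.fft.fft2(recon)
    k_dc = np.where(mask.astype(bool), kspace_acq, k_est)
    return np.fft.ifft2(k_dc)
```

Interleaving this step with CNN de-aliasing blocks guarantees the final image agrees exactly with the measured k-space samples.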

dAUTOMAP: decomposing AUTOMAP to achieve scalability and enhance performance

Sep 25, 2019
Jo Schlemper, Ilkay Oksuz, James R. Clough, Jinming Duan, Andrew P. King, Julia A. Schnabel, Joseph V. Hajnal, Daniel Rueckert

AUTOMAP is a promising generalized reconstruction approach; however, it is not scalable, which limits its practicality. We present dAUTOMAP, a novel way of decomposing the domain transformation of AUTOMAP so that the model scales linearly. We show that dAUTOMAP outperforms AUTOMAP with significantly fewer parameters.

* Presented at ISMRM 27th Annual Meeting & Exhibition (Abstract #658) 
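
The scaling argument behind the decomposition can be illustrated with exact 1D DFT matrices: a dense AUTOMAP-style layer over an N×N input needs on the order of N^4 weights, whereas two 1D transforms applied along rows and columns need 2N^2. This sketch uses fixed DFT matrices rather than learned ones, so it illustrates the factorisation, not the learned dAUTOMAP layer itself.

```python
import numpy as np

def separable_transform(x, w_rows, w_cols):
    """Apply a 1D transform along each axis in turn: w_cols @ x @ w_rows^T.

    With exact 1D DFT matrices this composition reproduces the full 2D
    DFT while storing only 2 * N^2 weights instead of (N^2)^2.
    """
    return w_cols @ (x @ w_rows.T)

def dft_matrix(n_samples):
    """Exact 1D DFT matrix, i.e. what a learned 1D transform would ideally
    approximate for a fully sampled Cartesian acquisition."""
    n = np.arange(n_samples)
    return np.exp(-2j * np.pi * np.outer(n, n) / n_samples)
```

For N = 128, the dense map would require (128*128)^2 ≈ 2.7e8 weights versus 2*128^2 = 32768 for the decomposed form.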

Self-supervised Recurrent Neural Network for 4D Abdominal and In-utero MR Imaging

Aug 28, 2019
Tong Zhang, Laurence H. Jackson, Alena Uus, James R. Clough, Lisa Story, Mary A. Rutherford, Joseph V. Hajnal, Maria Deprez

Accurately estimating and correcting motion artifacts is crucial for 3D image reconstruction of abdominal and in-utero magnetic resonance imaging (MRI). State-of-the-art methods are based on slice-to-volume registration (SVR), where multiple 2D image stacks are acquired in three orthogonal orientations. In this work, we present a novel reconstruction pipeline that needs only one orientation of 2D MRI scans and can reconstruct the full high-resolution image without masking or registration steps. The framework consists of two main stages. First, respiratory motion is estimated using a self-supervised recurrent neural network, which learns the respiratory signals naturally embedded in the asymmetry relationship of neighbouring slices and clusters the slices according to respiratory state. Second, we train a 3D deconvolutional network for super-resolution (SR) reconstruction of the sparsely selected 2D images using an integrated reconstruction and total variation loss. We evaluate the classification accuracy on 5 simulated images and compare our results with the SVR method on adult abdominal and in-utero MRI scans. The results show that the proposed pipeline can accurately estimate the respiratory state and reconstruct 4D SR volumes with performance better than or similar to the 3D SVR pipeline while using less than 20% of sparsely selected slices. The method has great potential to transform 4D abdominal and in-utero MRI in clinical practice.

* Accepted by MICCAI 2019 workshop on Machine Learning for Medical Image Reconstruction 

Generalising Deep Learning MRI Reconstruction across Different Domains

Jan 31, 2019
Cheng Ouyang, Jo Schlemper, Carlo Biffi, Gavin Seegoolam, Jose Caballero, Anthony N. Price, Joseph V. Hajnal, Daniel Rueckert

We investigate the robustness of deep-learning-based MRI reconstruction when tested on unseen contrasts and organs. We then propose to generalise the network by training with large, publicly available natural image datasets with synthesised phase information, achieving cross-domain reconstruction performance competitive with domain-specific training. To explain this generalisation mechanism, we also analyse patch sets from the different training datasets.

* Accepted for ISBI2019 as a 1-page abstract 

Convolutional Recurrent Neural Networks for Dynamic MR Image Reconstruction

Oct 14, 2018
Chen Qin, Jo Schlemper, Jose Caballero, Anthony Price, Joseph V. Hajnal, Daniel Rueckert

Accelerating the data acquisition of dynamic magnetic resonance imaging (MRI) leads to a challenging ill-posed inverse problem, which has received great interest from both the signal processing and machine learning communities over the last decades. The key to the problem is how to exploit the temporal correlation of the MR sequence to resolve aliasing artefacts. Traditionally, this observation led to a formulation as a non-convex optimisation problem, which was solved using iterative algorithms. Recently, however, deep-learning-based approaches have gained significant popularity due to their ability to solve general inverse problems. In this work, we propose a novel convolutional recurrent neural network (CRNN) architecture which reconstructs high-quality cardiac MR images from highly undersampled k-space data by jointly exploiting the dependencies of the temporal sequences as well as the iterative nature of traditional optimisation algorithms. In particular, the proposed architecture embeds the structure of traditional iterative algorithms, efficiently modelling the recurrence of the iterative reconstruction stages by using recurrent hidden connections over such iterations. In addition, spatio-temporal dependencies are simultaneously learnt by exploiting bidirectional recurrent hidden connections across time sequences. The proposed algorithm learns both the temporal dependency and the iterative reconstruction process effectively with only a very small number of parameters, while outperforming current MR reconstruction methods in terms of computational complexity, reconstruction accuracy and speed.

* Published in IEEE Transactions on Medical Imaging 