Qingjie Meng

DeepMesh: Mesh-based Cardiac Motion Tracking using Deep Learning

Sep 25, 2023
Qingjie Meng, Wenjia Bai, Declan P O'Regan, and Daniel Rueckert

3D motion estimation from cine cardiac magnetic resonance (CMR) images is important for the assessment of cardiac function and the diagnosis of cardiovascular diseases. Current state-of-the-art methods focus on estimating dense pixel-/voxel-wise motion fields in image space, which ignores the fact that motion estimation is only relevant and useful within the anatomical objects of interest, e.g., the heart. In this work, we model the heart as a 3D mesh consisting of epi- and endocardial surfaces. We propose a novel learning framework, DeepMesh, which propagates a template heart mesh to a subject space and estimates the 3D motion of the heart mesh from CMR images for individual subjects. In DeepMesh, the heart mesh of the end-diastolic frame of an individual subject is first reconstructed from the template mesh. Mesh-based 3D motion fields with respect to the end-diastolic frame are then estimated from 2D short- and long-axis CMR images. By developing a differentiable mesh-to-image rasterizer, DeepMesh is able to leverage 2D shape information from multiple anatomical views for 3D mesh reconstruction and mesh motion estimation. The proposed method estimates vertex-wise displacement and thus maintains vertex correspondences between time frames, which is important for the quantitative assessment of cardiac function across different subjects and populations. We evaluate DeepMesh on CMR images acquired from the UK Biobank. We focus on 3D motion estimation of the left ventricle in this work. Experimental results show that the proposed method quantitatively and qualitatively outperforms other image-based and mesh-based cardiac motion tracking methods.
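
To make the mesh-motion idea concrete, below is a minimal PyTorch sketch (not the authors' code) of vertex-wise motion estimation: image features are sampled at each template-vertex location and a small MLP regresses a per-vertex 3D displacement, so vertex correspondence between the end-diastolic frame and later frames is preserved by construction. The module name VertexMotionHead, the channel sizes and the assumption that vertices are already normalised to [-1, 1] are all illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VertexMotionHead(nn.Module):
    """Regress a 3D displacement for every mesh vertex from fused image features."""

    def __init__(self, feat_channels: int = 64, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_channels + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # per-vertex displacement (dx, dy, dz)
        )

    def forward(self, feat_volume: torch.Tensor, verts: torch.Tensor) -> torch.Tensor:
        # feat_volume: (B, C, D, H, W) features fused from the multi-view images.
        # verts: (B, N, 3) template-mesh vertices, normalised to [-1, 1] in (x, y, z) order.
        grid = verts.view(verts.shape[0], -1, 1, 1, 3)                   # (B, N, 1, 1, 3)
        sampled = F.grid_sample(feat_volume, grid, align_corners=True)   # (B, C, N, 1, 1)
        sampled = sampled.view(feat_volume.shape[0], -1, verts.shape[1]).permute(0, 2, 1)
        disp = self.mlp(torch.cat([sampled, verts], dim=-1))             # (B, N, 3)
        return verts + disp  # deformed mesh; vertex order (correspondence) is unchanged
```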

Mesh-based 3D Motion Tracking in Cardiac MRI using Deep Learning

Sep 05, 2022
Qingjie Meng, Wenjia Bai, Tianrui Liu, Declan P O'Regan, Daniel Rueckert

3D motion estimation from cine cardiac magnetic resonance (CMR) images is important for the assessment of cardiac function and diagnosis of cardiovascular diseases. Most previous methods focus on estimating pixel-/voxel-wise motion fields in the full image space, ignoring the fact that motion estimation is mainly relevant and useful within the object of interest, e.g., the heart. In this work, we model the heart as a 3D geometric mesh and propose a novel deep learning-based method that can estimate 3D motion of the heart mesh from 2D short- and long-axis CMR images. By developing a differentiable mesh-to-image rasterizer, the method is able to leverage the anatomical shape information from 2D multi-view CMR images for 3D motion estimation. The differentiability of the rasterizer enables us to train the method end-to-end. One advantage of the proposed method is that by tracking the motion of each vertex, it is able to keep the vertex correspondence of 3D meshes between time frames, which is important for quantitative assessment of cardiac function on the mesh. We evaluate the proposed method on CMR images acquired from the UK Biobank study. Experimental results show that the proposed method quantitatively and qualitatively outperforms both conventional and learning-based cardiac motion tracking methods.
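
The abstract's key ingredient is the differentiable mesh-to-image step. The sketch below is a deliberately simplified stand-in: mesh vertices are orthographically projected onto an imaging plane and splatted with a Gaussian kernel to form a soft 2D mask that can be compared with a 2D contour or segmentation of that view, and the resulting loss is differentiable with respect to the vertex positions. The real rasterizer described in the paper operates on mesh faces, so this vertex-splatting version is for intuition only.

```python
import torch

def soft_vertex_mask(verts_2d: torch.Tensor, h: int, w: int, sigma: float = 1.5) -> torch.Tensor:
    """verts_2d: (N, 2) projected vertex coordinates in pixel units (x, y).

    Returns an (h, w) soft occupancy map that is differentiable w.r.t. verts_2d.
    """
    ys = torch.arange(h, dtype=verts_2d.dtype, device=verts_2d.device)
    xs = torch.arange(w, dtype=verts_2d.dtype, device=verts_2d.device)
    grid_y, grid_x = torch.meshgrid(ys, xs, indexing="ij")            # each (h, w)
    # Squared distance from every pixel to every projected vertex: (N, h, w).
    d2 = (grid_x[None] - verts_2d[:, 0, None, None]) ** 2 \
       + (grid_y[None] - verts_2d[:, 1, None, None]) ** 2
    # Soft union over vertices keeps gradients flowing to all of them.
    return 1.0 - torch.prod(1.0 - torch.exp(-d2 / (2 * sigma ** 2)), dim=0)

# The soft mask can then be compared with a binary 2D contour/segmentation of the
# corresponding CMR view, e.g. with a Dice or binary cross-entropy loss, and the
# gradients flow back to the 3D vertex positions through the projection.
```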

MulViMotion: Shape-aware 3D Myocardial Motion Tracking from Multi-View Cardiac MRI

Jul 29, 2022
Qingjie Meng, Chen Qin, Wenjia Bai, Tianrui Liu, Antonio de Marvao, Declan P O'Regan, Daniel Rueckert

Recovering the 3D motion of the heart from cine cardiac magnetic resonance (CMR) imaging enables the assessment of regional myocardial function and is important for understanding and analyzing cardiovascular disease. However, 3D cardiac motion estimation is challenging because the acquired cine CMR images are usually 2D slices, which limits the accurate estimation of through-plane motion. To address this problem, we propose a novel multi-view motion estimation network (MulViMotion), which integrates 2D cine CMR images acquired in short-axis and long-axis planes to learn a consistent 3D motion field of the heart. In the proposed method, a hybrid 2D/3D network is built to generate dense 3D motion fields by learning fused representations from multi-view images. To ensure that the motion estimation is consistent in 3D, a shape regularization module is introduced during training, where shape information from multi-view images is exploited to provide weak supervision to 3D motion estimation. We extensively evaluate the proposed method on 2D cine CMR images from 580 subjects of the UK Biobank study for 3D motion tracking of the left ventricular myocardium. Experimental results show that the proposed method quantitatively and qualitatively outperforms competing methods.
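
A minimal sketch, not the published MulViMotion architecture, of the hybrid 2D/3D idea: each view is encoded by a shared 2D CNN, the per-view feature maps are tiled along the through-plane axis into a common volume, and a 3D decoder regresses a dense three-channel motion field. The class name HybridMotionNet, the channel sizes and the assumption that both views share the same in-plane size are placeholders.

```python
import torch
import torch.nn as nn

class HybridMotionNet(nn.Module):
    """Toy hybrid 2D/3D network: 2D per-view encoders, 3D decoder for the motion field."""

    def __init__(self, depth: int = 16, feat: int = 32):
        super().__init__()
        self.depth = depth
        self.enc2d = nn.Sequential(
            nn.Conv2d(1, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
        )
        self.dec3d = nn.Sequential(
            nn.Conv3d(2 * feat, feat, 3, padding=1), nn.ReLU(),
            nn.Conv3d(feat, 3, 3, padding=1),  # dense (dx, dy, dz) per voxel
        )

    def forward(self, sax: torch.Tensor, lax: torch.Tensor) -> torch.Tensor:
        # sax, lax: (B, 1, H, W) short-axis and long-axis frames (same in-plane size assumed).
        f_sax = self.enc2d(sax)                                      # (B, F, H, W)
        f_lax = self.enc2d(lax)
        # Tile the 2D features along the through-plane axis to build a shared volume.
        vol = torch.cat([
            f_sax.unsqueeze(2).expand(-1, -1, self.depth, -1, -1),
            f_lax.unsqueeze(2).expand(-1, -1, self.depth, -1, -1),
        ], dim=1)                                                    # (B, 2F, D, H, W)
        return self.dec3d(vol)                                       # 3D motion field
```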

Video Summarization through Reinforcement Learning with a 3D Spatio-Temporal U-Net

Jun 19, 2021
Tianrui Liu, Qingjie Meng, Jun-Jie Huang, Athanasios Vlontzos, Daniel Rueckert, Bernhard Kainz

Intelligent video summarization algorithms make it possible to quickly convey the most relevant information in videos by identifying the most essential and explanatory content while removing redundant video frames. In this paper, we introduce the 3DST-UNet-RL framework for video summarization. A 3D spatio-temporal U-Net is used to efficiently encode spatio-temporal information of the input videos for downstream reinforcement learning (RL). An RL agent learns from spatio-temporal latent scores and predicts actions for keeping or rejecting a video frame in a video summary. We investigate whether real/inflated 3D spatio-temporal CNN features are better suited to learn representations from videos than commonly used 2D image features. Our framework can operate in both a fully unsupervised mode and a supervised training mode. We analyse the impact of prescribed summary lengths and show experimental evidence for the effectiveness of 3DST-UNet-RL on two commonly used general video summarization benchmarks. We also apply our method to a medical video summarization task. The proposed video summarization method has the potential to save storage costs of ultrasound screening videos as well as to increase efficiency when browsing patient video data during retrospective analysis or audit, without losing essential information.
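
The keep/reject decision process can be illustrated with a short, hedged sketch: per-frame latent scores parameterise a Bernoulli policy, actions are sampled, and a REINFORCE-style policy gradient is taken with respect to a summary reward. The reward function, the class name FramePolicy and the latent dimensionality are placeholders rather than the paper's exact choices.

```python
import torch
import torch.nn as nn

class FramePolicy(nn.Module):
    """Map per-frame latent features to keep probabilities."""

    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(latent_dim, 1), nn.Sigmoid())

    def forward(self, latents: torch.Tensor) -> torch.Tensor:
        # latents: (T, latent_dim) per-frame features from the spatio-temporal encoder.
        return self.head(latents).squeeze(-1)                        # (T,) keep probabilities

def reinforce_step(policy, latents, reward_fn, optimizer):
    probs = policy(latents)
    dist = torch.distributions.Bernoulli(probs=probs)
    actions = dist.sample()                                          # 1 = keep frame, 0 = reject
    reward = float(reward_fn(actions, latents))                      # scalar summary-quality reward
    # Policy gradient: maximising expected reward == minimising -log-prob * reward.
    loss = -dist.log_prob(actions).sum() * reward
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward
```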

Mutual Information-based Disentangled Neural Networks for Classifying Unseen Categories in Different Domains: Application to Fetal Ultrasound Imaging

Oct 30, 2020
Qingjie Meng, Jacqueline Matthew, Veronika A. Zimmer, Alberto Gomez, David F. A. Lloyd, Daniel Rueckert, Bernhard Kainz

Deep neural networks exhibit limited generalizability across images with different entangled domain features and categorical features. Learning generalizable features that can form universal categorical decision boundaries across domains is an interesting and difficult challenge. This problem occurs frequently in medical imaging applications when deep learning models are deployed or improved across different image acquisition devices or acquisition parameters, or when some classes are unavailable in new training databases. To address this problem, we propose Mutual Information-based Disentangled Neural Networks (MIDNet), which extract generalizable categorical features to transfer knowledge to unseen categories in a target domain. The proposed MIDNet adopts a semi-supervised learning paradigm to alleviate the dependency on labeled data. This is important for real-world applications where data annotation is time-consuming and costly and requires training and expertise. We extensively evaluate the proposed method on fetal ultrasound datasets for two different image classification tasks where domain features are respectively defined by shadow artifacts and image acquisition devices. Experimental results show that the proposed method outperforms the state-of-the-art on the classification of unseen categories in a target domain with sparsely labeled training data.
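
As a hedged illustration of the mutual-information mechanism, the sketch below uses a MINE-style statistics network to estimate a Donsker-Varadhan lower bound on the mutual information between a categorical feature and a domain feature from joint versus shuffled pairs; the encoders would be trained to keep this estimate small while the estimator maximises it. Whether MIDNet uses this exact estimator is an assumption; the code only shows the general pattern.

```python
import math
import torch
import torch.nn as nn

class MIEstimator(nn.Module):
    """MINE-style statistics network for a Donsker-Varadhan bound on mutual information."""

    def __init__(self, dim_cat: int, dim_dom: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_cat + dim_dom, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, feat_cat: torch.Tensor, feat_dom: torch.Tensor) -> torch.Tensor:
        # feat_cat, feat_dom: (B, dim_cat) and (B, dim_dom) paired feature batches.
        joint = self.net(torch.cat([feat_cat, feat_dom], dim=1)).mean()
        shuffled = feat_dom[torch.randperm(feat_dom.shape[0])]       # break the pairing
        marginal = torch.logsumexp(
            self.net(torch.cat([feat_cat, shuffled], dim=1)), dim=0
        ).squeeze() - math.log(feat_dom.shape[0])
        # The estimator maximises this bound; the encoders minimise it to disentangle.
        return joint - marginal
```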

* arXiv admin note: substantial text overlap with arXiv:2003.00321 

Unsupervised Cross-domain Image Classification by Distance Metric Guided Feature Alignment

Aug 19, 2020
Qingjie Meng, Daniel Rueckert, Bernhard Kainz

Learning deep neural networks that are generalizable across different domains remains a challenge due to the problem of domain shift. Unsupervised domain adaptation is a promising avenue which transfers knowledge from a source domain to a target domain without using any labels in the target domain. Contemporary techniques focus on extracting domain-invariant features using domain adversarial training. However, these techniques neglect to learn discriminative class boundaries in the latent representation space on a target domain and yield limited adaptation performance. To address this problem, we propose distance metric guided feature alignment (MetFA) to extract discriminative as well as domain-invariant features on both source and target domains. The proposed MetFA method explicitly and directly learns the latent representation without using domain adversarial training. Our model integrates class distribution alignment to transfer semantic knowledge from a source domain to a target domain. We evaluate the proposed method on fetal ultrasound datasets for cross-device image classification. Experimental results demonstrate that the proposed method outperforms the state-of-the-art and enables model generalization.
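
A hedged illustration of distance-metric-guided alignment: class prototypes are computed from labelled source features, source and target samples are soft-assigned to the prototypes by negative Euclidean distance, and the average class distributions of the two domains are pulled together with a KL term. This is a generic prototype-based stand-in, not the exact MetFA objective; the temperature and function names are assumptions.

```python
import torch
import torch.nn.functional as F

def class_prototypes(src_feat, src_labels, n_classes):
    # src_feat: (Ns, D), src_labels: (Ns,). Assumes every class appears in the batch.
    return torch.stack([src_feat[src_labels == c].mean(dim=0) for c in range(n_classes)])

def distribution_alignment_loss(src_feat, src_labels, tgt_feat, n_classes, temp: float = 1.0):
    protos = class_prototypes(src_feat, src_labels, n_classes)        # (C, D)
    # Soft assignment of every sample to each prototype by negative Euclidean distance.
    p_src = F.softmax(-torch.cdist(src_feat, protos) / temp, dim=1)   # (Ns, C)
    p_tgt = F.softmax(-torch.cdist(tgt_feat, protos) / temp, dim=1)   # (Nt, C)
    # Align the average class distributions of the two domains: KL(source || target).
    mean_src = p_src.mean(dim=0).clamp_min(1e-8)
    mean_tgt = p_tgt.mean(dim=0).clamp_min(1e-8)
    return F.kl_div(mean_tgt.log(), mean_src, reduction="sum")
```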

Automated Detection of Congenital Heart Disease in Fetal Ultrasound Screening

Aug 18, 2020
Jeremy Tan, Anselm Au, Qingjie Meng, Sandy FinesilverSmith, John Simpson, Daniel Rueckert, Reza Razavi, Thomas Day, David Lloyd, Bernhard Kainz

Prenatal screening with ultrasound can lower neonatal mortality significantly for selected cardiac abnormalities. However, the need for human expertise, coupled with the high volume of screening cases, limits the practically achievable detection rates. In this paper we discuss the potential for deep learning techniques to aid in the detection of congenital heart disease (CHD) in fetal ultrasound. We propose a pipeline for automated data curation and classification. During both training and inference, we exploit an auxiliary view classification task to bias features toward relevant cardiac structures. This bias helps to improve F1-scores from 0.72 and 0.77 to 0.87 and 0.85 for the healthy and CHD classes, respectively.
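
The auxiliary-task idea can be sketched as a standard multi-task network: a shared backbone feeds both a CHD head and a view-classification head, and the weighted sum of the two cross-entropy losses biases the shared features toward view-relevant cardiac structures. The backbone, the number of view classes and the weighting factor below are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskCHDNet(nn.Module):
    """Shared backbone with a CHD head and an auxiliary view-classification head."""

    def __init__(self, feat_dim: int = 256, n_views: int = 5):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        self.chd_head = nn.Linear(feat_dim, 2)          # healthy vs. CHD
        self.view_head = nn.Linear(feat_dim, n_views)   # auxiliary view label

    def forward(self, x: torch.Tensor):
        f = self.backbone(x)
        return self.chd_head(f), self.view_head(f)

def multitask_loss(chd_logits, view_logits, chd_y, view_y, aux_weight: float = 0.5):
    # The auxiliary term biases the shared features toward view-relevant structures.
    return F.cross_entropy(chd_logits, chd_y) + aux_weight * F.cross_entropy(view_logits, view_y)
```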

Ultrasound Video Summarization using Deep Reinforcement Learning

May 19, 2020
Tianrui Liu, Qingjie Meng, Athanasios Vlontzos, Jeremy Tan, Daniel Rueckert, Bernhard Kainz

Video is an essential imaging modality for diagnostics, e.g. in ultrasound imaging, for endoscopy, or movement assessment. However, video has not received much attention in the medical image analysis community. In clinical practice, it is challenging to utilise raw diagnostic video data efficiently, as video data takes a long time to process, annotate or audit. In this paper we introduce a novel, fully automatic video summarization method that is tailored to the needs of medical video data. Our approach is framed as a reinforcement learning problem and produces agents that focus on the preservation of important diagnostic information. We evaluate our method on videos from fetal ultrasound screening, where commonly only a small amount of the recorded data is used diagnostically. We show that our method is superior to alternative video summarization methods and that it preserves essential information required by clinical diagnostic standards.
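
As an example of what the agents could optimise, the sketch below computes a summary reward of the kind commonly used in RL-based video summarization: a diversity term over the kept frames plus a representativeness term measuring how well the kept frames cover the whole video. Whether this paper uses exactly these two terms is an assumption.

```python
import torch
import torch.nn.functional as F

def summary_reward(features: torch.Tensor, actions: torch.Tensor) -> torch.Tensor:
    # features: (T, D) per-frame features; actions: (T,) binary keep decisions.
    kept = features[actions.bool()]
    if kept.shape[0] < 2:
        return torch.zeros((), device=features.device)
    f = F.normalize(kept, dim=1)
    n = f.shape[0]
    sim = f @ f.t()                                             # cosine similarities, (K, K)
    diversity = (1.0 - sim).sum() / (n * (n - 1))               # diagonal terms are zero
    # Representativeness: how close every frame of the video is to its nearest kept frame.
    dist = torch.cdist(features, kept)                          # (T, K)
    representativeness = torch.exp(-dist.min(dim=1).values.mean())
    return diversity + representativeness
```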

* Accepted by MICCAI'20 

Learning Cross-domain Generalizable Features by Representation Disentanglement

Feb 29, 2020
Qingjie Meng, Daniel Rueckert, Bernhard Kainz

Deep learning models exhibit limited generalizability across different domains. Specifically, transferring knowledge from available entangled domain features (source/target domain) and categorical features to new unseen categorical features in a target domain is an interesting and difficult problem that is rarely discussed in the current literature. This problem is essential for many real-world applications such as improving diagnostic classification or prediction in medical imaging. To address this problem, we propose Mutual-Information-based Disentangled Neural Networks (MIDNet) to extract generalizable features that enable transferring knowledge to unseen categorical features in target domains. The proposed MIDNet is developed as a semi-supervised learning paradigm to alleviate the dependency on labeled data. This is important for practical applications where data annotation requires rare expertise as well as intense time and labor. We demonstrate our method on handwritten digits datasets and a fetal ultrasound dataset for image classification tasks. Experiments show that our method outperforms the state-of-the-art and achieves the expected performance with sparsely labeled data.
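
One common way to realise the semi-supervised paradigm mentioned above is a supervised cross-entropy term on the labelled batch plus a consistency term that asks predictions on two augmentations of the same unlabelled image to agree; the sketch below shows that pattern. MIDNet's exact unsupervised objective may differ, and the function name and weighting are assumptions.

```python
import torch
import torch.nn.functional as F

def semi_supervised_loss(model, x_lab, y_lab, x_unl_a, x_unl_b, unl_weight: float = 1.0):
    # x_unl_a / x_unl_b are two augmented views of the same unlabelled images.
    sup = F.cross_entropy(model(x_lab), y_lab)
    with torch.no_grad():
        target = F.softmax(model(x_unl_a), dim=1)               # pseudo-targets, no gradient
    cons = F.kl_div(F.log_softmax(model(x_unl_b), dim=1), target, reduction="batchmean")
    return sup + unl_weight * cons
```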
