Matthew Sinclair

Pay Attention to the Atlas: Atlas-Guided Test-Time Adaptation Method for Robust 3D Medical Image Segmentation

Jul 02, 2023
Jingjie Guo, Weitong Zhang, Matthew Sinclair, Daniel Rueckert, Chen Chen

Convolutional neural networks (CNNs) often suffer from poor performance when tested on target data that differs from the training (source) data distribution, particularly in medical imaging applications, where variations in imaging protocols across clinical sites and scanners lead to different imaging appearances. However, re-accessing source training data for unsupervised domain adaptation or labeling additional test data for model fine-tuning can be difficult due to privacy issues and high labeling costs, respectively. To address this problem, we propose a novel atlas-guided test-time adaptation (TTA) method for robust 3D medical image segmentation, called AdaAtlas. AdaAtlas takes only a single unlabeled test sample as input and adapts the segmentation network by minimizing an atlas-based loss. Specifically, the network is adapted so that its prediction, after registration, is aligned with the learned atlas in the atlas space, which helps to reduce anatomical segmentation errors at test time. In addition, unlike most existing TTA methods, which restrict adaptation to batch normalization blocks in the segmentation network, we further exploit channel and spatial attention blocks for improved adaptability at test time. Extensive experiments on multiple datasets from different sites show that AdaAtlas with adapted attention blocks (AdaAtlas-Attention) achieves substantial performance improvements, greatly outperforming other competitive TTA methods.
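To make the adaptation loop concrete, here is a minimal PyTorch sketch of atlas-guided TTA; `seg_net`, `reg_net`, the learned `atlas`, and the name-based rule for selecting adaptable parameters are all assumed stand-ins, not the authors' implementation:

```python
import torch

def dice_loss(pred, target, eps=1e-6):
    # Soft Dice between two probability maps, averaged over batch/channels.
    dims = tuple(range(2, pred.dim()))
    inter = (pred * target).sum(dims)
    union = pred.sum(dims) + target.sum(dims)
    return 1.0 - ((2 * inter + eps) / (union + eps)).mean()

def adapt_on_test_sample(seg_net, reg_net, atlas, image, steps=10, lr=1e-3):
    # Adapt only a subset of parameters (normalization / attention blocks);
    # name matching here is a stand-in for the paper's block selection.
    adapt_params = [p for n, p in seg_net.named_parameters()
                    if "norm" in n or "attention" in n]
    opt = torch.optim.Adam(adapt_params, lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        pred = torch.softmax(seg_net(image), dim=1)  # per-voxel probabilities
        warped = reg_net(pred)           # prediction in atlas space (assumed interface)
        loss = dice_loss(warped, atlas)  # atlas-based alignment loss
        loss.backward()
        opt.step()
    return seg_net
```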

Image To Tree with Recursive Prompting

Jan 01, 2023
James Batten, Matthew Sinclair, Ben Glocker, Michiel Schaap

Extracting complex structures from grid-based data is a common key step in automated medical image analysis. The conventional solution to recovering tree-structured geometries typically involves computing the minimal cost path through intermediate representations derived from segmentation masks. However, this methodology has significant limitations in the context of projective imaging of tree-structured 3D anatomical data such as coronary arteries, since there are often overlapping branches in the 2D projection. In this work, we propose a novel approach to predicting tree connectivity structure which reformulates the task as an optimization problem over individual steps of a recursive process. We design and train a two-stage model which leverages the UNet and Transformer architectures and introduces an image-based prompting technique. Our proposed method achieves compelling results on a pair of synthetic datasets, and outperforms a shortest-path baseline.
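The recursive process lends itself to a simple control loop. The sketch below is a hypothetical rendering of prompt-driven tree expansion, where `model` is an assumed callable returning candidate child coordinates for the prompted node (an empty list at a terminal):

```python
import numpy as np

def render_prompt(shape, node, radius=2):
    # Hypothetical prompt encoding: a binary disc marking the current node.
    prompt = np.zeros(shape, dtype=np.float32)
    y, x = np.ogrid[:shape[0], :shape[1]]
    prompt[(y - node[0]) ** 2 + (x - node[1]) ** 2 <= radius ** 2] = 1.0
    return prompt

def extract_tree(model, image, root):
    # Expand the tree one node at a time, re-prompting the model with each
    # newly confirmed node until no further children are predicted.
    root = tuple(root)
    tree = {root: []}
    frontier = [root]
    while frontier:
        node = frontier.pop()
        prompt = render_prompt(image.shape, node)
        for child in model(image, prompt):  # assumed: [] at a terminal node
            child = tuple(child)
            tree[node].append(child)
            tree.setdefault(child, [])
            frontier.append(child)
    return tree
```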

* 12 pages, 5 figures 

CAS-Net: Conditional Atlas Generation and Brain Segmentation for Fetal MRI

May 17, 2022
Liu Li, Qiang Ma, Matthew Sinclair, Antonios Makropoulos, Joseph Hajnal, A. David Edwards, Bernhard Kainz, Daniel Rueckert, Amir Alansary

Fetal Magnetic Resonance Imaging (MRI) is used in prenatal diagnosis and to assess early brain development. Accurate segmentation of the different brain tissues is a vital step in several brain analysis tasks, such as cortical surface reconstruction and tissue thickness measurements. Fetal MRI scans, however, are prone to motion artifacts that can affect the correctness of both manual and automatic segmentation techniques. In this paper, we propose a novel network structure, called CAS-Net, that can simultaneously generate conditional atlases and predict brain tissue segmentation. The conditional atlases provide anatomical priors that can constrain the segmentation connectivity, despite the heterogeneity of intensity values caused by motion or partial volume effects. The proposed method is trained and evaluated on 253 subjects from the developing Human Connectome Project (dHCP). The results demonstrate that the proposed method can generate conditional age-specific atlases with sharp boundaries and shape variance. It also segments multi-category brain tissues in fetal MRI with a high overall Dice similarity coefficient (DSC) of $85.2\%$ across the selected 9 tissue labels.
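As a toy illustration of conditional atlas generation, the following PyTorch module decodes a scalar condition (e.g. normalized gestational age) into a tissue-probability atlas; the architecture, sizes and names are illustrative assumptions, not the CAS-Net design:

```python
import torch
import torch.nn as nn

class ConditionalAtlas(nn.Module):
    # Decodes a scalar condition into a low-resolution tissue-probability
    # atlas; purely illustrative, not the CAS-Net architecture.
    def __init__(self, n_tissues=9, grid=32):
        super().__init__()
        self.base = grid // 8
        self.fc = nn.Linear(1, 128 * self.base ** 3)
        self.up = nn.Sequential(
            nn.ConvTranspose3d(128, 64, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose3d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose3d(32, n_tissues, 2, stride=2),
        )

    def forward(self, age):
        # age: (batch, 1), e.g. normalized gestational age.
        h = self.fc(age).view(-1, 128, self.base, self.base, self.base)
        return torch.softmax(self.up(h), dim=1)  # per-voxel tissue probabilities

atlas = ConditionalAtlas()(torch.tensor([[0.5]]))  # atlas for one age value
```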

Detecting Hypo-plastic Left Heart Syndrome in Fetal Ultrasound via Disease-specific Atlas Maps

Jul 06, 2021
Samuel Budd, Matthew Sinclair, Thomas Day, Athanasios Vlontzos, Jeremy Tan, Tianrui Liu, Jaqueline Matthew, Emily Skelton, John Simpson, Reza Razavi, Ben Glocker, Daniel Rueckert, Emma C. Robinson, Bernhard Kainz

Fetal ultrasound screening during pregnancy plays a vital role in the early detection of fetal malformations which have potential long-term health impacts. The level of skill required to diagnose such malformations from live ultrasound during examination is high and resources for screening are often limited. We present an interpretable, atlas-learning segmentation method for the automatic diagnosis of Hypo-plastic Left Heart Syndrome (HLHS) from a single `4 Chamber Heart' view image. We propose to extend the recently introduced Image-and-Spatial Transformer Networks (Atlas-ISTN) into a framework that enables sensitising atlas generation to disease. In this framework we can jointly learn image segmentation, registration, atlas construction and disease prediction while providing a maximum level of clinical interpretability compared to direct image classification methods. As a result, our segmentation allows diagnoses competitive with expert-derived manual diagnosis and yields an AUC-ROC of 0.978 (1043 cases for training, 260 for validation and 325 for testing).
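The joint training objective can be pictured as a weighted sum of segmentation, registration-consistency and classification terms. A runnable toy version with dummy tensors is sketched below; the loss weights and the exact terms are assumptions, not the paper's formulation:

```python
import torch
import torch.nn.functional as F

# Dummy tensors standing in for network outputs; all shapes are illustrative.
pred_seg = torch.randn(2, 4, 64, 64)                        # segmentation logits
gt_seg = torch.randint(0, 4, (2, 64, 64))                   # ground-truth labels
warped_atlas = torch.softmax(torch.randn(2, 4, 64, 64), 1)  # atlas after registration
disease_logit = torch.randn(2, 1)                           # classification head output
disease_label = torch.randint(0, 2, (2, 1)).float()

seg_term = F.cross_entropy(pred_seg, gt_seg)
reg_term = F.mse_loss(warped_atlas, torch.softmax(pred_seg, dim=1))
cls_term = F.binary_cross_entropy_with_logits(disease_logit, disease_label)
loss = seg_term + 0.1 * reg_term + 0.5 * cls_term  # weights are assumptions
```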

* MICCAI'21 Main Conference 

Atlas-ISTN: Joint Segmentation, Registration and Atlas Construction with Image-and-Spatial Transformer Networks

Dec 18, 2020
Matthew Sinclair, Andreas Schuh, Karl Hahn, Kersten Petersen, Ying Bai, James Batten, Michiel Schaap, Ben Glocker

Deep learning models for semantic segmentation are able to learn powerful representations for pixel-wise predictions, but are sensitive to noise at test time and do not guarantee a plausible topology. Image registration models, on the other hand, are able to warp known topologies to target images as a means of segmentation, but typically require large amounts of training data and have not widely been benchmarked against pixel-wise segmentation models. We propose Atlas-ISTN, a framework that jointly learns segmentation and registration on 2D and 3D image data, and constructs a population-derived atlas in the process. Atlas-ISTN learns to segment multiple structures of interest and to register the constructed, topologically consistent atlas labelmap to an intermediate pixel-wise segmentation. Additionally, Atlas-ISTN allows for test-time refinement of the model's parameters to optimize the alignment of the atlas labelmap to an intermediate pixel-wise segmentation. This process both mitigates noise in the target image that can result in spurious pixel-wise predictions and improves upon the model's one-pass prediction. Benefits of the Atlas-ISTN framework are demonstrated qualitatively and quantitatively on 2D synthetic data and 3D cardiac computed tomography and brain magnetic resonance image data, outperforming both segmentation and registration baseline models. Atlas-ISTN also provides inter-subject correspondence of the structures of interest, enabling population-level shape and motion analysis.
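The test-time refinement step can be sketched as follows: freeze the networks and optimize only transformation parameters so the warped atlas labelmap matches the one-pass segmentation. This toy version uses a 2D affine transform for brevity (the paper's deformation model may differ), with assumed tensor shapes:

```python
import torch
import torch.nn.functional as F

def refine_at_test_time(atlas_labelmap, pred_seg, steps=50, lr=1e-2):
    # atlas_labelmap, pred_seg: (1, C, H, W) probability maps.
    # Identity-initialized 2D affine transform as the refinement variable.
    theta = torch.tensor([[1., 0., 0.],
                          [0., 1., 0.]]).unsqueeze(0).requires_grad_(True)
    opt = torch.optim.Adam([theta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        grid = F.affine_grid(theta, pred_seg.shape, align_corners=False)
        warped = F.grid_sample(atlas_labelmap, grid, align_corners=False)
        loss = F.mse_loss(warped, pred_seg)  # align warped atlas to segmentation
        loss.backward()
        opt.step()
    return theta.detach()
```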

* 33 pages, 15 figures 

Automated quantification of myocardial tissue characteristics from native T1 mapping using neural networks with Bayesian inference for uncertainty-based quality-control

Jan 31, 2020
Esther Puyol Anton, Bram Ruijsink, Christian F. Baumgartner, Matthew Sinclair, Ender Konukoglu, Reza Razavi, Andrew P. King

Tissue characterisation with CMR parametric mapping has the potential to detect and quantify both focal and diffuse alterations in myocardial structure not assessable by late gadolinium enhancement. Native T1 mapping in particular has shown promise as a useful biomarker to support diagnostic, therapeutic and prognostic decision-making in ischaemic and non-ischaemic cardiomyopathies. Convolutional neural networks with Bayesian inference are a category of artificial neural networks which model the uncertainty of the network output. This study presents an automated framework for tissue characterisation from native ShMOLLI T1 mapping at 1.5T using a Probabilistic Hierarchical Segmentation (PHiSeg) network. In addition, we use the uncertainty information provided by the PHiSeg network in a novel automated quality control (QC) step to identify uncertain T1 values. The PHiSeg network and QC were validated against manual analysis on a cohort of the UK Biobank containing healthy subjects and chronic cardiomyopathy patients. We used the proposed method to obtain reference T1 ranges for the left ventricular myocardium in healthy subjects as well as common clinical cardiac conditions. T1 values computed from automatic and manual segmentations were highly correlated (r=0.97). Bland-Altman analysis showed good agreement between the automated and manual measurements. The average Dice metric was 0.84 for the left ventricular myocardium. The sensitivity of detection of erroneous outputs was 91%. Finally, T1 values were automatically derived from 14,683 CMR exams from the UK Biobank. The proposed pipeline allows for automatic analysis of myocardial native T1 mapping and includes a QC process to detect potentially erroneous results. T1 reference values were presented for healthy subjects and common clinical cardiac conditions from the largest cohort to date using T1-mapping images.
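A hedged sketch of the uncertainty-based QC idea: draw several plausible segmentations from a probabilistic model and flag cases whose samples disagree beyond a threshold. `sample_segmentation` and the threshold are assumed interfaces, not the PHiSeg API:

```python
import torch

def qc_uncertainty(sample_segmentation, image, n_samples=20, threshold=0.1):
    # Draw repeated samples and measure their per-pixel disagreement.
    samples = torch.stack([sample_segmentation(image) for _ in range(n_samples)])
    disagreement = samples.float().var(dim=0).mean().item()
    return disagreement, disagreement > threshold  # flag uncertain cases
```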

Confident Head Circumference Measurement from Ultrasound with Real-time Feedback for Sonographers

Aug 07, 2019
Samuel Budd, Matthew Sinclair, Bishesh Khanal, Jacqueline Matthew, David Lloyd, Alberto Gomez, Nicolas Toussaint, Emma Robinson, Bernhard Kainz

Fetal Head Circumference (HC), manually estimated from Ultrasound (US), is a key biometric for monitoring the healthy development of fetuses. Unfortunately, such measurements are subject to large inter-observer variability, resulting in low early-detection rates of fetal abnormalities. To address this issue, we propose a novel probabilistic Deep Learning approach for real-time automated estimation of fetal HC. This system feeds back statistics on measurement robustness to inform users how confident a deep neural network is in evaluating suitable views acquired during free-hand ultrasound examination. In real-time scenarios, this approach may be exploited to guide operators to scan planes that are as close as possible to the underlying distribution of training images, for the purpose of improving inter-operator consistency. We train on free-hand ultrasound data from over 2000 subjects (2848 training/540 test) and show that our method is able to predict HC measurements within 1.81$\pm$1.65mm deviation from the ground truth, with 50% of the test images fully contained within the predicted confidence margins, and an average of 1.82$\pm$1.78mm deviation from the margin for the remaining cases that are not fully contained.
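One common way to obtain such confidence feedback is Monte Carlo dropout; the sketch below illustrates that general idea under assumed interfaces (`net`, `measure_hc`), and is not necessarily the probabilistic model used in the paper:

```python
import torch

def hc_with_confidence(net, image, measure_hc, n_passes=25):
    # Keep dropout layers stochastic at inference (MC dropout).
    net.train()
    with torch.no_grad():
        hcs = torch.tensor([measure_hc(net(image)) for _ in range(n_passes)])
    return hcs.mean().item(), hcs.std().item()  # estimate and spread
```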

* Accepted at MICCAI 2019; Demo video available on Twitter (@sambuddinc) 

Weakly Supervised Estimation of Shadow Confidence Maps in Ultrasound Imaging

Nov 21, 2018
Qingjie Meng, Matthew Sinclair, Veronika Zimmer, Benjamin Hou, Martin Rajchl, Nicolas Toussaint, Alberto Gomez, James Housden, Jacqueline Matthew, Daniel Rueckert, Julia Schnabel, Bernhard Kainz

Detecting acoustic shadows in ultrasound images is important in many clinical and engineering applications. Real-time feedback of acoustic shadows can guide sonographers to a standardized diagnostic viewing plane with minimal artifacts and can provide additional information for other automatic image analysis algorithms. However, automatically detecting shadow regions is challenging because pixel-wise annotation of acoustic shadows is subjective and time consuming. In this paper we propose a weakly supervised method for automatic confidence estimation of acoustic shadow regions, which is able to generate a dense shadow-focused confidence map. During training, a multi-task module for shadow segmentation is built to learn general shadow features from image-level annotations as well as a small number of coarse pixel-wise shadow annotations. A transfer function is then established to extend the binary shadow segmentation to a reference confidence map. In addition, a confidence estimation network is proposed to learn the mapping between input images and the reference confidence maps. This confidence estimation network is able to predict shadow confidence maps directly from input images during inference. We evaluate DICE, soft DICE, recall, precision, mean squared error and inter-class correlation to verify the effectiveness of our method. Our method outperforms the state-of-the-art qualitatively and quantitatively. We further demonstrate the applicability of our method by integrating shadow confidence maps into tasks such as ultrasound image classification, multi-view image fusion and automated biometric measurements.
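The abstract does not specify the transfer function, so the sketch below is only a stand-in for the general idea of softening a binary shadow mask into a reference confidence map:

```python
import torch.nn.functional as F

def reference_confidence(binary_mask, kernel=7):
    # binary_mask: (1, 1, H, W) in {0, 1}; local averaging softens the mask
    # into a [0, 1] confidence map (a stand-in for the learned transfer).
    return F.avg_pool2d(binary_mask.float(), kernel, stride=1, padding=kernel // 2)
```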

Standard Plane Detection in 3D Fetal Ultrasound Using an Iterative Transformation Network

Oct 07, 2018
Yuanwei Li, Bishesh Khanal, Benjamin Hou, Amir Alansary, Juan J. Cerrolaza, Matthew Sinclair, Jacqueline Matthew, Chandni Gupta, Caroline Knight, Bernhard Kainz, Daniel Rueckert

Standard scan plane detection in fetal brain ultrasound (US) forms a crucial step in the assessment of fetal development. In clinical settings, this is done by manually manoeuvring a 2D probe to the desired scan plane. With the advent of 3D US, the entire fetal brain volume containing these standard planes can be easily acquired. However, manual standard plane identification in 3D volume is labour-intensive and requires expert knowledge of fetal anatomy. We propose a new Iterative Transformation Network (ITN) for the automatic detection of standard planes in 3D volumes. ITN uses a convolutional neural network to learn the relationship between a 2D plane image and the transformation parameters required to move that plane towards the location/orientation of the standard plane in the 3D volume. During inference, the current plane image is passed iteratively to the network until it converges to the standard plane location. We explore the effect of using different transformation representations as regression outputs of ITN. Under a multi-task learning framework, we introduce additional classification probability outputs to the network to act as confidence measures for the regressed transformation parameters in order to further improve the localisation accuracy. When evaluated on 72 US volumes of fetal brain, our method achieves an error of 3.83mm/12.7 degrees and 3.80mm/12.6 degrees for the transventricular and transcerebellar planes respectively and takes 0.46s per plane. Source code is publicly available at https://github.com/yuanwei1989/plane-detection.
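The iterative inference loop can be written compactly; `predict_update` is an assumed callable wrapping plane extraction and the CNN forward pass, and the plane is parameterized as a 6-vector for illustration:

```python
import numpy as np

def detect_standard_plane(predict_update, volume, plane, max_iters=50, tol=1e-3):
    # plane: 6-vector (3 translations, 3 rotations); `predict_update`
    # wraps plane resampling plus the CNN forward pass (assumed interface).
    for _ in range(max_iters):
        delta = predict_update(volume, plane)  # regressed transform update
        plane = plane + delta                  # step towards the standard plane
        if np.linalg.norm(delta) < tol:        # stop once updates vanish
            break
    return plane
```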

* LNCS 11070 (2018) 392-400  
* 8 pages, 2 figures, accepted for MICCAI 2018; Added link to source code 

Fast Multiple Landmark Localisation Using a Patch-based Iterative Network

Oct 07, 2018
Yuanwei Li, Amir Alansary, Juan J. Cerrolaza, Bishesh Khanal, Matthew Sinclair, Jacqueline Matthew, Chandni Gupta, Caroline Knight, Bernhard Kainz, Daniel Rueckert

We propose a new Patch-based Iterative Network (PIN) for fast and accurate landmark localisation in 3D medical volumes. PIN utilises a Convolutional Neural Network (CNN) to learn the spatial relationship between an image patch and anatomical landmark positions. During inference, patches are repeatedly passed to the CNN until the estimated landmark position converges to the true landmark location. PIN is computationally efficient since the inference stage only selectively samples a small number of patches in an iterative fashion rather than a dense sampling at every location in the volume. Our approach adopts a multi-task learning framework that combines regression and classification to improve localisation accuracy. We extend PIN to localise multiple landmarks by using principal component analysis, which models the global anatomical relationships between landmarks. We have evaluated PIN using 72 3D ultrasound images from fetal screening examinations. PIN achieves quantitatively an average landmark localisation error of 5.59mm and a runtime of 0.44s to predict 10 landmarks per volume. Qualitatively, anatomical 2D standard scan planes derived from the predicted landmark locations are visually similar to the clinical ground truth. Source code is publicly available at https://github.com/yuanwei1989/landmark-detection.
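The iterative localisation loop has the same flavour; `predict_step` (patch sampling plus the CNN) is an assumed callable, and the PCA shape-model constraint on joint updates is omitted for brevity:

```python
import numpy as np

def localise_landmarks(predict_step, volume, positions, max_iters=100, tol=0.5):
    # positions: (n_landmarks, 3) current estimates; `predict_step` samples
    # patches and returns per-landmark displacements (assumed interface).
    positions = np.asarray(positions, dtype=np.float32)
    for _ in range(max_iters):
        step = predict_step(volume, positions)
        positions = positions + step
        if np.abs(step).max() < tol:  # all landmarks effectively converged
            break
    return positions
```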

* LNCS 11070 (2018) 563-571  
* 8 pages, 4 figures, Accepted for MICCAI 2018 