
Joseph Jacob

CenTime: Event-Conditional Modelling of Censoring in Survival Analysis

Sep 15, 2023
Ahmed H. Shahin, An Zhao, Alexander C. Whitehead, Daniel C. Alexander, Joseph Jacob, David Barber

Survival analysis is a valuable tool for estimating the time until specific events, such as death or cancer recurrence, based on baseline observations. This is particularly useful in healthcare to prognostically predict clinically important events based on patient data. However, existing approaches often have limitations; some focus only on ranking patients by survivability, neglecting to estimate the actual event time, while others treat the problem as a classification task, ignoring the inherent time-ordered structure of the events. Furthermore, the effective utilization of censored samples - training data points where the exact event time is unknown - is essential for improving the predictive accuracy of the model. In this paper, we introduce CenTime, a novel approach to survival analysis that directly estimates the time to event. Our method features an innovative event-conditional censoring mechanism that performs robustly even when uncensored data is scarce. We demonstrate that our approach forms a consistent estimator for the event model parameters, even in the absence of uncensored data. Furthermore, CenTime is easily integrated with deep learning models with no restrictions on batch size or the number of uncensored samples. We compare our approach with standard survival analysis methods, including the Cox proportional-hazard model and DeepHit. Our results indicate that CenTime offers state-of-the-art performance in predicting time-to-death while maintaining comparable ranking performance. Our implementation is publicly available at https://github.com/ahmedhshahin/CenTime.
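The event-conditional censoring idea can be illustrated with a discrete-time sketch (a hypothetical toy version, not the authors' implementation): for an uncensored patient the loss targets the probability of the observed event bin, while for a censored patient it marginalises the predicted event-time distribution over all bins after the censoring time.

```python
import numpy as np

def neg_log_likelihood(probs, time_idx, censored):
    """Negative log-likelihood for one patient under a discrete
    time-to-event distribution `probs` (sums to 1 over time bins).

    Uncensored: the event occurred in bin `time_idx`.
    Censored:   the event occurs in some bin strictly after
                `time_idx`, so we marginalise over those bins.
    """
    probs = np.asarray(probs, dtype=float)
    if censored:
        # Event-conditional censoring: sum the probability mass
        # over all time bins after the censoring time.
        p = probs[time_idx + 1:].sum()
    else:
        p = probs[time_idx]
    return -np.log(max(p, 1e-12))  # clip to avoid log(0)
```

Because the censored term only constrains the tail mass beyond the censoring time, it contributes a valid likelihood term even when no uncensored samples are available, which is the intuition behind the consistency claim in the abstract.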

A 3D deep learning classifier and its explainability when assessing coronary artery disease

Jul 29, 2023
Wing Keung Cheung, Jeremy Kalindjian, Robert Bell, Arjun Nair, Leon J. Menezes, Riyaz Patel, Simon Wan, Kacy Chou, Jiahang Chen, Ryo Torii, Rhodri H. Davies, James C. Moon, Daniel C. Alexander, Joseph Jacob

Early detection and diagnosis of coronary artery disease (CAD) could save lives and reduce healthcare costs. In this study, we propose a 3D Resnet-50 deep learning model to directly classify normal subjects and CAD patients on computed tomography coronary angiography images. Our proposed method outperforms a 2D Resnet-50 model by 23.65%. Explainability is also provided using Grad-CAM. Furthermore, we link the 3D CAD classification to a 2D two-class semantic segmentation for improved explainability and accurate abnormality localisation.
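The core of gradient-weighted class activation mapping (Grad-CAM) is compact; a minimal NumPy sketch, assuming the conv-layer activations and the class-score gradients have already been extracted from the network:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from a conv layer's activations and the
    gradients of the class score w.r.t. those activations.

    activations, gradients: arrays of shape (K, H, W).
    Returns an (H, W) map, ReLU'd and scaled to [0, 1].
    """
    # Channel weights: global-average-pool the gradients.
    weights = gradients.mean(axis=(1, 2))             # (K,)
    cam = np.tensordot(weights, activations, axes=1)  # (H, W)
    cam = np.maximum(cam, 0.0)                        # ReLU
    if cam.max() > 0:
        cam /= cam.max()                              # scale to [0, 1]
    return cam
```

For a 3D classifier as in this paper, the same computation applies with (K, D, H, W) tensors and pooling over all spatial axes.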

A data-centric deep learning approach to airway segmentation

Jul 29, 2023
Wing Keung Cheung, Ashkan Pakzad, Nesrin Mogulkoc, Sarah Needleman, Bojidar Rangelov, Eyjolfur Gudmundsson, An Zhao, Mariam Abbas, Davina McLaverty, Dimitrios Asimakopoulos, Robert Chapman, Recep Savas, Sam M Janes, Yipeng Hu, Daniel C. Alexander, John R Hurst, Joseph Jacob

The morphology and distribution of airway tree abnormalities enable diagnosis and disease characterisation across a variety of chronic respiratory conditions. In this regard, airway segmentation plays a critical role in producing an outline of the entire airway tree, enabling estimation of disease extent and severity. In this study, we propose a data-centric deep learning technique to segment the airway tree. The proposed technique uses interpolation and image splitting to improve data usefulness and quality. An ensemble learning strategy is then implemented to aggregate the segmented airway trees at different scales. In terms of segmentation performance (dice similarity coefficient), our method outperforms the baseline model by 2.5% on average when a combined loss is used. Furthermore, our proposed technique has low GPU usage and high flexibility, enabling it to be deployed on any 2D deep learning model.
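The evaluation metric and the aggregation step can both be sketched in a few lines; this is an illustrative NumPy version, not the authors' code, and the ensemble here is a simple majority vote over per-scale masks:

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def ensemble(masks):
    """Majority vote over a list of binary masks (one per scale)."""
    stack = np.stack([np.asarray(m, dtype=float) for m in masks])
    return stack.mean(axis=0) >= 0.5
```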

Expectation Maximization Pseudo Labelling for Segmentation with Limited Annotations

May 02, 2023
Mou-Cheng Xu, Yukun Zhou, Chen Jin, Marius de Groot, Daniel C. Alexander, Neil P. Oxtoby, Yipeng Hu, Joseph Jacob

We study pseudo labelling and its generalisation for semi-supervised segmentation of medical images. Pseudo labelling has achieved great empirical success in semi-supervised learning by using raw inferences on unlabelled data as pseudo labels for self-training. In our paper, we build a connection between pseudo labelling and the Expectation Maximization algorithm which partially explains its empirical success. We thereby show that the original pseudo labelling is an empirical estimate of its underlying full formulation. Following this insight, we derive the full generalisation of pseudo labels under Bayes' principle, called Bayesian Pseudo Labels. We then provide a variational approach to learn to approximate Bayesian Pseudo Labels, by learning a threshold to select good-quality pseudo labels. In the rest of the paper, we demonstrate the applications of pseudo labelling and its generalisation, Bayesian Pseudo Labelling, in semi-supervised segmentation of medical images on: 1) 3D binary segmentation of lung vessels from CT volumes; 2) 2D multi-class segmentation of brain tumours from MRI volumes; 3) 3D binary segmentation of brain tumours from MRI volumes. We also show that pseudo labels can enhance the robustness of the learnt representations.

* An extension of the MICCAI 2022 Young Scientist Award finalist paper titled Bayesian Pseudo Labels: Expectation Maximization for Robust and Efficient Semi-Supervised Segmentation 
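Stripped of the Bayesian machinery, the basic pseudo-labelling step reduces to training the model against its own thresholded predictions. A toy binary version (illustrative only; the threshold here is fixed, whereas the paper learns it):

```python
import numpy as np

def pseudo_label_loss(probs, threshold=0.5):
    """Self-training loss on unlabelled pixels (binary case).

    probs: predicted foreground probabilities, any shape.
    Pixels are binarised at `threshold` to form pseudo labels, and
    the model is trained towards its own confident guesses with a
    cross-entropy term.
    """
    probs = np.clip(np.asarray(probs, dtype=float), 1e-7, 1 - 1e-7)
    pseudo = (probs >= threshold).astype(float)
    ce = -(pseudo * np.log(probs) + (1 - pseudo) * np.log(1 - probs))
    return ce.mean()
```

Confident predictions (near 0 or 1) contribute little loss; uncertain ones dominate, which is why selecting good-quality pseudo labels via the threshold matters.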
A hybrid CNN-RNN approach for survival analysis in a Lung Cancer Screening study

Mar 19, 2023
Yaozhi Lu, Shahab Aslani, An Zhao, Ahmed Shahin, David Barber, Mark Emberton, Daniel C. Alexander, Joseph Jacob

In this study, we present a hybrid CNN-RNN approach to investigate the long-term survival of subjects in a lung cancer screening study. Subjects who died of cardiovascular and respiratory causes were identified; the CNN model was used to capture imaging features in the CT scans, while the RNN model was used to investigate the time series and thus global information. The models were trained on subjects who died of cardiovascular and respiratory causes and a control cohort matched for age, gender, and smoking history. The combined model achieves an AUC of 0.76, which outperforms humans at cardiovascular mortality prediction. The corresponding F1 score and Matthews Correlation Coefficient are 0.63 and 0.42, respectively. The generalisability of the model is further validated on an 'external' cohort. The same models were applied to survival analysis with the Cox Proportional Hazards model, demonstrating that incorporating the follow-up history can improve survival prediction. The Cox neural network achieves an IPCW C-index of 0.75 on the internal dataset and 0.69 on an external dataset. Delineating imaging features associated with long-term survival can help focus preventative interventions appropriately, particularly for under-recognised pathologies, thereby potentially reducing patient morbidity.
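The ranking metric reported here is a concordance index (in its IPCW form). A plain Harrell's C-index sketch, without the inverse-probability-of-censoring weights, shows the underlying pairwise comparison:

```python
import numpy as np

def c_index(risk, time, event):
    """Harrell's concordance index (plain, non-IPCW variant).

    risk:  predicted risk scores (higher = earlier expected event).
    time:  observed time to event or censoring.
    event: 1 if the event was observed, 0 if censored.
    """
    risk, time, event = map(np.asarray, (risk, time, event))
    concordant, comparable = 0.0, 0
    n = len(time)
    for i in range(n):
        if not event[i]:
            continue  # censored subjects cannot anchor a pair
        for j in range(n):
            if time[i] < time[j]:  # i's event precedes j's time
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5  # ties count half
    return concordant / comparable
```

The IPCW variant reweights each comparable pair by the inverse probability of remaining uncensored, reducing the bias that censoring introduces into this plain estimate.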

Airway measurement by refinement of synthetic images improves mortality prediction in idiopathic pulmonary fibrosis

Aug 30, 2022
Ashkan Pakzad, Mou-Cheng Xu, Wing Keung Cheung, Marie Vermant, Tinne Goos, Laurens J De Sadeleer, Stijn E Verleden, Wim A Wuyts, John R Hurst, Joseph Jacob

Several chronic lung diseases, like idiopathic pulmonary fibrosis (IPF), are characterised by abnormal dilatation of the airways. Quantification of airway features on computed tomography (CT) can help characterise disease progression. Physics-based airway measurement algorithms have been developed, but have met with limited success, in part due to the sheer diversity of airway morphology seen in clinical practice. Supervised learning methods are also not feasible due to the high cost of obtaining precise airway annotations. We propose synthesising airways by style transfer using perceptual losses to train our model, the Airway Transfer Network (ATN). We compare our ATN model with a state-of-the-art GAN-based network (simGAN) using a) qualitative assessment and b) the ability of ATN- and simGAN-based CT airway metrics to predict mortality in a population of 113 patients with IPF. ATN was shown to be quicker and easier to train than simGAN. ATN-based airway measurements were also found to be consistently stronger predictors of mortality than simGAN-derived airway metrics on IPF CTs. Airway synthesis by a transformation network that refines synthetic data using perceptual losses is a realistic alternative to GAN-based methods for clinical CT analyses of idiopathic pulmonary fibrosis. Our source code, compatible with the existing open-source airway analysis framework AirQuant, can be found at https://github.com/ashkanpakzad/ATN.

* 11 Pages, 4 figures. Source code available: https://github.com/ashkanpakzad/ATN. Initial submission version, to be published in MICCAI Workshop on Deep Generative Models 2022 
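A perceptual loss compares images in the feature space of a fixed, pretrained network rather than in pixel space. A minimal sketch of the comparison step (the feature extractor itself is assumed and not shown):

```python
import numpy as np

def perceptual_loss(feats_real, feats_synth):
    """Mean squared distance between corresponding feature maps of
    a real and a refined synthetic image, averaged over layers.

    feats_real, feats_synth: lists of same-shaped arrays, one per
    layer of a fixed, pretrained feature extractor.
    """
    return float(np.mean([np.mean((a - b) ** 2)
                          for a, b in zip(feats_real, feats_synth)]))
```

Matching features rather than pixels rewards perceptually plausible refinements, which is why it can replace a GAN discriminator while remaining a simple regression objective that is quicker to train.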
Optimising Chest X-Rays for Image Analysis by Identifying and Removing Confounding Factors

Aug 22, 2022
Shahab Aslani, Watjana Lilaonitkul, Vaishnavi Gnanananthan, Divya Raj, Bojidar Rangelov, Alexandra L Young, Yipeng Hu, Paul Taylor, Daniel C Alexander, Joseph Jacob

During the COVID-19 pandemic, the sheer volume of imaging performed in an emergency setting for COVID-19 diagnosis resulted in wide variability in clinical chest X-ray (CXR) acquisitions. This variation is seen in the CXR projections used, the image annotations added, and in the inspiratory effort and degree of rotation of clinical images. The image analysis community has attempted to ease the burden on overstretched radiology departments during the pandemic by developing automated COVID-19 diagnostic algorithms, the input for which has been CXR imaging. Large publicly available CXR datasets have been leveraged to improve deep learning algorithms for COVID-19 diagnosis. Yet the variable quality of clinically acquired CXRs within publicly available datasets could have a profound effect on algorithm performance. COVID-19 diagnosis may be inferred by an algorithm from non-anatomical features on an image, such as image labels. These imaging shortcuts may be dataset-specific and limit the generalisability of AI systems. Understanding and correcting key potential biases in CXR images is therefore an essential first step prior to CXR image analysis. In this study, we propose a simple and effective step-wise approach to pre-processing a COVID-19 chest X-ray dataset to remove undesired biases. We perform ablation studies to show the impact of each individual step. The results suggest that using our proposed pipeline could increase the accuracy of the baseline COVID-19 detection algorithm by up to 13%.

Bayesian Pseudo Labels: Expectation Maximization for Robust and Efficient Semi-Supervised Segmentation

Aug 08, 2022
Mou-Cheng Xu, Yukun Zhou, Chen Jin, Marius de Groot, Daniel C. Alexander, Neil P. Oxtoby, Yipeng Hu, Joseph Jacob

This paper concerns pseudo labelling in segmentation. Our contribution is fourfold. Firstly, we present a new formulation of pseudo labelling as an Expectation-Maximization (EM) algorithm for clear statistical interpretation. Secondly, we propose a semi-supervised medical image segmentation method based purely on the original pseudo labelling, namely SegPL. We demonstrate that SegPL is competitive with state-of-the-art consistency-regularisation-based methods for semi-supervised segmentation on a 2D multi-class MRI brain tumour segmentation task and a 3D binary CT lung vessel segmentation task. The simplicity of SegPL incurs a lower computational cost compared to prior methods. Thirdly, we demonstrate that the effectiveness of SegPL may originate from its robustness against out-of-distribution noise and adversarial attacks. Lastly, under the EM framework, we introduce a probabilistic generalisation of SegPL via variational inference, which learns a dynamic threshold for pseudo labelling during training. We show that SegPL with variational inference can perform uncertainty estimation on par with the gold-standard method, Deep Ensemble.

* MICCAI 2022 (Early accept, Student Travel Award) 
Learning Morphological Feature Perturbations for Calibrated Semi-Supervised Segmentation

Apr 01, 2022
Mou-Cheng Xu, Yu-Kun Zhou, Chen Jin, Stefano B Blumberg, Frederick J Wilson, Marius deGroot, Daniel C. Alexander, Neil P. Oxtoby, Joseph Jacob

We propose MisMatch, a novel consistency-driven semi-supervised segmentation framework which produces predictions that are invariant to learnt feature perturbations. MisMatch consists of an encoder and two decoders. One decoder learns positive attention to the foreground regions of interest (RoI) in unlabelled images, thereby generating dilated features. The other decoder learns negative attention to the foreground in the same unlabelled images, thereby generating eroded features. We then apply consistency regularisation to the paired predictions. MisMatch outperforms state-of-the-art semi-supervised methods on a CT-based pulmonary vessel segmentation task and an MRI-based brain tumour segmentation task. In addition, we show that the effectiveness of MisMatch comes from better model calibration than its supervised learning counterpart.

* To appear at Conference on Medical Imaging with Deep Learning (MIDL) 2022. arXiv admin note: text overlap with arXiv:2110.12179 
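The dilated/eroded pairing can be mimicked with fixed morphological operators. A NumPy-only sketch (the paper learns these perturbations; this fixed 3x3 version is purely illustrative):

```python
import numpy as np

def _morph(probs, op):
    """3x3 grey-scale morphology via edge-padded windowed max/min."""
    p = np.pad(probs, 1, mode="edge")
    views = [p[i:i + probs.shape[0], j:j + probs.shape[1]]
             for i in range(3) for j in range(3)]
    return op(np.stack(views), axis=0)

def consistency_loss(probs):
    """MSE consistency between a dilated and an eroded view of the
    same prediction map."""
    dilated = _morph(probs, np.max)  # positive-attention proxy
    eroded = _morph(probs, np.min)   # negative-attention proxy
    return float(np.mean((dilated - eroded) ** 2))
```

Minimising this term pushes the two perturbed views towards agreement, which is the consistency-regularisation signal the framework exploits on unlabelled images.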