
James H. F. Rudd


on behalf of the AIX-COVNET collaboration

Reinterpreting survival analysis in the universal approximator age

Jul 25, 2023
Sören Dittmer, Michael Roberts, Jacobus Preller, AIX COVNET, James H. F. Rudd, John A. D. Aston, Carola-Bibiane Schönlieb

Survival analysis is an integral part of the statistical toolbox. However, while most domains of classical statistics have embraced deep learning, survival analysis has only recently gained minor attention from the deep learning community. This recent development is likely motivated in part by the COVID-19 pandemic. We aim to provide the tools needed to fully harness the potential of survival analysis in deep learning. On the one hand, we discuss how survival analysis connects to classification and regression. On the other hand, we provide technical tools: a new loss function, evaluation metrics, and the first universal approximating network that provably produces survival curves without numeric integration. In a large numerical study, we show that the loss function and model outperform other approaches.
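To make the "survival curves without numeric integration" idea concrete, here is a minimal discrete-time sketch (an illustration of the general principle, not the paper's architecture): a network emits one logit per time bin, the sigmoid of each logit is a conditional hazard, and the running product of the complementary hazards is a valid, monotone survival curve by construction.

```python
import numpy as np

def survival_curve_from_hazards(logits):
    """Turn per-interval hazard logits into a monotone survival curve.

    Discrete-time construction: sigmoid(logit_k) gives the conditional
    hazard h_k = P(event in bin k | survived to bin k), and
    S(t_k) = prod_{j <= k} (1 - h_j).  A product of factors in (0, 1)
    is non-increasing, so the output is a valid survival curve with no
    numeric integration required.
    """
    hazards = 1.0 / (1.0 + np.exp(-np.asarray(logits, dtype=float)))
    return np.cumprod(1.0 - hazards)

# Example: three time bins with increasing hazard.
curve = survival_curve_from_hazards([-2.0, -1.0, 0.0])
assert np.all(np.diff(curve) <= 0)        # monotone non-increasing
assert np.all((curve > 0) & (curve < 1))  # valid probabilities
```

In a deep-learning setting, the logits would be the output layer of any universal approximator; the monotonicity guarantee comes from the cumulative-product parameterisation, not from the network itself.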


Dis-AE: Multi-domain & Multi-task Generalisation on Real-World Clinical Data

Jun 15, 2023
Daniel Kreuter, Samuel Tull, Julian Gilbey, Jacobus Preller, BloodCounts! Consortium, John A. D. Aston, James H. F. Rudd, Suthesh Sivapalaratnam, Carola-Bibiane Schönlieb, Nicholas Gleadall, Michael Roberts

Figures 1–4 for Dis-AE: Multi-domain & Multi-task Generalisation on Real-World Clinical Data

Clinical data is often affected by clinically irrelevant factors such as discrepancies between measurement devices or differing processing methods between sites. In the field of machine learning (ML), these factors are known as domains and the distribution differences they cause in the data are known as domain shifts. ML models trained using data from one domain often perform poorly when applied to data from another domain, potentially leading to wrong predictions. As such, developing machine learning models that can generalise well across multiple domains is a challenging yet essential task in the successful application of ML in clinical practice. In this paper, we propose a novel disentangled autoencoder (Dis-AE) neural network architecture that can learn domain-invariant data representations for multi-label classification of medical measurements even when the data is influenced by multiple interacting domain shifts at once. The model utilises adversarial training to produce data representations from which the domain can no longer be determined. We evaluate the model's domain generalisation capabilities on synthetic datasets and full blood count (FBC) data from blood donors as well as primary and secondary care patients, showing that Dis-AE improves model generalisation on multiple domains simultaneously while preserving clinically relevant information.
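The adversarial idea behind such domain-invariant training can be sketched as follows. This is an illustrative formulation, not the paper's exact losses: the encoder is rewarded when a domain classifier, applied to its representation, can do no better than guessing uniformly over the domains, i.e. when the representation carries no domain signal.

```python
import numpy as np

def domain_confusion_loss(domain_probs):
    """Cross-entropy of the domain classifier's predictions against a
    uniform target over the D domains.  Minimised (at log D) when the
    classifier is maximally confused, i.e. the representation is
    domain-invariant."""
    d = domain_probs.shape[1]
    uniform = np.full_like(domain_probs, 1.0 / d)
    return -np.mean(np.sum(uniform * np.log(domain_probs + 1e-12), axis=1))

def encoder_objective(task_loss, domain_probs, lam=1.0):
    """Sketch of the combined objective: solve the clinical task while
    paying a penalty whenever the domain classifier remains confident."""
    return task_loss + lam * domain_confusion_loss(domain_probs)

# A representation the domain classifier finds uninformative (uniform
# predictions over 3 domains) incurs a lower confusion penalty than one
# from which the domain is easily read off.
uninformative = np.full((4, 3), 1.0 / 3.0)
confident = np.array([[0.98, 0.01, 0.01]] * 4)
assert domain_confusion_loss(uninformative) < domain_confusion_loss(confident)
```

In practice the encoder and the domain classifier are trained adversarially (e.g. via gradient reversal or alternating updates), so the classifier keeps probing for residual domain signal while the encoder learns to remove it.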

* 17 pages main body, 5 figures, 18 pages of appendix 

Classification of datasets with imputed missing values: does imputation quality matter?

Jun 16, 2022
Tolou Shadbahr, Michael Roberts, Jan Stanczuk, Julian Gilbey, Philip Teare, Sören Dittmer, Matthew Thorpe, Ramon Vinas Torne, Evis Sala, Pietro Lio, Mishal Patel, AIX-COVNET Collaboration, James H. F. Rudd, Tuomas Mirtti, Antti Rannikko, John A. D. Aston, Jing Tang, Carola-Bibiane Schönlieb

Figures 1–4 for Classification of datasets with imputed missing values: does imputation quality matter?

Classifying samples in incomplete datasets is a common aim for machine learning practitioners, but it is non-trivial. Missing data is found in most real-world datasets, and the missing values are typically imputed using established methods before the now-complete imputed samples are classified. The machine learning researcher's focus is then to optimise downstream classification performance. In this study, we highlight that it is imperative to consider the quality of the imputation. We demonstrate that the commonly used measures for assessing imputation quality are flawed and propose a new class of discrepancy scores that focus on how well the method recreates the overall distribution of the data. To conclude, we highlight the compromised interpretability of classifier models trained using poorly imputed data.
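A minimal example of a distribution-level discrepancy score (illustrative only, not the paper's proposed scores): a per-feature empirical Wasserstein-1 distance between true and imputed data. Unlike pointwise error measures such as RMSE, it penalises imputations that distort the overall distribution, e.g. mean imputation collapsing a feature's variance.

```python
import numpy as np

def wasserstein_1d(a, b):
    """Empirical 1-D Wasserstein-1 distance between equal-size samples:
    the mean absolute difference of the sorted values."""
    return np.mean(np.abs(np.sort(a) - np.sort(b)))

def feature_wise_discrepancy(X_true, X_imputed):
    """Average per-feature Wasserstein distance between true and
    imputed data -- a simple distributional quality score."""
    return np.mean([wasserstein_1d(X_true[:, j], X_imputed[:, j])
                    for j in range(X_true.shape[1])])

rng = np.random.default_rng(1)
X_true = rng.normal(size=(1000, 2))
X_mean_imp = np.zeros_like(X_true)          # mean imputation: variance collapses
X_sample_imp = rng.normal(size=(1000, 2))   # draws from the true distribution

# Mean imputation can score well on RMSE yet badly here; draws from the
# correct distribution score well on the distributional discrepancy.
assert feature_wise_discrepancy(X_true, X_sample_imp) < \
       feature_wise_discrepancy(X_true, X_mean_imp)
```

The design point is that a good imputation should be judged by whether the imputed dataset looks like a plausible draw from the data distribution, not only by how close each imputed value is to the (unknown) truth.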

* 17 pages, 10 figures, 30 supplementary pages 

Machine learning for COVID-19 detection and prognostication using chest radiographs and CT scans: a systematic methodological review

Sep 01, 2020
Michael Roberts, Derek Driggs, Matthew Thorpe, Julian Gilbey, Michael Yeung, Stephan Ursprung, Angelica I. Aviles-Rivero, Christian Etmann, Cathal McCague, Lucian Beer, Jonathan R. Weir-McCall, Zhongzhao Teng, James H. F. Rudd, Evis Sala, Carola-Bibiane Schönlieb

Figures 1–4 for Machine learning for COVID-19 detection and prognostication using chest radiographs and CT scans: a systematic methodological review

Background: Machine learning methods offer great potential for fast and accurate detection and prognostication of COVID-19 from standard-of-care chest radiographs (CXR) and computed tomography (CT) images. In this systematic review, we critically evaluate the machine learning methodologies employed in the rapidly growing literature. Methods: We searched EMBASE via OVID, MEDLINE via PubMed, bioRxiv, medRxiv and arXiv for published papers and preprints uploaded from Jan 1, 2020 to June 24, 2020. Studies that consider machine learning models for the diagnosis or prognosis of COVID-19 from CXR or CT images were included. A methodological quality review of each paper was performed against established benchmarks to ensure the review focusses only on high-quality, reproducible papers. This study is registered with PROSPERO [CRD42020188887]. Interpretation: Our review finds that none of the developed models discussed are of potential clinical use due to methodological flaws and underlying biases. This is a major weakness, given the urgency with which validated COVID-19 models are needed. Typically, we find that the documentation of a model's development is insufficient to make the results reproducible; of 168 candidate papers, only 29 are deemed reproducible and subsequently considered in this review. We therefore encourage authors to use established machine learning checklists to ensure sufficient documentation is made available, and to follow the PROBAST (prediction model risk of bias assessment tool) framework to determine the underlying biases in their model development process and to mitigate these where possible. This is key to the safe clinical implementation that is urgently needed.

* 25 pages, 3 figures, 2 tables 