In cardiac magnetic resonance (CMR) imaging, a 3D high-resolution segmentation of the heart is essential for a detailed description of its anatomical structures. However, due to limits on acquisition duration and respiratory/cardiac motion, stacks of multi-slice 2D images are acquired in clinical routine. Segmentation of these images provides a low-resolution representation of cardiac anatomy, which may contain artefacts caused by motion. Here we propose a novel latent optimisation framework that jointly performs motion correction and super-resolution for cardiac image segmentations. Given a low-resolution segmentation as input, the framework accounts for inter-slice motion in cardiac MR imaging and super-resolves the input into a high-resolution segmentation consistent with it. A multi-view loss is incorporated to leverage information from both the short-axis and long-axis views of cardiac imaging. To solve the inverse problem, iterative optimisation is performed in a latent space, which ensures anatomical plausibility. This alleviates the need for paired low-resolution and high-resolution images for supervised learning. Experiments on two cardiac MR datasets show that the proposed framework achieves high performance, comparable to state-of-the-art super-resolution approaches, with better cross-domain generalisability and anatomical plausibility.
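The core idea of latent optimisation for an inverse problem can be illustrated with a deliberately simplified sketch. The abstract does not specify the decoder or optimiser, so everything below is a hypothetical stand-in: a fixed linear "decoder" replaces the learned generative model, and a simple averaging operator replaces the slice-acquisition model; only the latent code is optimised, so every iterate stays on the decoder's output manifold.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "decoder": a fixed linear map from an 8-D latent space to a
# 64-voxel high-resolution segmentation (a real decoder is a deep network).
D = rng.standard_normal((64, 8))
# Downsampling operator mapping high resolution (64) to low resolution (16)
# by averaging groups of four voxels.
A = np.kron(np.eye(16), np.ones((1, 4)) / 4.0)

# Low-resolution observation generated from a ground-truth latent code.
z_true = rng.standard_normal(8)
y = A @ D @ z_true

# Latent optimisation: gradient descent on ||A D z - y||^2 over z only.
z = np.zeros(8)
lr = 0.05
for _ in range(2000):
    residual = A @ D @ z - y
    grad = D.T @ A.T @ residual
    z -= lr * grad

x_sr = D @ z  # super-resolved high-resolution output
print(np.linalg.norm(A @ x_sr - y))  # consistency with the low-res input
```

The key property this toy example shares with the framework is that the data-consistency term is enforced through the latent code rather than in pixel space, so no paired low/high-resolution training data is needed at inference time.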
Deep learning based approaches have proven promising for modelling omics data. However, one of their current limitations compared to statistical and traditional machine learning approaches is the lack of explainability, which not only reduces reliability but also limits the potential for acquiring novel knowledge by unpicking these "black-box" models. Here we present XOmiVAE, a novel interpretable deep learning model for cancer classification using high-dimensional omics data. XOmiVAE is able to obtain the contribution of each gene and latent dimension to a specific prediction, as well as the correlation between genes and latent dimensions. We also show that XOmiVAE can explain both the supervised classification and the unsupervised clustering results of the deep learning network. To the best of our knowledge, XOmiVAE is one of the first activation-based deep learning interpretation methods to explain novel clusters generated by variational autoencoders. The results generated by XOmiVAE were validated against both biomedical knowledge and the performance of downstream tasks. XOmiVAE's explanations of deep learning based cancer classification and clustering align with current domain knowledge, including biological annotation and the literature, which shows great potential for novel biomedical knowledge discovery from deep learning models. The top genes and dimensions selected by XOmiVAE showed a significant influence on cancer classification performance. Additionally, we outline important steps to consider when interpreting deep learning models for tumour classification. For instance, we demonstrate the importance of choosing background samples that make biological sense, and the limitations of connection-weight-based methods for explaining latent dimensions.
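The notion of a per-gene contribution to a latent dimension, measured against a background cohort, can be sketched with a minimal attribution rule. The abstract does not disclose XOmiVAE's exact attribution algorithm, so this example uses the linear case (where `weight × (input − background mean)` is the exact SHAP value) on a toy linear stand-in for the VAE encoder; all names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

n_genes, n_latent = 20, 4
# Toy linear "encoder" standing in for the VAE's mean network.
W = rng.standard_normal((n_latent, n_genes))

# Background cohort (e.g. biologically sensible reference samples)
# and one tumour sample whose latent activations we want to explain.
background = rng.standard_normal((50, n_genes))
sample = rng.standard_normal(n_genes)

baseline = background.mean(axis=0)

# Linear attribution rule (exact SHAP value for a linear model):
# contribution of gene j to latent dim i = W[i, j] * (x_j - baseline_j).
contrib = W * (sample - baseline)  # shape (n_latent, n_genes)

# Completeness: per-dimension contributions sum to the difference between
# the sample's latent activation and the baseline's.
diff = W @ sample - W @ baseline
print(np.allclose(contrib.sum(axis=1), diff))  # True

# Rank the genes driving latent dimension 0 for this sample.
top_genes = np.argsort(-np.abs(contrib[0]))[:5]
```

The completeness check above is why the choice of background matters: change `background` and every contribution value changes with it, which is exactly the point the abstract makes about biologically sensible background samples.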
This paper presents an approach to improving the forecasts of computational fluid dynamics (CFD) simulations of urban air pollution using deep learning, and more specifically adversarial training. The adversarial approach aims to reduce the divergence of the forecasts from the underlying physical model. Our two-step method integrates a Principal Component Analysis (PCA) based adversarial autoencoder (PC-AAE) with an adversarially trained Long Short-Term Memory (LSTM) network. Once a reduced-order model (ROM) of the CFD solution is obtained via PCA, an adversarial autoencoder is applied to the principal component time series. Subsequently, an LSTM is adversarially trained on the latent space produced by the PC-AAE to make forecasts. Once trained, the adversarially trained LSTM outperforms an LSTM trained in the classical way. The study area is a busy traffic junction in South London, for which three-dimensional velocity vectors are modelled.
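The first step of the pipeline, building a PCA-based reduced-order model of a high-dimensional solution field and forecasting in the reduced space, can be sketched as follows. The snapshot data, the number of retained components, and the forecaster are all illustrative: a one-step least-squares map stands in for the adversarially trained LSTM, which the abstract does not specify in enough detail to reproduce.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for CFD snapshots: 200 time steps of a 500-point
# velocity field driven by three smooth temporal modes plus noise.
t = np.linspace(0, 8 * np.pi, 200)
modes = rng.standard_normal((3, 500))
coeffs = np.stack([np.sin(t), np.cos(0.5 * t), np.sin(0.25 * t)])
X = coeffs.T @ modes + 0.01 * rng.standard_normal((200, 500))

# PCA via SVD on mean-centred snapshots: the reduced-order model (ROM).
mean = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
k = 3                      # retained principal components
Z = (X - mean) @ Vt[:k].T  # latent (PC) time series, shape (200, k)

# Reconstruction from k components captures almost all of the variance.
X_rec = Z @ Vt[:k] + mean
rel_err = np.linalg.norm(X - X_rec) / np.linalg.norm(X)
print(rel_err < 0.05)  # True

# Forecast stand-in for the adversarially trained LSTM: a one-step linear
# map fitted on consecutive latent states by least squares.
A_lat, *_ = np.linalg.lstsq(Z[:-1], Z[1:], rcond=None)
z_next = Z[-1] @ A_lat           # predicted next latent state
x_next = z_next @ Vt[:k] + mean  # lifted back to the full field
```

Forecasting in the k-dimensional latent space rather than the 500-dimensional field is what makes the ROM cheap; the adversarial training described in the abstract then penalises latent forecasts that drift away from trajectories the physical model could produce.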
Epidemiological models play a key role in understanding and responding to the COVID-19 pandemic. To build these models, scientists need to understand the contributing factors and their relative importance. A large strand of literature has identified the importance of airflow in mitigating droplet and far-field aerosol transmission risks. However, the specific factors contributing to higher or lower contamination in various settings have not been clearly defined and quantified. As part of the MOAI project (https://moaiapp.com), we are developing a privacy-preserving test and trace app that enables infection cluster investigators to get in touch with patients without having to know their identity. This approach lets users join the fight against the pandemic by contributing additional information in the form of anonymous research questionnaires. We first describe how the questionnaire was designed and how synthetic data were generated, based on a review we carried out of the latest available literature. We then present a model to evaluate a user's risk exposure in a given setting. We finally propose a temporal extension of the model to evaluate a user's risk exposure over time.
A small change in design semantics may affect a user's satisfaction with a product. In this work, we propose a deep generative transformation model that modifies the design semantics of a given product image from personalised brain activity via adversarial learning. We attempt to accomplish two goals in this synthesis: 1) synthesising a product image with new features corresponding to the EEG signal; 2) maintaining the image features that are irrelevant to the EEG signal. Building on the idea of StarGAN, the model is designed to synthesise products with preferred design semantics (colour and shape) via adversarial learning from brain activity. To verify the proposed cognitive transformation model, we present a case study in which shoes with different design semantics are generated from recorded EEG signals. The results serve as a proof of concept that our framework has the potential to synthesise product semantics from brain activity.
The outbreak of the coronavirus disease 2019 (COVID-19) has now spread throughout the globe, infecting over 100 million people and causing over 2.2 million deaths. Thus, there is an urgent need to study the dynamics of epidemiological models in order to gain a better understanding of how such diseases spread. While epidemiological models can be computationally expensive, recent advances in machine learning have given rise to neural networks that can learn and predict complex dynamics at reduced computational cost. Here we introduce two digital twins of an SEIRS model applied to an idealised town. The SEIRS model has been modified to account for spatial variation and, where possible, its parameters are based on official virus-spreading data from the UK. We compare predictions from a data-corrected Bidirectional Long Short-Term Memory network and a predictive Generative Adversarial Network. The predictions given by these two frameworks are accurate when compared to the original SEIRS model data. Additionally, the frameworks are data-agnostic and could be applied to towns, idealised or real, in the UK or other countries. More compartments could also be included in the SEIRS model to study more realistic epidemiological behaviour.
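The SEIRS compartmental model that the digital twins emulate has a standard form, which a short forward-Euler integration makes concrete. This is a basic non-spatial sketch; the parameter values below are illustrative, not the paper's UK-calibrated or spatially varying ones.

```python
import numpy as np

# Forward-Euler integration of a basic (non-spatial) SEIRS model.
# beta: transmission rate; sigma: incubation rate (E -> I);
# gamma: recovery rate (I -> R); omega: waning-immunity rate (R -> S).
beta, sigma, gamma, omega = 0.4, 1 / 5.2, 1 / 7.0, 1 / 90.0
N = 10_000.0                      # idealised town population
S, E, I, R = N - 10.0, 0.0, 10.0, 0.0
dt, steps = 0.1, 2000             # 200 days in 0.1-day steps

history = []
for _ in range(steps):
    new_exposed = beta * S * I / N
    dS = -new_exposed + omega * R  # waning immunity re-enters S
    dE = new_exposed - sigma * E   # incubation: E -> I at rate sigma
    dI = sigma * E - gamma * I     # recovery: I -> R at rate gamma
    dR = gamma * I - omega * R
    S, E, I, R = S + dt * dS, E + dt * dE, I + dt * dI, R + dt * dR
    history.append((S, E, I, R))

# The four compartments always sum to the (closed) population.
print(abs(S + E + I + R - N) < 1e-6)  # True
```

A data-driven twin such as the Bidirectional LSTM or GAN in the abstract would be trained on trajectories like `history`, replacing the step-by-step integration with a learned, much cheaper predictor.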
High-dimensional omics data contain intrinsic biomedical information that is crucial for personalised medicine. Nevertheless, this information is challenging to capture from genome-wide data due to the large number of molecular features and the small number of available samples, a problem known as "the curse of dimensionality" in machine learning. To tackle this problem and pave the way for machine learning aided precision medicine, we propose a unified multi-task deep learning framework called OmiEmbed, which captures a holistic and relatively precise phenotype profile from high-dimensional omics data. The deep embedding module of OmiEmbed learns an omics embedding that maps multiple omics data types into a lower-dimensional latent space. Based on this new representation of multi-omics data, the different downstream networks of OmiEmbed are trained together with a multi-task strategy to predict a comprehensive phenotype profile for each sample. We trained the model on two publicly available omics datasets to evaluate the performance of OmiEmbed. The model achieved promising results on multiple downstream tasks, including dimensionality reduction, tumour type classification, multi-omics integration, demographic and clinical feature reconstruction, and survival prediction. Instead of training and applying each downstream network separately, the multi-task strategy combines them and conducts multiple tasks simultaneously and efficiently, achieving better performance than training each task individually. OmiEmbed is a powerful tool for accurately capturing comprehensive phenotypic information from high-dimensional omics data and has great potential to facilitate more accurate and personalised clinical decision making.
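The shared-embedding multi-task idea can be sketched in a few lines: one encoder maps the high-dimensional input to a latent space, several task heads read from that same embedding, and a single weighted loss trains everything jointly. The linear encoder, the two heads, and the task weights below are all illustrative stand-ins for OmiEmbed's actual networks, which the abstract does not detail.

```python
import numpy as np

rng = np.random.default_rng(3)

n_samples, n_features, n_latent = 32, 100, 8
X = rng.standard_normal((n_samples, n_features))

# Shared "deep embedding" stand-in: one linear projection to latent space.
W_embed = rng.standard_normal((n_features, n_latent)) / np.sqrt(n_features)
Z = X @ W_embed  # low-dimensional omics embedding shared by all tasks

# Two toy downstream heads on the shared embedding:
# a 5-class tumour-type classifier and a scalar (e.g. age) regressor.
W_cls = rng.standard_normal((n_latent, 5))
W_reg = rng.standard_normal((n_latent, 1))
y_cls = rng.integers(0, 5, n_samples)  # synthetic labels
y_reg = rng.standard_normal(n_samples)

# Classification head: softmax cross-entropy.
logits = Z @ W_cls
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)
loss_cls = -np.log(probs[np.arange(n_samples), y_cls]).mean()

# Regression head: mean squared error.
loss_reg = (((Z @ W_reg).ravel() - y_reg) ** 2).mean()

# Multi-task objective: one weighted sum optimised jointly, so gradients
# from every task shape the same shared embedding.
task_weights = {"classification": 1.0, "regression": 0.5}
loss = (task_weights["classification"] * loss_cls
        + task_weights["regression"] * loss_reg)
print(loss > 0)  # True
```

Because all heads back-propagate into the same `W_embed`, each task acts as a regulariser for the others, which is one plausible reason the joint strategy outperforms training each task individually.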
This paper presents an approach to improving forecasts from computational fluid dynamics (CFD) simulations of air pollution using deep learning. Our method, which integrates Principal Component Analysis (PCA) and adversarial training, improves the forecast skill of reduced-order models obtained from the original model solution. Once the reduced-order model (ROM) is obtained via PCA, a Long Short-Term Memory network (LSTM) is adversarially trained on the ROM to make forecasts. Once trained, the adversarially trained LSTM outperforms an LSTM trained in the classical way. The study area is a busy traffic junction in London, for which velocities and a concentration tracer are modelled. The adversarially trained LSTM is applied to the ROM in order to produce faster forecasts of the air pollution tracer.
Along with the blooming of AI and machine learning based applications and services, data privacy and security have become a critical challenge. Conventionally, data is collected and aggregated in a data centre, on which machine learning models are trained. This centralised approach carries severe privacy risks of personal data leakage, misuse, and abuse. Furthermore, in the era of the Internet of Things and big data, in which data is essentially distributed, transferring vast amounts of data to a data centre for processing is a cumbersome solution. This is not only because of the difficulties in transferring and sharing data across data sources, but also because of the challenges of complying with rigorous data protection regulations and complicated administrative procedures, such as those of the EU General Data Protection Regulation (GDPR). In this respect, federated learning (FL) emerges as a prospective solution that facilitates distributed collaborative learning without disclosing original training data, whilst naturally complying with the GDPR. However, recent research has demonstrated that retaining data and computation on-device in FL is not sufficient to guarantee privacy: the ML model parameters exchanged between parties in an FL system still carry sensitive information, which can be exploited in privacy attacks. Therefore, FL systems should be empowered with efficient privacy-preserving techniques to comply with the GDPR. This article systematically surveys the state-of-the-art privacy-preserving techniques that can be employed in FL, and how these techniques mitigate data security and privacy risks. Furthermore, we provide insights into the challenges, along with prospective approaches following GDPR regulatory guidelines, that an FL system should address in order to comply with the GDPR.
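The basic FL aggregation step that the survey builds on can be sketched with federated averaging (FedAvg): clients train locally and share only model parameters, which the server combines weighted by local dataset size. The client weights below are random stand-ins for locally trained parameters; this is a single aggregation round, not a full training loop.

```python
import numpy as np

rng = np.random.default_rng(4)

# Federated averaging (FedAvg): raw data never leaves the clients;
# only locally updated model parameters are shared with the server.
n_clients = 5
client_sizes = np.array([100, 250, 50, 400, 200])  # local dataset sizes

# Stand-in for locally updated model parameters (one weight vector each).
client_weights = [rng.standard_normal(10) for _ in range(n_clients)]

# Server aggregation: dataset-size-weighted average of client parameters.
total = client_sizes.sum()
global_weights = sum(
    (size / total) * w for size, w in zip(client_sizes, client_weights)
)
print(global_weights.shape)  # (10,)
```

As the abstract notes, this alone is not a privacy guarantee: the shared `client_weights` can still leak information about local data, which is why the surveyed techniques (e.g. differentially private noise addition or secure aggregation) are layered on top of the exchange.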