Nikolay Burlutskiy

Interpretable histopathology-based prediction of disease relevant features in Inflammatory Bowel Disease biopsies using weakly-supervised deep learning

Mar 20, 2023
Ricardo Mokhtari, Azam Hamidinekoo, Daniel Sutton, Arthur Lewis, Bastian Angermann, Ulf Gehrmann, Pal Lundin, Hibret Adissu, Junmei Cairns, Jessica Neisen, Emon Khan, Daniel Marks, Nia Khachapuridze, Talha Qaiser, Nikolay Burlutskiy

Crohn's Disease (CD) and Ulcerative Colitis (UC) are the two main types of Inflammatory Bowel Disease (IBD). We developed deep learning models to identify histological disease features for both CD and UC using only endoscopic labels. We explored fine-tuning and end-to-end training of two state-of-the-art self-supervised models for predicting three endoscopic categories: (i) CD vs UC (AUC=0.87), (ii) normal vs lesional (AUC=0.81), and (iii) low vs high disease severity score (AUC=0.80). We produced visual attention maps to interpret what the models learned and validated them with the support of a pathologist, observing a strong association between the models' predictions and histopathological inflammatory features of the disease. Additionally, we identified several cases where the model predicted endoscopically normal samples as lesional; on pathologist review, these predictions proved correct at the microscopic level. This tendency of histological presentation to be more severe than endoscopic presentation has previously been reported in the literature. In parallel, we utilised a model trained on the Colon Nuclei Identification and Counting (CoNIC) dataset to predict and explore six cell populations. We observed a correlation between biopsy areas enriched with the predicted immune cells and the pathologist's feedback on the attention maps. Finally, we identified several cell-level features indicative of disease severity in CD and UC. These models can enhance our understanding of the pathology underlying IBD and can shape patient-stratification strategies in clinical trials.

* Accepted at Medical Imaging with Deep Learning (MIDL 2023)
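
The weakly-supervised setup described above is commonly realised with attention-based multiple-instance learning over patch embeddings, which yields both a slide-level prediction and a per-patch attention map. Below is a minimal PyTorch sketch of that general technique (gated attention pooling in the style of Ilse et al., 2018), not the authors' implementation; the embedding dimension, hidden size and class count are illustrative assumptions.

import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    # Gated attention pooling over patch embeddings (Ilse et al., 2018).
    # Produces a slide-level logit plus per-patch attention weights that
    # can be projected back onto the slide as an attention heatmap.
    def __init__(self, embed_dim=384, hidden_dim=128, n_classes=2):  # assumed sizes
        super().__init__()
        self.attn_V = nn.Sequential(nn.Linear(embed_dim, hidden_dim), nn.Tanh())
        self.attn_U = nn.Sequential(nn.Linear(embed_dim, hidden_dim), nn.Sigmoid())
        self.attn_w = nn.Linear(hidden_dim, 1)
        self.classifier = nn.Linear(embed_dim, n_classes)

    def forward(self, patches):  # patches: (n_patches, embed_dim)
        a = self.attn_w(self.attn_V(patches) * self.attn_U(patches))  # (n, 1)
        a = torch.softmax(a, dim=0)  # attention over patches in the slide
        slide_embedding = (a * patches).sum(dim=0)  # weighted pooling
        return self.classifier(slide_embedding), a.squeeze(-1)

# Patch embeddings would come from the self-supervised encoder; random here.
model = AttentionMIL()
logits, attention = model(torch.randn(500, 384))

In practice the attention weights would be mapped back to patch coordinates to render the visual attention maps that the pathologist reviewed.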

Coupling Deep Imputation with Multitask Learning for Downstream Tasks on Genomics Data

Apr 28, 2022
Sophie Peacock, Etai Jacob, Nikolay Burlutskiy

Genomics data such as RNA gene expression, methylation and microRNA expression are valuable sources of information for various clinical predictive tasks. For example, survival outcomes, cancer histology type and other patient-related information can be predicted using not only clinical data but molecular data as well. Moreover, using these data sources together, for example in multitask learning, can boost performance. In practice, however, many data points are missing, which significantly reduces the number of patients available when analysing full cases, i.e. patients for whom all modalities are present. In this paper we investigate how imputing missing values with deep learning, coupled with multitask learning, can help reach state-of-the-art performance using the combined genomics modalities RNA, microRNA and methylation. We propose a generalised deep imputation method to impute values where a patient has all modalities present except one. Interestingly, deep imputation alone outperformed multitask learning alone on the classification and regression tasks across most combinations of modalities. In contrast, when using all modalities for survival prediction, multitask learning alone outperformed deep imputation alone with statistical significance (adjusted p-value 0.03). Thus, the two approaches are complementary when optimising performance for downstream predictive tasks.

* Accepted at the International Joint Conference on Neural Networks (IJCNN) 2022
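
As a concrete illustration of the two components, the sketch below pairs a simple imputation network (predicting one missing modality from the observed ones) with a shared-trunk multitask network. It is a minimal PyTorch sketch under assumed feature dimensions and architecture, not the paper's generalised method; the names and sizes are illustrative.

import torch
import torch.nn as nn

# Illustrative feature dimensions; the paper's datasets differ.
RNA, MIRNA, METH = 1000, 300, 500

# Imputer: predict the one missing modality from the observed ones.
imputer = nn.Sequential(
    nn.Linear(RNA + MIRNA, 256), nn.ReLU(),
    nn.Linear(256, METH),
)

class MultitaskNet(nn.Module):
    # Shared trunk with one head per downstream task.
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(RNA + MIRNA + METH, 256), nn.ReLU())
        self.classify = nn.Linear(256, 2)  # e.g. a histology-type task
        self.regress = nn.Linear(256, 1)   # e.g. a continuous endpoint

    def forward(self, x):
        h = self.trunk(x)
        return self.classify(h), self.regress(h)

rna, mirna = torch.randn(8, RNA), torch.randn(8, MIRNA)
meth_hat = imputer(torch.cat([rna, mirna], dim=1))  # fill the missing modality
logits, y_hat = MultitaskNet()(torch.cat([rna, mirna, meth_hat], dim=1))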

Segmenting Potentially Cancerous Areas in Prostate Biopsies using Semi-Automatically Annotated Data

Apr 15, 2019
Nikolay Burlutskiy, Nicolas Pinchaud, Feng Gu, Daniel Hägg, Mats Andersson, Lars Björk, Kristian Eurén, Cristina Svensson, Lena Kajland Wilén, Martin Hedlund

Gleason grading, as specified in ISUP 2014, is the clinical standard for staging prostate cancer and the most important input to the treatment decision. However, the grading is subjective and suffers from high intra- and inter-observer variability. To improve consistency and objectivity in the grading, we introduced glandular tissue WithOut Basal cells (WOB) as the ground truth. The presence of basal cells is the most accepted biomarker for benign glandular tissue, and their absence is a strong indicator of acinar prostatic adenocarcinoma, the most common form of prostate cancer. Glandular tissue can be objectively assessed as WOB or not WOB using specific immunostaining for glandular tissue (Cytokeratin 8/18) and for basal cells (Cytokeratin 5/6 + p63). Moreover, WOB allowed us to develop a semi-automated data generation pipeline to speed up the tremendously time-consuming and expensive process of annotating whole slide images by pathologists. We generated 295 prostatectomy images exhaustively annotated with WOB. We then used our deep learning framework, which achieved the second-best reported score in the Camelyon17 Challenge, to train networks for segmenting WOB in needle biopsies. Evaluation of the model on 63 needle biopsies showed promising results, which were improved further by fine-tuning the model on 118 biopsies annotated with WOB, achieving an F1-score of 0.80 and a precision-recall AUC of 0.89 at the pixel level. We then compared the model against 17 biopsies annotated independently by three pathologists using only H&E staining; the comparison demonstrated that the model performed on a par with the pathologists. Finally, the model detected and accurately outlined existing WOB areas in two biopsies incorrectly annotated as entirely WOB-free by three pathologists, and in one further biopsy by two pathologists.

* Accepted as an oral presentation at Medical Imaging with Deep Learning (MIDL) 2019, July 2019, London, England
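
For reference, pixel-level metrics of the kind reported above can be computed as in the following sketch, assuming binary ground-truth masks and per-pixel probability maps; scikit-learn's average precision is used here as the standard summary of the precision-recall curve, and the threshold is an illustrative choice.

import numpy as np
from sklearn.metrics import f1_score, average_precision_score

def pixel_metrics(gt_mask, prob_map, threshold=0.5):
    # Pixel-level F1 at a fixed threshold, plus PR-AUC (average precision).
    # gt_mask: binary ground-truth mask, prob_map: per-pixel probabilities,
    # both of shape (H, W).
    y_true = gt_mask.reshape(-1).astype(int)
    y_prob = prob_map.reshape(-1)
    f1 = f1_score(y_true, (y_prob >= threshold).astype(int))
    pr_auc = average_precision_score(y_true, y_prob)
    return f1, pr_auc

# Toy example with a random mask and a correlated probability map.
rng = np.random.default_rng(0)
gt = (rng.random((256, 256)) > 0.7).astype(int)
prob = np.clip(gt * 0.6 + rng.random((256, 256)) * 0.4, 0, 1)
print(pixel_metrics(gt, prob))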

A Deep Learning Framework for Automatic Diagnosis in Lung Cancer

Jul 27, 2018
Nikolay Burlutskiy, Feng Gu, Lena Kajland Wilen, Max Backman, Patrick Micke

We developed a deep learning framework that automatically identifies and segments lung cancer areas in patients' tissue specimens. The study was based on a cohort of lung cancer patients operated on at Uppsala University Hospital. The tissues were reviewed by lung pathologists and the cores were then compiled into tissue microarrays (TMAs). For the experiments, hematoxylin-eosin-stained slides from 712 patients were scanned and manually annotated, and these scans and annotations were used to train the framework's segmentation models. The framework was evaluated on fully annotated TMA cores from 178 patients, reaching a pixel-wise precision of 0.80 and a recall of 0.86. Finally, publicly available Stanford TMA cores were used to qualitatively demonstrate the framework's high performance.

* Presented as a poster at Medical Imaging with Deep Learning (MIDL) in Amsterdam, 4-6 July 2018 (http://midl.amsterdam/)
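
A routine step in this kind of pipeline is cutting each scanned core and its annotation mask into aligned patches for training. The sketch below shows one plausible version of that step; the patch size, stride and annotation threshold are illustrative assumptions, not values from the paper.

import numpy as np

def tile_core(image, mask, patch=512, stride=512, min_annotated=0.05):
    # Cut a scanned core (H, W, 3) and its binary annotation mask (H, W)
    # into aligned patches, keeping only patches with enough annotation.
    pairs = []
    h, w = mask.shape
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            m = mask[y:y + patch, x:x + patch]
            if m.mean() >= min_annotated:  # skip mostly unannotated tiles
                pairs.append((image[y:y + patch, x:x + patch], m))
    return pairs

# Toy usage with a synthetic core and annotation.
core = np.zeros((2048, 2048, 3), dtype=np.uint8)
ann = np.zeros((2048, 2048), dtype=np.uint8)
ann[512:1024, 512:1024] = 1
patches = tile_core(core, ann)  # list of (image_patch, mask_patch) pairs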

Multi-Resolution Networks for Semantic Segmentation in Whole Slide Images

Jul 25, 2018
Feng Gu, Nikolay Burlutskiy, Mats Andersson, Lena Kajland Wilen

Digital pathology provides an excellent opportunity to apply fully convolutional networks (FCNs) to tasks such as semantic segmentation of whole slide images (WSIs). However, standard FCNs face challenges with the multiple resolutions inherent in the pyramid arrangement of WSIs. As a result, networks specifically designed to learn and aggregate information at different resolution levels are desirable. In this paper, we propose two novel multi-resolution networks based on the popular U-Net architecture, which are evaluated on a benchmark dataset for binary semantic segmentation in WSIs. The proposed methods outperform the U-Net, demonstrating superior learning and generalization capabilities.

* Accepted at the MICCAI COMPAY 2018 Workshop
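
One plausible way to combine resolutions in a U-Net-style network is to run a separate encoder per magnification and fuse the features before a shared decoder. The following PyTorch sketch illustrates that general idea with a single-level toy network; it is not one of the two architectures proposed in the paper, and all layer sizes are illustrative.

import torch
import torch.nn as nn

def block(cin, cout):
    # Two 3x3 convolutions with ReLU, the basic U-Net building block.
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU())

class MultiResUNet(nn.Module):
    # Two encoders, one per magnification, fused before a shared decoder.
    def __init__(self, n_classes=2):
        super().__init__()
        self.enc_hi = block(3, 32)   # high-resolution patch
        self.enc_lo = block(3, 32)   # co-centred low-resolution context
        self.pool = nn.MaxPool2d(2)
        self.mid = block(64, 64)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = block(64, 32)     # takes the skip connection from enc_hi
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x_hi, x_lo):   # same spatial size after resizing
        h = self.enc_hi(x_hi)
        l = self.enc_lo(x_lo)
        m = self.mid(self.pool(torch.cat([h, l], dim=1)))  # fused bottleneck
        d = self.dec(torch.cat([self.up(m), h], dim=1))    # decode with skip
        return self.head(d)

logits = MultiResUNet()(torch.randn(1, 3, 128, 128), torch.randn(1, 3, 128, 128))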