Fayyaz Minhas

Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK

A Fully Automated and Explainable Algorithm for the Prediction of Malignant Transformation in Oral Epithelial Dysplasia

Jul 06, 2023
Adam J Shephard, Raja Muhammad Saad Bashir, Hanya Mahmood, Mostafa Jahanifar, Fayyaz Minhas, Shan E Ahmed Raza, Kris D McCombe, Stephanie G Craig, Jacqueline James, Jill Brooks, Paul Nankivell, Hisham Mehanna, Syed Ali Khurram, Nasir M Rajpoot

Oral epithelial dysplasia (OED) is a premalignant histopathological diagnosis given to lesions of the oral cavity. Its grading suffers from significant inter-/intra-observer variability and does not reliably predict malignancy progression, potentially leading to suboptimal treatment decisions. To address this, we developed a novel artificial intelligence algorithm that assigns an Oral Malignant Transformation (OMT) risk score, based on histological patterns in Haematoxylin and Eosin stained whole slide images, to quantify the risk of OED progression. The algorithm is based on the detection and segmentation of nuclei within (and around) the epithelium using an in-house segmentation model. We then employed a shallow neural network fed with interpretable morphological and spatial features, emulating histological markers. We conducted internal cross-validation on our development cohort (Sheffield; n = 193 cases) followed by independent validation on two external cohorts (Birmingham and Belfast; n = 92 cases). The proposed OMTscore yields an AUROC of 0.74 in predicting whether an OED progresses to malignancy or not. Survival analyses showed the prognostic value of our OMTscore for predicting malignant transformation, when compared to the manually-assigned WHO and binary grades. Analysis of the correctly predicted cases elucidated the presence of peri-epithelial and epithelium-infiltrating lymphocytes in the most predictive patches of cases that transformed (p < 0.0001). This is the first study to propose a completely automated algorithm for predicting OED transformation based on interpretable nuclear features, whilst being validated on external datasets. The algorithm shows better-than-human-level performance for prediction of OED malignant transformation and offers a promising solution to the challenges of grading OED in routine clinical practice.
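
As a rough illustration of the scoring pipeline the abstract describes (interpretable nuclear features fed to a shallow network, with patch scores aggregated into a slide-level risk score), below is a minimal sketch. The feature dimensionality, network width, mean aggregation and all identifiers are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a shallow scoring network over interpretable nuclear
# features (e.g. mean nuclear area, eccentricity, local density).
# Feature extraction and aggregation details are assumptions, not the
# published OMTscore pipeline.
import numpy as np
import torch
import torch.nn as nn
from sklearn.metrics import roc_auc_score

class ShallowOMTScorer(nn.Module):
    def __init__(self, n_features: int, hidden: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # one scalar risk score per patch
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)

def slide_score(model: nn.Module, patch_features: torch.Tensor) -> float:
    """Aggregate patch-level scores into a slide-level risk score.
    Mean aggregation is an assumption for illustration."""
    with torch.no_grad():
        return model(patch_features).mean().item()

# Toy evaluation: AUROC of slide scores against transformation labels.
np.random.seed(0)
model = ShallowOMTScorer(n_features=8)
slides = [torch.randn(50, 8) for _ in range(20)]  # 20 slides, 50 patches each
labels = np.random.randint(0, 2, size=20)         # 1 = transformed to malignancy
scores = [slide_score(model, s) for s in slides]
print("AUROC:", roc_auc_score(labels, scores))
```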

Synthesis of Annotated Colorectal Cancer Tissue Images from Gland Layout

May 08, 2023
Srijay Deshpande, Fayyaz Minhas, Nasir Rajpoot

Generating pairs of realistic tissue images and their corresponding annotations is a challenging task in computational histopathology. Such synthetic images and annotations can be useful in the training and evaluation of algorithms in the domain of computational pathology. To address this, we present an interactive framework to generate pairs of realistic colorectal cancer histology images and corresponding tissue component masks from an input gland layout. The framework can generate qualitatively realistic tissue images that preserve morphological characteristics including stroma, goblet cells and glandular lumen. We show that the appearance of glands can be controlled by user inputs such as the number of glands, their locations and their sizes. We also validate the quality of the generated annotated pairs with the help of a gland segmentation algorithm.
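
A layout-conditioned generator of the kind described is typically built along pix2pix lines: a one-hot tissue-component mask in, an RGB tile out. The following is a minimal sketch under that assumption; the architecture, channel counts and component classes are illustrative, not the paper's network.

```python
# Minimal pix2pix-style generator sketch: tissue component mask in,
# RGB tissue image out. Layer sizes and component classes are
# illustrative assumptions.
import torch
import torch.nn as nn

class LayoutToImageGenerator(nn.Module):
    def __init__(self, n_components: int = 4):  # e.g. gland, lumen, stroma, background
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_components, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),  # RGB in [-1, 1]
        )

    def forward(self, layout_mask: torch.Tensor) -> torch.Tensor:
        # layout_mask: (B, n_components, H, W) one-hot tissue component mask
        return self.net(layout_mask)

mask = torch.zeros(1, 4, 128, 128)
mask[:, 0, 32:96, 32:96] = 1.0  # place a gland region in the layout
fake_image = LayoutToImageGenerator()(mask)
print(fake_image.shape)  # torch.Size([1, 3, 128, 128])
```

In a full framework, the same layout mask that conditions the generator doubles as the ground-truth annotation, which is what makes the output pairs directly usable for training segmentation models.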

CoNIC Challenge: Pushing the Frontiers of Nuclear Detection, Segmentation, Classification and Counting

Mar 14, 2023
Simon Graham, Quoc Dang Vu, Mostafa Jahanifar, Martin Weigert, Uwe Schmidt, Wenhua Zhang, Jun Zhang, Sen Yang, Jinxi Xiang, Xiyue Wang, Josef Lorenz Rumberger, Elias Baumann, Peter Hirsch, Lihao Liu, Chenyang Hong, Angelica I. Aviles-Rivero, Ayushi Jain, Heeyoung Ahn, Yiyu Hong, Hussam Azzuni, Min Xu, Mohammad Yaqub, Marie-Claire Blache, Benoît Piégu, Bertrand Vernay, Tim Scherr, Moritz Böhland, Katharina Löffler, Jiachen Li, Weiqin Ying, Chixin Wang, Dagmar Kainmueller, Carola-Bibiane Schönlieb, Shuolin Liu, Dhairya Talsania, Yughender Meda, Prakash Mishra, Muhammad Ridzuan, Oliver Neumann, Marcel P. Schilling, Markus Reischl, Ralf Mikut, Banban Huang, Hsiang-Chin Chien, Ching-Ping Wang, Chia-Yen Lee, Hong-Kun Lin, Zaiyi Liu, Xipeng Pan, Chu Han, Jijun Cheng, Muhammad Dawood, Srijay Deshpande, Raja Muhammad Saad Bashir, Adam Shephard, Pedro Costa, João D. Nunes, Aurélio Campilho, Jaime S. Cardoso, Hrishikesh P S, Densen Puthussery, Devika R G, Jiji C V, Ye Zhang, Zijie Fang, Zhifan Lin, Yongbing Zhang, Chunhui Lin, Liukun Zhang, Lijian Mao, Min Wu, Vi Thi-Tuong Vo, Soo-Hyung Kim, Taebum Lee, Satoshi Kondo, Satoshi Kasai, Pranay Dumbhare, Vedant Phuse, Yash Dubey, Ankush Jamthikar, Trinh Thi Le Vuong, Jin Tae Kwak, Dorsa Ziaei, Hyun Jung, Tianyi Miao, David Snead, Shan E Ahmed Raza, Fayyaz Minhas, Nasir M. Rajpoot

Nuclear detection, segmentation and morphometric profiling are essential in helping us further understand the relationship between histology and patient outcome. To drive innovation in this area, we setup a community-wide challenge using the largest available dataset of its kind to assess nuclear segmentation and cellular composition. Our challenge, named CoNIC, stimulated the development of reproducible algorithms for cellular recognition with real-time result inspection on public leaderboards. We conducted an extensive post-challenge analysis based on the top-performing models using 1,658 whole-slide images of colon tissue. With around 700 million detected nuclei per model, associated features were used for dysplasia grading and survival analysis, where we demonstrated that the challenge's improvement over the previous state-of-the-art led to significant boosts in downstream performance. Our findings also suggest that eosinophils and neutrophils play an important role in the tumour microevironment. We release challenge models and WSI-level results to foster the development of further methods for biomarker discovery.
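
One downstream use named in the abstract is feeding nucleus-level predictions into survival analysis. The sketch below shows one plausible version of that pattern: per-class composition fractions per slide, fitted with a Cox proportional hazards model. The class names, columns and synthetic data are assumptions for illustration; the challenge's actual analysis differs in its details.

```python
# Sketch: turning per-nucleus class predictions into slide-level
# composition features for survival analysis. Illustrative only; not
# the challenge's analysis code.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

CLASSES = ["epithelial", "lymphocyte", "neutrophil", "eosinophil", "plasma", "connective"]

def composition_features(nucleus_classes: list) -> dict:
    """Fraction of each cell type among all detected nuclei in one WSI."""
    n = len(nucleus_classes)
    return {c: nucleus_classes.count(c) / n for c in CLASSES}

rng = np.random.default_rng(0)
rows = []
for _ in range(100):  # 100 synthetic slides
    nuclei = list(rng.choice(CLASSES, size=500))
    feats = composition_features(nuclei)
    feats["time"] = rng.exponential(60)        # follow-up months (synthetic)
    feats["event"] = int(rng.random() < 0.4)   # progression/death observed
    rows.append(feats)

# Fractions sum to 1, so drop one class to avoid perfect collinearity.
df = pd.DataFrame(rows).drop(columns=["connective"])
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()
```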

MesoGraph: Automatic Profiling of Malignant Mesothelioma Subtypes from Histological Images

Feb 23, 2023
Mark Eastwood, Heba Sailem, Silviu Tudor, Xiaohong Gao, Judith Offman, Emmanouil Karteris, Angeles Montero Fernandez, Danny Jonigk, William Cookson, Miriam Moffatt, Sanjay Popat, Fayyaz Minhas, Jan Lukas Robertus

Malignant mesothelioma is classified into three histological subtypes, Epithelioid, Sarcomatoid and Biphasic, according to the relative proportions of epithelioid and sarcomatoid tumor cells present; Biphasic tumors display significant populations of both cell types. This subtyping is subjective, limited by current diagnostic guidelines, and can differ even between expert thoracic pathologists when characterising the continuum of relative proportions of epithelioid and sarcomatoid components using a three-class system. In this work, we develop a novel dual-task Graph Neural Network (GNN) architecture with a ranking loss to learn a model capable of scoring regions of tissue down to cellular resolution. This allows quantitative profiling of a tumor sample according to the aggregate sarcomatoid association score of all the cells in the sample. The proposed approach uses only core-level labels and frames the prediction task as a dual multiple instance learning (MIL) problem. Tissue is represented by a cell graph with both cell-level morphological and regional features. We use an external multi-centric test set from Mesobank, on which we demonstrate the predictive performance of our model. We validate our model predictions through an analysis of the typical morphological features of cells according to their predicted score, finding that some of the morphological differences identified by our model match known differences used by pathologists. We further show that the model score is predictive of patient survival with a hazard ratio of 2.30. The code for the proposed approach, along with the dataset, is available at: https://github.com/measty/MesoGraph.
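
A cell-graph GNN with core-level ranking supervision, as described above, might look roughly like the sketch below (using PyTorch Geometric). The feature dimensions, pooling choice and the pairing of cores in the ranking loss are illustrative assumptions; the actual MesoGraph model is in the linked repository.

```python
# Sketch of a cell-graph GNN that scores individual cells and pools them
# into a core-level sarcomatoid score, trained with a ranking loss between
# core pairs. Illustrative only; not the MesoGraph implementation.
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv, global_mean_pool
from torch_geometric.data import Data

class CellGraphScorer(nn.Module):
    def __init__(self, in_dim: int, hidden: int = 32):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x, edge_index, batch):
        h = torch.relu(self.conv1(x, edge_index))
        h = torch.relu(self.conv2(h, edge_index))
        cell_scores = self.head(h).squeeze(-1)                    # per-cell score
        core_score = global_mean_pool(cell_scores.unsqueeze(-1), batch)
        return cell_scores, core_score.squeeze(-1)                # both resolutions

def toy_graph(n_cells: int = 30, in_dim: int = 16) -> Data:
    x = torch.randn(n_cells, in_dim)                  # cell morphology features
    edge_index = torch.randint(0, n_cells, (2, 100))  # random cell adjacency
    return Data(x=x, edge_index=edge_index)

model = CellGraphScorer(in_dim=16)
g_sarc, g_epi = toy_graph(), toy_graph()
batch = torch.zeros(30, dtype=torch.long)  # all cells belong to one core
_, s_sarc = model(g_sarc.x, g_sarc.edge_index, batch)
_, s_epi = model(g_epi.x, g_epi.edge_index, batch)
# Rank the sarcomatoid-labelled core above the epithelioid-labelled core.
loss = nn.MarginRankingLoss(margin=0.5)(s_sarc, s_epi, torch.ones(1))
loss.backward()
```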

Nuclear Segmentation and Classification: On Color & Compression Generalization

Jan 09, 2023
Quoc Dang Vu, Robert Jewsbury, Simon Graham, Mostafa Jahanifar, Shan E Ahmed Raza, Fayyaz Minhas, Abhir Bhalerao, Nasir Rajpoot

Since the introduction of digital and computational pathology as a field, one of the major problems in the clinical application of algorithms has been the struggle to generalize well to examples outside the distribution of the training data. Existing work to address this in both pathology and natural images has focused almost exclusively on classification tasks. We explore and evaluate the robustness of the seven best-performing nuclear segmentation and classification models from the largest computational pathology challenge for this problem to date, the CoNIC challenge. We demonstrate that existing state-of-the-art (SoTA) models are robust to compression artifacts but suffer substantial performance reduction when subjected to shifts in the color domain. We find that using stain normalization to address the domain shift problem can be detrimental to model performance. On the other hand, neural style transfer is more consistent in improving test performance when presented with large color variations in the wild.
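
Two of the perturbation types studied above, compression artifacts and stain-color shifts, can be simulated as in the sketch below. The JPEG quality factor and the HED-space perturbation magnitude are illustrative choices, not the paper's evaluation protocol.

```python
# Sketch of two input perturbations: JPEG round-tripping to simulate
# compression artifacts, and a shift in stain (HED) color space to mimic
# stain variation. Magnitudes are illustrative assumptions.
import io
import numpy as np
from PIL import Image
from skimage.color import rgb2hed, hed2rgb

def jpeg_roundtrip(img: np.ndarray, quality: int = 40) -> np.ndarray:
    """Simulate compression artifacts by encoding/decoding as JPEG."""
    buf = io.BytesIO()
    Image.fromarray(img).save(buf, format="JPEG", quality=quality)
    return np.array(Image.open(buf))

def hed_shift(img: np.ndarray, sigma: float = 0.05) -> np.ndarray:
    """Perturb Haematoxylin/Eosin/DAB channels to mimic stain variation."""
    hed = rgb2hed(img / 255.0)
    hed += np.random.uniform(-sigma, sigma, size=3)  # per-channel shift
    rgb = np.clip(hed2rgb(hed), 0, 1)
    return (rgb * 255).astype(np.uint8)

tile = np.random.randint(0, 255, (256, 256, 3), dtype=np.uint8)
compressed = jpeg_roundtrip(tile)
color_shifted = hed_shift(tile)
# A robustness study compares model predictions on `tile` against
# predictions on each perturbed copy.
```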

* Oral presentation at MICCAI MLMI 2022, 7 pages, 6 figures 

SynCLay: Interactive Synthesis of Histology Images from Bespoke Cellular Layouts

Dec 28, 2022
Srijay Deshpande, Muhammad Dawood, Fayyaz Minhas, Nasir Rajpoot

Automated synthesis of histology images has several potential applications in computational pathology. However, no existing method can generate realistic tissue images with a bespoke cellular layout or user-defined histology parameters. In this work, we propose a novel framework called SynCLay (Synthesis from Cellular Layouts) that can construct realistic and high-quality histology images from user-defined cellular layouts along with annotated cellular boundaries. Tissue image generation based on bespoke cellular layouts through the proposed framework allows users to generate different histological patterns from arbitrary topological arrangements of different types of cells. SynCLay-generated synthetic images can be helpful in studying the role of different types of cells present in the tumor microenvironment. Additionally, they can assist in balancing the distribution of cellular counts in tissue images for designing accurate cellular composition predictors by minimizing the effects of data imbalance. We train SynCLay in an adversarial manner and integrate a nuclear segmentation and classification model in its training to refine nuclear structures and generate nuclear masks in conjunction with synthetic images. During inference, we combine the model with another parametric model for generating colon images and associated cellular counts as annotations, given the grade of differentiation and the cell densities of different cells. We assess the generated images quantitatively and report feedback from trained pathologists who assigned realism scores to a set of images generated by the framework. The average realism score across all pathologists for synthetic images was as high as that for the real images. We also show that augmenting limited real data with the synthetic data generated by our framework can significantly boost prediction performance on the cellular composition prediction task.
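
The class-balancing use case mentioned above can be sketched as follows: compute per-type cell totals over the real data, then request synthetic patches for the under-represented types. The generate_patch function is a hypothetical placeholder standing in for a SynCLay-style layout-conditioned generator.

```python
# Sketch of balancing cellular counts with synthetic patches. The
# generator call is a hypothetical placeholder, not the SynCLay API.
import numpy as np

def generate_patch(target_counts: dict) -> np.ndarray:
    """Hypothetical stand-in for a layout-conditioned generator that
    renders a patch containing roughly `target_counts` cells per type."""
    return np.zeros((256, 256, 3), dtype=np.uint8)

def balance_with_synthetic(real_counts: list, cell_types: list) -> list:
    """Request synthetic patches that even out per-type cell totals."""
    totals = {c: sum(rc.get(c, 0) for rc in real_counts) for c in cell_types}
    target = max(totals.values())
    synthetic = []
    for c, total in totals.items():
        deficit = target - total
        while deficit > 0:
            n = min(deficit, 50)  # cells per synthetic patch (assumption)
            synthetic.append(generate_patch({c: n}))
            deficit -= n
    return synthetic

real = [{"lymphocyte": 30, "epithelial": 120}, {"epithelial": 200, "neutrophil": 5}]
extra = balance_with_synthetic(real, ["epithelial", "lymphocyte", "neutrophil"])
print(len(extra), "synthetic patches requested")
```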

One Model is All You Need: Multi-Task Learning Enables Simultaneous Histology Image Segmentation and Classification

Feb 28, 2022
Simon Graham, Quoc Dang Vu, Mostafa Jahanifar, Fayyaz Minhas, David Snead, Nasir Rajpoot

The recent surge in performance for image analysis of digitised pathology slides can largely be attributed to the advance of deep learning. Deep models can be used to initially localise various structures in the tissue and hence facilitate the extraction of interpretable features for biomarker discovery. However, these models are typically trained for a single task and therefore scale poorly as we wish to adapt the model to an increasing number of different tasks. Also, supervised deep learning models are very data-hungry and therefore rely on large amounts of training data to perform well. In this paper, we present a multi-task learning approach for segmentation and classification of nuclei, glands, lumen and different tissue regions that leverages data from multiple independent data sources. While ensuring that our tasks are aligned by the same tissue type and resolution, we enable simultaneous prediction with a single network. As a result of feature sharing, we also show that the learned representation can be used to improve downstream tasks, including nuclear classification and signet ring cell detection. As part of this work, we use a large dataset consisting of over 600K objects for segmentation and 440K patches for classification, and we make the data publicly available. We use our approach to process the colorectal subset of TCGA, consisting of 599 whole-slide images, to localise 377 million nuclei, 900K glands and 2.1 million lumen. We make this resource available to remove a major barrier in the development of explainable models for computational pathology.
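
The core idea, one shared encoder feeding several task heads, can be sketched as below. The backbone, head shapes and class counts are illustrative assumptions, not the published network.

```python
# Sketch of the multi-task idea: a shared encoder with separate heads
# for segmentation and patch classification. Illustrative dimensions.
import torch
import torch.nn as nn

class MultiTaskHistoNet(nn.Module):
    def __init__(self, n_seg_classes: int = 4, n_patch_classes: int = 5):
        super().__init__()
        self.encoder = nn.Sequential(            # shared representation
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(64, n_seg_classes, 1)  # per-pixel labels
        self.cls_head = nn.Sequential(                   # per-patch label
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, n_patch_classes)
        )

    def forward(self, x):
        feats = self.encoder(x)
        return self.seg_head(feats), self.cls_head(feats)

model = MultiTaskHistoNet()
seg_logits, cls_logits = model(torch.randn(2, 3, 128, 128))
print(seg_logits.shape, cls_logits.shape)  # (2, 4, 128, 128) (2, 5)
# Per-task losses (e.g. cross-entropy on each head) are summed so the
# shared encoder learns features useful across all tasks.
```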

Insights into performance evaluation of compound-protein interaction prediction methods

Jan 28, 2022
Adiba Yaseen, Imran Amin, Naeem Akhter, Asa Ben-Hur, Fayyaz Minhas

Motivation: Machine learning based prediction of compound-protein interactions (CPIs) is important for drug design, screening and repurposing studies, and can improve the efficiency and cost-effectiveness of wet-lab assays. Despite the publication of many research papers reporting CPI predictors in recent years, we have observed a number of fundamental issues in experiment design that lead to over-optimistic estimates of model performance. Results: In this paper, we analyze the impact of several important factors affecting the generalization performance of CPI predictors that are overlooked in existing work: (1) the similarity between training and test examples in cross-validation; (2) the strategy for generating negative examples, in the absence of experimentally verified negative examples; and (3) the choice of evaluation protocols and performance metrics and their alignment with real-world use of CPI predictors in screening large compound libraries. Using both an existing state-of-the-art method (CPI-NN) and a proposed kernel based approach, we have found that assessment of the predictive performance of CPI predictors requires careful control over the similarity between training and test examples. We also show that random pairing for generating synthetic negative examples for training and performance evaluation results in models with better generalization performance in comparison to more sophisticated strategies used in existing studies. Furthermore, we have found that our kernel based approach, despite its simple design, exceeds the prediction performance of CPI-NN. We have used the proposed model for compound screening of several proteins, including the SARS-CoV-2 Spike and Human ACE2 proteins, and found strong evidence in support of its top hits. Availability: Code and raw experimental results are available at https://github.com/adibayaseen/HKRCPI Contact: Fayyaz.minhas@warwick.ac.uk
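
The random-pairing strategy for synthetic negatives, which the paper finds generalizes better than more sophisticated schemes, can be sketched as follows; the identifiers are illustrative.

```python
# Sketch of random pairing for synthetic negative examples: sample
# (compound, protein) pairs uniformly at random, excluding known
# positives. Illustrative identifiers only.
import random

def random_pair_negatives(positives: set, compounds: list, proteins: list,
                          n: int, seed: int = 0) -> set:
    """Sample n (compound, protein) pairs not in the positive set."""
    rng = random.Random(seed)
    negatives = set()
    while len(negatives) < n:
        pair = (rng.choice(compounds), rng.choice(proteins))
        if pair not in positives:
            negatives.add(pair)
    return negatives

positives = {("C1", "P1"), ("C2", "P2")}
negs = random_pair_negatives(positives, ["C1", "C2", "C3"], ["P1", "P2", "P3"], n=4)
print(negs)
```

For the kernel based predictor, a common construction (which the paper's design may refine) is a pairwise product kernel, K((c, p), (c', p')) = K_comp(c, c') * K_prot(p, p'), so that off-the-shelf compound and protein kernels compose directly into a CPI kernel.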

* Supplementary information: Supplementary data files are available as part of the GitHub repository 

REET: Robustness Evaluation and Enhancement Toolbox for Computational Pathology

Jan 28, 2022
Alex Foote, Amina Asif, Nasir Rajpoot, Fayyaz Minhas

Motivation: Digitization of pathology laboratories through digital slide scanners and advances in deep learning approaches for objective histological assessment have resulted in rapid progress in the field of computational pathology (CPath), with wide-ranging applications in medical and pharmaceutical research as well as clinical workflows. However, the estimation of the robustness of CPath models to variations in input images is an open problem with a significant impact on the downstream practical applicability, deployment and acceptability of these approaches. Furthermore, the development of domain-specific strategies for enhancing the robustness of such models is of prime importance. Implementation and Availability: In this work, we propose the first domain-specific Robustness Evaluation and Enhancement Toolbox (REET) for computational pathology applications. It provides a suite of algorithmic strategies for enabling robustness assessment of predictive models with respect to specialized image transformations such as staining, compression, focusing, blurring, changes in spatial resolution, brightness variations and geometric changes, as well as pixel-level adversarial perturbations. Furthermore, REET also enables efficient and robust training of deep learning pipelines in computational pathology. REET is implemented in Python and is available at the following URL: https://github.com/alexjfoote/reetoolbox. Contact: Fayyaz.minhas@warwick.ac.uk
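
The evaluation pattern such a toolbox supports can be illustrated generically: score a model on clean tiles and again under each input transformation, then compare. The transformations and calling convention below are illustrative only, not REET's actual interface; see the linked repository for that.

```python
# Generic robustness-evaluation loop: accuracy on clean tiles versus
# accuracy under each transformation. Transform set and API are
# illustrative assumptions, not the REET interface.
import numpy as np

def brightness(img: np.ndarray, delta: int = 30) -> np.ndarray:
    return np.clip(img.astype(int) + delta, 0, 255).astype(np.uint8)

def half_resolution(img: np.ndarray, factor: int = 2) -> np.ndarray:
    """Crude spatial-resolution change via subsampling and repetition."""
    small = img[::factor, ::factor]
    return np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)

TRANSFORMS = {"brightness+30": brightness, "half_resolution": half_resolution}

def robustness_report(model, tiles, labels) -> dict:
    report = {"clean": np.mean([model(t) == y for t, y in zip(tiles, labels)])}
    for name, tf in TRANSFORMS.items():
        report[name] = np.mean([model(tf(t)) == y for t, y in zip(tiles, labels)])
    return report

# Dummy model and data to show the call pattern.
dummy_model = lambda img: int(img.mean() > 127)
tiles = [np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8) for _ in range(10)]
labels = [dummy_model(t) for t in tiles]
print(robustness_report(dummy_model, tiles, labels))
```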

Towards Launching AI Algorithms for Cellular Pathology into Clinical & Pharmaceutical Orbits

Dec 17, 2021
Amina Asif, Kashif Rajpoot, David Snead, Fayyaz Minhas, Nasir Rajpoot

Computational Pathology (CPath) is an emerging field concerned with the study of tissue pathology via computational algorithms for the processing and analysis of digitized high-resolution images of tissue slides. Recent deep learning based developments in CPath have successfully leveraged the sheer volume of raw pixel data in histology images for predicting target parameters in the domains of diagnostics, prognostics, treatment sensitivity and patient stratification -- heralding the promise of a new data-driven AI era for both histopathology and oncology. With data serving as the fuel and AI as the engine, CPath algorithms are poised to be ready for takeoff and eventual launch into clinical and pharmaceutical orbits. In this paper, we discuss the limitations of CPath and the associated challenges, to enable readers to distinguish hope from hype, and we provide directions for future research to overcome some of the major challenges faced by this budding field to enable its launch into the two orbits.
