Jeroen van der Laak

LYSTO: The Lymphocyte Assessment Hackathon and Benchmark Dataset

Jan 16, 2023
Yiping Jiao, Jeroen van der Laak, Shadi Albarqouni, Zhang Li, Tao Tan, Abhir Bhalerao, Jiabo Ma, Jiamei Sun, Johnathon Pocock, Josien P. W. Pluim, Navid Alemi Koohbanani, Raja Muhammad Saad Bashir, Shan E Ahmed Raza, Sibo Liu, Simon Graham, Suzanne Wetstein, Syed Ali Khurram, Thomas Watson, Nasir Rajpoot, Mitko Veta, Francesco Ciompi

Figures 1-4 for LYSTO: The Lymphocyte Assessment Hackathon and Benchmark Dataset

We introduce LYSTO, the Lymphocyte Assessment Hackathon, which was held in conjunction with the MICCAI 2019 Conference in Shenzhen (China). The competition required participants to automatically assess the number of lymphocytes, in particular T-cells, in histopathological images of colon, breast, and prostate cancer stained with CD3 and CD8 immunohistochemistry. Unlike other challenges in medical image analysis, LYSTO gave participants only a few hours to address this problem. In this paper, we describe the goal and the multi-phase organization of the hackathon, the proposed methods, and the on-site results. Additionally, we present post-competition results showing how the presented methods perform on an independent set of lung cancer slides, which was not part of the initial competition, as well as a comparison of lymphocyte assessment between the presented methods and a panel of pathologists. We show that some of the participants were capable of achieving pathologist-level performance at lymphocyte assessment. After the hackathon, LYSTO was kept as a lightweight plug-and-play benchmark dataset on the grand-challenge website, together with an automatic evaluation platform, and has since supported a number of research efforts on lymphocyte assessment in oncology. LYSTO will remain a long-lasting educational challenge for deep learning and digital pathology; it is available at https://lysto.grand-challenge.org/.

* will be submitted to IEEE-JBHI 
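At evaluation time, the LYSTO task reduces to producing a lymphocyte count per image patch. As an illustrative, hypothetical sketch (not the LYSTO evaluation code), a binary detection mask can be turned into a count with connected-component labelling:

```python
from collections import deque

def count_components(mask):
    """Count 4-connected components of 1s in a binary 2D grid
    (each component standing in for one detected cell)."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                count += 1                      # new component found
                q = deque([(i, j)])
                seen[i][j] = True
                while q:                        # flood-fill its pixels
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
    return count

mask = [
    [1, 1, 0, 0],
    [0, 0, 0, 1],
    [0, 1, 0, 1],
]
print(count_components(mask))  # → 3
```

Production pipelines would typically use an optimized labelling routine (e.g. `scipy.ndimage.label`) instead of this pure-Python flood fill.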

Domain adaptation strategies for cancer-independent detection of lymph node metastases

Jul 13, 2022
Péter Bándi, Maschenka Balkenhol, Marcory van Dijk, Bram van Ginneken, Jeroen van der Laak, Geert Litjens

Figures 1-4 for Domain adaptation strategies for cancer-independent detection of lymph node metastases

Recently, large, high-quality public datasets have led to the development of convolutional neural networks that can detect lymph node metastases of breast cancer at the level of expert pathologists. Many cancers, regardless of the site of origin, can metastasize to lymph nodes. However, collecting and annotating high-volume, high-quality datasets for every cancer type is challenging. In this paper, we investigate how to leverage existing high-quality datasets most efficiently in multi-task settings for closely related tasks. Specifically, we explore different training and domain adaptation strategies, including prevention of catastrophic forgetting, for colon and head-and-neck cancer metastasis detection in lymph nodes. Our results show state-of-the-art performance on both cancer metastasis detection tasks. Furthermore, we show the effectiveness of repeatedly adapting networks from one cancer type to another to obtain multi-task metastasis detection networks. Finally, we show that leveraging existing high-quality datasets can significantly boost performance on new target tasks and that catastrophic forgetting can be effectively mitigated using regularization.

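One common regularization-based remedy for catastrophic forgetting, mentioned only generically in the abstract, is to anchor the adapted weights to the source-domain weights rather than to zero (an L2-SP-style penalty). A minimal NumPy sketch under that assumption, not the paper's exact method:

```python
import numpy as np

def l2sp_penalty(weights, source_weights, lam=0.01):
    """L2-SP-style regularizer: penalize the squared distance of the
    adapted weights to the *source-domain* weights. Added to the
    target-task loss, it discourages drifting away from (and thus
    forgetting) the source task."""
    return lam * sum(
        float(np.sum((w - ws) ** 2)) for w, ws in zip(weights, source_weights)
    )

source = [np.ones((2, 2)), np.zeros(3)]          # pretrained (source) weights
adapted = [np.ones((2, 2)) + 0.5, np.zeros(3)]   # weights after adaptation

print(l2sp_penalty(source, source))   # → 0.0  (no drift, no penalty)
print(l2sp_penalty(adapted, source))  # → 0.01 (0.01 * 4 * 0.5**2)
```

The design choice is simply to change the anchor of weight decay from the origin to the source solution; alternatives such as elastic weight consolidation weight each parameter's drift by an importance estimate.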

Automated risk classification of colon biopsies based on semantic segmentation of histopathology images

Sep 16, 2021
John-Melle Bokhorst, Iris D. Nagtegaal, Filippo Fraggetta, Simona Vatrano, Wilma Mesker, Michael Vieth, Jeroen van der Laak, Francesco Ciompi

Figures 1-4 for Automated risk classification of colon biopsies based on semantic segmentation of histopathology images

Artificial Intelligence (AI) can potentially support histopathologists in the diagnosis of a broad spectrum of cancer types. In colorectal cancer (CRC), AI can alleviate the laborious task of characterization and reporting on resected biopsies, including polyps, the numbers of which are increasing as a result of CRC population screening programs ongoing in many countries around the globe. Here, we present an approach to address two major challenges in the automated assessment of CRC histopathology whole-slide images. First, we present an AI-based method to segment multiple tissue compartments in the H&E-stained whole-slide image, which provides a different, more perceptible picture of tissue morphology and composition. We test and compare a panel of state-of-the-art loss functions available for segmentation models, and provide indications about their use in histopathology image segmentation, based on the analysis of a) a multi-centric cohort of CRC cases from five medical centers in the Netherlands and Germany, and b) two publicly available datasets on segmentation in CRC. Second, we use the best-performing AI model as the basis for a computer-aided diagnosis (CAD) system that classifies colon biopsies into four main pathologically relevant categories. We report the performance of this system on an independent cohort of more than 1,000 patients. The results show the potential of such an AI-based system to assist pathologists in the diagnosis of CRC in the context of population screening. We have made the segmentation model available for research use at https://grand-challenge.org/algorithms/colon-tissue-segmentation/.

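Among the segmentation loss functions such a panel typically includes is the soft Dice loss. A minimal single-class NumPy sketch (illustrative, not the paper's implementation):

```python
import numpy as np

def soft_dice_loss(probs, target, eps=1e-6):
    """Soft Dice loss for a single class:
    1 - 2*|P . T| / (|P| + |T|), with eps for numerical stability.
    probs: predicted foreground probabilities; target: binary mask."""
    probs = np.asarray(probs, dtype=float)
    target = np.asarray(target, dtype=float)
    inter = np.sum(probs * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(probs) + np.sum(target) + eps)

print(soft_dice_loss([1, 0, 1], [1, 0, 1]))  # → 0.0  (perfect overlap)
print(round(soft_dice_loss([1, 0, 0], [0, 0, 1]), 3))  # → 1.0 (disjoint)
```

Unlike pixel-wise cross-entropy, the Dice loss normalizes by region size, which makes it less sensitive to foreground/background imbalance, a common situation in tissue-compartment segmentation.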

High-resolution Image Registration of Consecutive and Re-stained Sections in Histopathology

Jun 24, 2021
Johannes Lotz, Nick Weiss, Jeroen van der Laak, Stefan Heldmann

Figures 1-4 for High-resolution Image Registration of Consecutive and Re-stained Sections in Histopathology

We compare variational image registration in consecutive and re-stained sections from histopathology. We present a fully automatic algorithm for non-parametric (nonlinear) image registration and apply it to a previously existing dataset from the ANHIR challenge (230 slide pairs, consecutive sections) and a new dataset (hybrid re-stained and consecutive, 81 slide pairs, ca. 3,000 landmarks), which is made publicly available. Registration hyperparameters are obtained on the ANHIR dataset and applied to the new dataset without modification. In the new dataset, landmark errors after registration range from 13.2 micrometers for consecutive sections to 1 micrometer for re-stained sections. We observe that non-parametric registration leads to lower landmark errors in both cases, even though the effect is smaller in re-stained sections. The nucleus-level alignment after non-parametric registration of re-stained sections provides a valuable tool to generate automatic ground truth for machine learning applications in histopathology.

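The reported landmark errors are mean distances between corresponding landmarks after registration. A minimal sketch, assuming landmarks given as (x, y) pixel coordinates and a hypothetical pixel spacing in micrometers:

```python
import numpy as np

def mean_landmark_error(moved, reference, um_per_px=1.0):
    """Mean Euclidean distance between corresponding landmarks,
    converted from pixels to micrometers via um_per_px.
    moved: landmarks after applying the registration transform;
    reference: ground-truth landmarks in the fixed image."""
    moved = np.asarray(moved, dtype=float)
    reference = np.asarray(reference, dtype=float)
    dists = np.linalg.norm(moved - reference, axis=1)  # per-landmark error
    return float(np.mean(dists) * um_per_px)

# Two landmarks: one aligned perfectly, one off by a 3-4-5 triangle.
print(mean_landmark_error([[0, 0], [3, 4]], [[0, 0], [0, 0]], um_per_px=0.5))  # → 1.25
```

Challenge evaluations often report the median or rank-normalized version of this quantity as well, since a few failed pairs can dominate the mean.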

Automated Scoring of Nuclear Pleomorphism Spectrum with Pathologist-level Performance in Breast Cancer

Dec 24, 2020
Caner Mercan, Maschenka Balkenhol, Roberto Salgado, Mark Sherman, Philippe Vielh, Willem Vreuls, Antonio Polonia, Hugo M. Horlings, Wilko Weichert, Jodi M. Carter, Peter Bult, Matthias Christgen, Carsten Denkert, Koen van de Vijver, Jeroen van der Laak, Francesco Ciompi

Figures 1-4 for Automated Scoring of Nuclear Pleomorphism Spectrum with Pathologist-level Performance in Breast Cancer

Nuclear pleomorphism, defined herein as the extent of abnormalities in the overall appearance of tumor nuclei, is one of the components of three-tiered breast cancer grading. Given that nuclear pleomorphism reflects a continuous spectrum of variation, we trained a deep neural network on a large variety of tumor regions using the collective knowledge of several pathologists, without constraining the network to the traditional three-category classification. We also motivate an approach that uses normal epithelium as a baseline, following routine clinical practice, in which pathologists score nuclear pleomorphism in tumor with the normal breast epithelium available for comparison. In multiple experiments, our fully automated approach achieved top pathologist-level performance on selected regions of interest as well as on whole-slide images, compared to ten and four pathologists, respectively.

* 16 pages, 11 figures 

HookNet: multi-resolution convolutional neural networks for semantic segmentation in histopathology whole-slide images

Jun 22, 2020
Mart van Rijthoven, Maschenka Balkenhol, Karina Siliņa, Jeroen van der Laak, Francesco Ciompi

Figures 1-4 for HookNet: multi-resolution convolutional neural networks for semantic segmentation in histopathology whole-slide images

We propose HookNet, a semantic segmentation model for histopathology whole-slide images, which combines context and details via multiple branches of encoder-decoder convolutional neural networks. Concentric patches at multiple resolutions with different fields of view are used to feed the different branches of HookNet, and intermediate representations are combined via a hooking mechanism. We describe a framework to design and train HookNet for high-resolution semantic segmentation and introduce constraints that guarantee pixel-wise alignment in feature maps during hooking. We show the advantages of using HookNet in two histopathology image segmentation tasks where tissue type prediction accuracy strongly depends on contextual information, namely (1) multi-class tissue segmentation in breast cancer, and (2) segmentation of tertiary lymphoid structures and germinal centers in lung cancer. We show the superiority of HookNet when compared with single-resolution U-Net models working at different resolutions, as well as with a recently published multi-resolution model for histopathology image segmentation.

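The hooking mechanism crops a context-branch feature map to the spatial size of the target branch before concatenating channels, which is why pixel-wise alignment must be enforced. A shape-level NumPy sketch (illustrative only; the actual model uses learned convolutional branches, and exact alignment additionally depends on matching fields of view and resolution ratios):

```python
import numpy as np

def center_crop(fmap, out_h, out_w):
    """Center-crop a (C, H, W) feature map to (C, out_h, out_w)."""
    _, h, w = fmap.shape
    top = (h - out_h) // 2
    left = (w - out_w) // 2
    return fmap[:, top:top + out_h, left:left + out_w]

def hook(context_fmap, target_fmap):
    """'Hook' a context-branch feature map into the target branch:
    crop it to the target's spatial size, then concatenate along
    the channel axis so a decoder can consume both."""
    _, h, w = target_fmap.shape
    return np.concatenate([center_crop(context_fmap, h, w), target_fmap], axis=0)

ctx = np.zeros((32, 70, 70))   # low-resolution context branch features
tgt = np.zeros((32, 54, 54))   # high-resolution target branch features
print(hook(ctx, tgt).shape)    # → (64, 54, 54)
```

The channel concatenation doubles the feature depth at the hooking point, after which the target decoder proceeds as usual.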

Detection of prostate cancer in whole-slide images through end-to-end training with image-level labels

Jun 05, 2020
Hans Pinckaers, Wouter Bulten, Jeroen van der Laak, Geert Litjens

Figures 1-4 for Detection of prostate cancer in whole-slide images through end-to-end training with image-level labels

Prostate cancer is the most prevalent cancer among men in Western countries, with 1.1 million new diagnoses every year. The gold standard for the diagnosis of prostate cancer is a pathologist's evaluation of prostate tissue. To potentially assist pathologists, deep-learning-based cancer detection systems have been developed. Many of the state-of-the-art models are patch-based convolutional neural networks, as the use of entire scanned slides is hampered by memory limitations on accelerator cards. Patch-based systems typically require detailed, pixel-level annotations for effective training. However, such annotations are seldom readily available, in contrast to the clinical reports of pathologists, which contain slide-level labels. As such, developing algorithms which do not require manual pixel-wise annotations, but can learn using only the clinical report, would be a significant advancement for the field. In this paper, we propose to use a streaming implementation of convolutional layers to train a modern CNN (ResNet-34) with 21 million parameters end-to-end on 4,712 prostate biopsies. The method enables the direct use of entire biopsy images at high resolution by reducing the GPU memory requirements by 2.4 TB. We show that modern CNNs, trained using our streaming approach, can extract meaningful features from high-resolution images without additional heuristics, reaching performance similar to state-of-the-art patch-based and multiple-instance learning methods. By circumventing the need for manual annotations, this approach can function as a blueprint for other tasks in histopathological diagnosis. The source code to reproduce the streaming models is available at https://github.com/DIAGNijmegen/pathology-streaming-pipeline .

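The streaming idea rests on the fact that a convolution over a huge image can be computed tile by tile, with a few rows of overlap, and stitched into exactly the full-image result; the paper extends this to the backward pass. A toy NumPy sketch of the forward-pass equivalence only (not the repository's implementation):

```python
import numpy as np

def conv2d_valid(img, k):
    """Naive valid-mode 2D cross-correlation (the 'convolution' of
    deep-learning usage) of a (H, W) image with a (kh, kw) kernel."""
    kh, kw = k.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def conv2d_tiled(img, k, tile=8):
    """Process the image in horizontal strips with (kh - 1) rows of
    overlap; stacking the valid outputs reproduces the full result,
    so memory use is bounded by the strip size."""
    kh = k.shape[0]
    h = img.shape[0]
    parts = []
    for top in range(0, h - kh + 1, tile):
        strip = img[top:top + tile + kh - 1]   # overlap keeps borders valid
        parts.append(conv2d_valid(strip, k))
    return np.vstack(parts)

rng = np.random.default_rng(0)
img = rng.standard_normal((20, 12))
k = rng.standard_normal((3, 3))
print(np.allclose(conv2d_valid(img, k), conv2d_tiled(img, k)))  # → True
```

Because convolution is exactly tileable, no approximation is involved; the engineering challenge the paper addresses is doing the same for gradients and intermediate activations across many layers.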

Extending Unsupervised Neural Image Compression With Supervised Multitask Learning

Apr 15, 2020
David Tellez, Diederik Hoppener, Cornelis Verhoef, Dirk Grunhagen, Pieter Nierop, Michal Drozdzal, Jeroen van der Laak, Francesco Ciompi

Figures 1-4 for Extending Unsupervised Neural Image Compression With Supervised Multitask Learning

We focus on the problem of training convolutional neural networks on gigapixel histopathology images to predict image-level targets. For this purpose, we extend Neural Image Compression (NIC), an image compression framework that reduces the dimensionality of these images using an encoder network trained in an unsupervised manner. We propose to train this encoder using supervised multitask learning (MTL) instead. We applied the proposed MTL NIC to two histopathology datasets and three tasks. First, we obtained state-of-the-art results in the Tumor Proliferation Assessment Challenge of 2016 (TUPAC16). Second, we successfully classified histopathological growth patterns in images with colorectal liver metastasis (CLM). Third, we predicted patient risk of death by learning directly from overall survival in the same CLM data. Our experimental results suggest that the representations learned by the MTL objective are: (1) highly specific, due to the supervised training signal, and (2) transferable, since the same features perform well across different tasks. Additionally, we trained multiple encoders with different training objectives, e.g. unsupervised ones and variants of MTL, and observed a positive correlation between the number of tasks in MTL and the system performance on the TUPAC16 dataset.

* Medical Imaging with Deep Learning 2020 (MIDL20) 
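NIC turns a gigapixel image into a small grid of patch embeddings on which a downstream CNN is trained. A toy sketch of the compression step, with a random projection standing in for the trained encoder (patch and embedding sizes are hypothetical, chosen only to illustrate the shape bookkeeping):

```python
import numpy as np

def compress_wsi(image, patch=4, dim=8, rng=None):
    """Map each non-overlapping (patch x patch) tile of a grayscale
    image to a dim-dimensional vector, yielding a compressed
    (H/patch, W/patch, dim) grid. A fixed random projection stands
    in for the trained encoder network."""
    if rng is None:
        rng = np.random.default_rng(0)
    h, w = image.shape
    proj = rng.standard_normal((patch * patch, dim))  # surrogate "encoder"
    gh, gw = h // patch, w // patch
    grid = np.empty((gh, gw, dim))
    for i in range(gh):
        for j in range(gw):
            tile = image[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
            grid[i, j] = tile.reshape(-1) @ proj
    return grid

print(compress_wsi(np.ones((16, 20))).shape)  # → (4, 5, 8)
```

The point of the compressed grid is that a standard CNN can then be trained on it end-to-end against image-level targets, which would be infeasible on the raw gigapixel input.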

Artificial Intelligence Assistance Significantly Improves Gleason Grading of Prostate Biopsies by Pathologists

Feb 11, 2020
Wouter Bulten, Maschenka Balkenhol, Jean-Joël Awoumou Belinga, Américo Brilhante, Aslı Çakır, Xavier Farré, Katerina Geronatsiou, Vincent Molinié, Guilherme Pereira, Paromita Roy, Günter Saile, Paulo Salles, Ewout Schaafsma, Joëlle Tschui, Anne-Marie Vos, Hester van Boven, Robert Vink, Jeroen van der Laak, Christina Hulsbergen-van de Kaa, Geert Litjens

Figures 1-4 for Artificial Intelligence Assistance Significantly Improves Gleason Grading of Prostate Biopsies by Pathologists

While the Gleason score is the most important prognostic marker for prostate cancer patients, it suffers from significant observer variability. Artificial Intelligence (AI) systems, based on deep learning, have proven to achieve pathologist-level performance at Gleason grading. However, the performance of such systems can degrade in the presence of artifacts, foreign tissue, or other anomalies. Pathologists integrating their expertise with feedback from an AI system could result in a synergy that outperforms both the individual pathologist and the system. Despite the hype around AI assistance, existing literature on this topic within the pathology domain is limited. We investigated the value of AI assistance for grading prostate biopsies. A panel of fourteen observers graded 160 biopsies with and without AI assistance. Using AI, the agreement of the panel with an expert reference standard significantly increased (quadratically weighted Cohen's kappa, 0.799 vs 0.872; p=0.018). Our results show the added value of AI systems for Gleason grading, but more importantly, show the benefits of pathologist-AI synergy.

* 21 pages, 5 figures 
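The agreement metric quoted above, quadratically weighted Cohen's kappa, can be computed from two raters' labels as follows (a standard-formula sketch, not the study's analysis code):

```python
import numpy as np

def quadratic_weighted_kappa(a, b, n_classes):
    """Quadratically weighted Cohen's kappa between two raters'
    integer labels in [0, n_classes). Disagreements are penalized
    by the squared distance between the assigned grades."""
    a, b = np.asarray(a), np.asarray(b)
    observed = np.zeros((n_classes, n_classes))
    for i, j in zip(a, b):
        observed[i, j] += 1
    # Quadratic disagreement weights, normalized to [0, 1].
    w = np.array([[(i - j) ** 2 for j in range(n_classes)]
                  for i in range(n_classes)], dtype=float)
    w /= (n_classes - 1) ** 2
    # Expected contingency table under rater independence.
    expected = np.outer(observed.sum(1), observed.sum(0)) / observed.sum()
    return 1.0 - np.sum(w * observed) / np.sum(w * expected)

print(quadratic_weighted_kappa([0, 1, 2], [0, 1, 2], 3))        # → 1.0
print(quadratic_weighted_kappa([0, 1, 2, 2], [0, 1, 2, 1], 3))  # → 0.8
```

The same value is available as `sklearn.metrics.cohen_kappa_score(a, b, weights="quadratic")`, which is a convenient cross-check.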