André Homeyer

Digitization of Pathology Labs: A Review of Lessons Learned

Jun 07, 2023
Lars Ole Schwen, Tim-Rasmus Kiehl, Rita Carvalho, Norman Zerbe, André Homeyer

Pathology laboratories are increasingly using digital workflows. This has the potential to increase lab efficiency, but the digitization process also involves major challenges. Several reports have been published describing the experiences of individual laboratories with the digitization process. However, a comprehensive overview of the lessons learned is still lacking. We provide an overview of the lessons learned for different aspects of the digitization process, including digital case management, digital slide reading, and computer-aided slide reading. We also cover metrics for monitoring performance, common pitfalls, and the corresponding values observed in practice. The overview is intended to help pathologists, IT decision-makers, and administrators benefit from the experiences of others and to implement the digitization process in an optimal way, making their own laboratories future-proof.
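To make the performance-monitoring aspect concrete, here is a minimal Python sketch of one metric a digitized lab might track: slide turnaround time from scanning to sign-out. All field names and timestamps are hypothetical illustrations, not values from the review.

```python
# Minimal sketch: turnaround time from slide scanning to case sign-out.
# The records and field names below are made up for illustration.
from datetime import datetime
from statistics import median

cases = [
    {"scanned": "2023-06-01 08:15", "signed_out": "2023-06-01 14:40"},
    {"scanned": "2023-06-01 09:05", "signed_out": "2023-06-02 10:20"},
    {"scanned": "2023-06-01 11:30", "signed_out": "2023-06-01 16:05"},
]

def hours_between(start: str, end: str) -> float:
    fmt = "%Y-%m-%d %H:%M"
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    return delta.total_seconds() / 3600

turnaround = [hours_between(c["scanned"], c["signed_out"]) for c in cases]
print(f"median turnaround: {median(turnaround):.1f} h")
print(f"max turnaround:    {max(turnaround):.1f} h")
```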

* 22 pages, 1 figure; corrected typo 

The NCI Imaging Data Commons as a platform for reproducible research in computational pathology

Mar 16, 2023
Daniela P. Schacherer, Markus D. Herrmann, David A. Clunie, Henning Höfener, William Clifford, William J. R. Longabaugh, Steve Pieper, Ron Kikinis, Andrey Fedorov, André Homeyer

Objective: Reproducibility is critical for translating machine learning-based (ML) solutions in computational pathology (CompPath) into practice. However, an increasing number of studies report difficulties in reproducing ML results. The NCI Imaging Data Commons (IDC) is a public repository of >120 cancer image collections, including >38,000 whole-slide images (WSIs), that is designed to be used with cloud-based ML services. Here, we explore the potential of the IDC to facilitate reproducibility of CompPath research.

Materials and Methods: The IDC realizes the FAIR principles: all images are encoded according to the DICOM standard, persistently identified, discoverable via rich metadata, and accessible via open tools. Taking advantage of this, we implemented two experiments in which a representative ML-based method for classifying lung tumor tissue was trained and/or evaluated on different datasets from the IDC. To assess reproducibility, the experiments were run multiple times with independent but identically configured sessions of common ML services.

Results: The AUC values of different runs of the same experiment were generally consistent and of the same order of magnitude as a similar, previously published study. However, there were occasional small variations in AUC values of up to 0.044, indicating a practical limit to reproducibility.

Discussion and Conclusion: By realizing the FAIR principles, the IDC enables other researchers to reuse exactly the same datasets. Cloud-based ML services enable others to run CompPath experiments in an identically configured computing environment without having to own high-performance hardware. The combination of both makes it possible to approach the reproducibility limit.
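A minimal sketch of the reproducibility check described in the Results: the same experiment is repeated several times and the spread of the resulting AUC values is inspected. The AUC values below are invented for illustration; the paper reports variations of up to 0.044 between runs.

```python
# Sketch: quantify run-to-run variability of a repeated experiment by the
# maximum pairwise difference in AUC. The values are hypothetical.
from itertools import combinations

auc_per_run = [0.921, 0.917, 0.935, 0.929]  # AUC of identically configured runs

max_diff = max(abs(a - b) for a, b in combinations(auc_per_run, 2))
print(f"AUC range across runs: {min(auc_per_run):.3f} to {max(auc_per_run):.3f}")
print(f"max pairwise difference: {max_diff:.3f}")
```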

Recommendations on test datasets for evaluating AI solutions in pathology

Apr 21, 2022
André Homeyer, Christian Geißler, Lars Ole Schwen, Falk Zakrzewski, Theodore Evans, Klaus Strohmenger, Max Westphal, Roman David Bülow, Michaela Kargl, Aray Karjauv, Isidre Munné-Bertran, Carl Orge Retzlaff, Adrià Romero-López, Tomasz Sołtysiński, Markus Plass, Rita Carvalho, Peter Steinbach, Yu-Chia Lan, Nassim Bouteldja, David Haber, Mateo Rojas-Carulla, Alireza Vafaei Sadr, Matthias Kraft, Daniel Krüger, Rutger Fick, Tobias Lang, Peter Boor, Heimo Müller, Peter Hufnagl, Norman Zerbe

Artificial intelligence (AI) solutions that automatically extract information from digital histology images have shown great promise for improving pathological diagnosis. Prior to routine use, it is important to evaluate their predictive performance and obtain regulatory approval. This assessment requires appropriate test datasets. However, compiling such datasets is challenging and specific recommendations are missing. A committee of various stakeholders, including commercial AI developers, pathologists, and researchers, discussed key aspects and conducted extensive literature reviews on test datasets in pathology. Here, we summarize the results and derive general recommendations for the collection of test datasets. We address several questions: Which and how many images are needed? How to deal with low-prevalence subsets? How can potential bias be detected? How should datasets be reported? What are the regulatory requirements in different countries? The recommendations are intended to help AI developers demonstrate the utility of their products and to help regulatory agencies and end users verify reported performance measures. Further research is needed to formulate criteria for sufficiently representative test datasets so that AI solutions can operate with less user intervention and better support diagnostic workflows in the future.
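One way to approach the "how many images are needed?" question is to relate test-set size to the width of a confidence interval for the measured performance. The sketch below uses the standard Wilson score interval; the numbers are purely illustrative, and the paper's actual recommendations are more nuanced.

```python
# Sketch: how test-set size affects the certainty of a performance estimate.
# A measured sensitivity of 0.90 is far less conclusive at n=50 than at n=500.
from math import sqrt

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a proportion (e.g. sensitivity)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

for n in (50, 500):  # hypothetical test-set sizes
    lo, hi = wilson_interval(round(0.9 * n), n)
    print(f"n={n}: sensitivity 0.90, 95% CI [{lo:.3f}, {hi:.3f}]")
```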

Evaluating Generic Auto-ML Tools for Computational Pathology

Dec 07, 2021
Lars Ole Schwen, Daniela Schacherer, Christian Geißler, André Homeyer

Image analysis tasks in computational pathology are commonly solved using convolutional neural networks (CNNs). The selection of a suitable CNN architecture and hyperparameters is usually done through exploratory iterative optimization, which is computationally expensive and requires substantial manual work. The goal of this article is to evaluate how generic tools for neural network architecture search and hyperparameter optimization perform on common use cases in computational pathology. For this purpose, we evaluated one on-premises and one cloud-based tool on three different classification tasks for histological images: tissue classification, mutation prediction, and grading. We found that the default CNN architectures and parameterizations of the evaluated AutoML tools already yielded classification performance on par with the original publications. Hyperparameter optimization for these tasks did not substantially improve performance, despite the additional computational effort. However, performance varied substantially between classifiers obtained from individual AutoML runs due to non-deterministic effects. Generic CNN architectures and AutoML tools could thus be a viable alternative to manually optimizing CNN architectures and parameterizations. This would allow developers of software solutions for computational pathology to focus efforts on harder-to-automate tasks such as data curation.
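For readers unfamiliar with such tools, the sketch below illustrates the kind of hyperparameter search they automate, using the open-source Optuna library on synthetic data. It is a stand-in for, not an implementation of, the on-premises and cloud-based tools evaluated in the article.

```python
# Sketch: automated hyperparameter optimization with Optuna on synthetic data.
# A small MLP stands in for the CNNs used in computational pathology.
import optuna
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

def objective(trial: optuna.Trial) -> float:
    clf = MLPClassifier(
        hidden_layer_sizes=(trial.suggest_int("hidden_units", 16, 128),),
        learning_rate_init=trial.suggest_float("lr", 1e-4, 1e-1, log=True),
        alpha=trial.suggest_float("alpha", 1e-6, 1e-2, log=True),
        max_iter=300,
        random_state=0,
    )
    return cross_val_score(clf, X, y, cv=3).mean()  # mean CV accuracy

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20)
print("best accuracy:", study.best_value)
print("best params:  ", study.best_params)
```

Note that the sampler itself is stochastic: repeating the study can yield noticeably different classifiers, mirroring the run-to-run variability due to non-deterministic effects noted above.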
