Karol Gotkowski

[Work in progress] Scalable, out-of-the box segmentation of individual particles from mineral samples acquired with micro CT

Feb 13, 2023
Karol Gotkowski, Shuvam Gupta, Jose R. A. Godinho, Camila G. S. Tochtrop, Klaus H. Maier-Hein, Fabian Isensee

Minerals are indispensable for a functioning modern society. Yet, their supply is limited, creating a need to optimize their exploration and extraction both from ores and from recyclable materials. Typically, these processes must be meticulously adapted to the precise properties of the processed particles, requiring an extensive characterization of their shapes and appearances as well as the overall material composition. Current approaches perform this analysis based on bulk segmentation and characterization of particles, and rely on rudimentary postprocessing techniques to separate touching particles. However, because they cannot reliably perform this separation and because most methods must be retrained or reconfigured for each new image, these approaches leave considerable potential untapped. Here, we propose an instance segmentation method that extracts individual particles from large micro CT images of mineral samples embedded in an epoxy matrix. Our approach is based on the powerful nnU-Net framework, introduces a particle size normalization, makes use of a border-core representation to enable instance segmentation, and is trained with a large dataset containing particles of numerous different materials and minerals. We demonstrate that our approach can be applied out of the box to a large variety of particle types, including materials and appearances that were not part of the training set. Thus, no further manual annotations or retraining are required when applying the method to new mineral samples, enabling substantially higher scalability of experiments than existing methods. Our code and dataset are made publicly available.
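
The border-core idea mentioned in the abstract can be illustrated with a short sketch: instance labels are re-encoded as a background/core/border semantic map for training a semantic network such as nnU-Net, and instances are recovered afterwards by labelling connected cores and assigning border voxels to the nearest core. The function names, the border width, and the use of scipy's morphology tools below are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of a border-core encoding and decoding for instance segmentation.
import numpy as np
from scipy import ndimage

BACKGROUND, CORE, BORDER = 0, 1, 2

def instances_to_border_core(instance_map: np.ndarray, border_width: int = 2) -> np.ndarray:
    """Encode an instance label map as background/core/border semantic classes."""
    semantic = np.zeros_like(instance_map, dtype=np.uint8)
    for instance_id in np.unique(instance_map):
        if instance_id == 0:  # 0 is assumed to be background
            continue
        mask = instance_map == instance_id
        core = ndimage.binary_erosion(mask, iterations=border_width)
        semantic[mask] = BORDER   # outer rim of the particle
        semantic[core] = CORE     # eroded interior
    return semantic

def border_core_to_instances(semantic: np.ndarray) -> np.ndarray:
    """Recover instances: label connected cores, then assign border voxels to the nearest core."""
    cores, num_cores = ndimage.label(semantic == CORE)
    if num_cores == 0:
        return cores
    # For every voxel, find the indices of the nearest core voxel.
    _, nearest = ndimage.distance_transform_edt(cores == 0, return_indices=True)
    instances = cores[tuple(nearest)]
    instances[semantic == BACKGROUND] = 0
    return instances

# Toy round trip on a 2D example with two touching "particles".
toy = np.zeros((8, 8), dtype=np.int32)
toy[1:7, 1:4] = 1
toy[1:7, 4:7] = 2
recovered = border_core_to_instances(instances_to_border_core(toy, border_width=1))
print(len(np.unique(recovered)) - 1)  # expect 2 separated instances
```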

Distance-based detection of out-of-distribution silent failures for Covid-19 lung lesion segmentation

Aug 05, 2022
Camila Gonzalez, Karol Gotkowski, Moritz Fuchs, Andreas Bucher, Armin Dadras, Ricarda Fischbach, Isabel Kaltenborn, Anirban Mukhopadhyay

Automatic segmentation of ground glass opacities and consolidations in chest computed tomography (CT) scans can potentially ease the burden of radiologists during times of high resource utilisation. However, deep learning models are not trusted in the clinical routine due to failing silently on out-of-distribution (OOD) data. We propose a lightweight OOD detection method that leverages the Mahalanobis distance in the feature space and seamlessly integrates into state-of-the-art segmentation pipelines. The simple approach can even augment pre-trained models with clinically relevant uncertainty quantification. We validate our method across four chest CT distribution shifts and two magnetic resonance imaging applications, namely segmentation of the hippocampus and the prostate. Our results show that the proposed method effectively detects far- and near-OOD samples across all explored scenarios.
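
A minimal sketch of the Mahalanobis-distance idea described above: fit a mean and covariance to pooled in-distribution features from the segmentation network, then score test samples by their distance to that Gaussian. The pooling step, the covariance regularization, and the toy example are illustrative assumptions rather than the paper's exact pipeline.

```python
# Hedged sketch of feature-space OOD scoring with the Mahalanobis distance.
import numpy as np

class MahalanobisOODDetector:
    def fit(self, train_features: np.ndarray) -> "MahalanobisOODDetector":
        """train_features: (n_samples, n_features) pooled in-distribution features."""
        self.mean_ = train_features.mean(axis=0)
        cov = np.cov(train_features, rowvar=False)
        # Regularize before inverting, since n_features may exceed n_samples.
        self.inv_cov_ = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
        return self

    def score(self, features: np.ndarray) -> np.ndarray:
        """Mahalanobis distance of each sample to the training distribution."""
        diff = features - self.mean_
        return np.sqrt(np.einsum("ij,jk,ik->i", diff, self.inv_cov_, diff))

# Toy example: features assumed to be pooled from the segmentation encoder.
rng = np.random.default_rng(0)
in_dist = rng.normal(size=(200, 64))
detector = MahalanobisOODDetector().fit(in_dist)
shifted = rng.normal(loc=3.0, size=(10, 64))
print(detector.score(shifted).mean() > detector.score(in_dist).mean())  # OOD scores are larger
```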

Detecting when pre-trained nnU-Net models fail silently for Covid-19 lung lesion segmentation

Jul 14, 2021
Camila Gonzalez, Karol Gotkowski, Andreas Bucher, Ricarda Fischbach, Isabel Kaltenborn, Anirban Mukhopadhyay

Automatic segmentation of lung lesions in computed tomography has the potential to ease the burden of clinicians during the Covid-19 pandemic. Yet predictive deep learning models are not trusted in the clinical routine due to failing silently on out-of-distribution (OOD) data. We propose a lightweight OOD detection method that exploits the Mahalanobis distance in the feature space. The proposed approach can be seamlessly integrated into state-of-the-art segmentation pipelines without requiring changes to the model architecture or training procedure, and can therefore be used to assess the suitability of pre-trained models for new data. We validate our method with a patch-based nnU-Net architecture trained with a multi-institutional dataset and find that it effectively detects samples that the model segments incorrectly.
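
For the patch-based setting described above, one way the scan-level decision could look: compute a Mahalanobis distance per patch (for example with a detector like the one sketched earlier) and aggregate the per-patch distances into one subject-level score before thresholding. The mean aggregation and the threshold value below are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch of aggregating per-patch OOD scores into a scan-level flag.
import numpy as np

def subject_ood_score(patch_distances: np.ndarray) -> float:
    """Aggregate per-patch Mahalanobis distances into one score for the whole scan."""
    return float(np.mean(patch_distances))

def flag_silent_failure(patch_distances: np.ndarray, threshold: float) -> bool:
    """Flag the scan as OOD (segmentation likely unreliable) when the aggregated
    score exceeds a threshold calibrated on in-distribution validation scans."""
    return subject_ood_score(patch_distances) > threshold

# Example: per-patch distances for one CT scan and a validation-derived threshold.
print(flag_silent_failure(np.array([9.2, 11.5, 10.8, 13.1]), threshold=10.0))
```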

M3d-CAM: A PyTorch library to generate 3D data attention maps for medical deep learning

Jul 01, 2020
Karol Gotkowski, Camila Gonzalez, Andreas Bucher, Anirban Mukhopadhyay

M3d-CAM is an easy-to-use library for generating attention maps of CNN-based PyTorch models, improving the interpretability of model predictions for humans. The attention maps can be generated with multiple methods such as Guided Backpropagation, Grad-CAM, Guided Grad-CAM and Grad-CAM++. These attention maps visualize the regions in the input data that influenced the model prediction the most at a certain layer. Furthermore, M3d-CAM supports 2D and 3D data for both classification and segmentation tasks. A key feature is that in most cases only a single line of code is required to generate attention maps for a model, making M3d-CAM essentially plug and play.
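
A hedged usage sketch of the single-line injection described above. M3d-CAM appears to be distributed as the `medcam` package; the import path and the argument names (`output_dir`, `backend`, `save_maps`) are written from memory of the project README and should be checked against the library's documentation before use.

```python
# Illustrative sketch: wrapping a PyTorch model so forward passes also emit attention maps.
import torch
import torchvision
from medcam import medcam  # assumed package name; verify against the M3d-CAM docs

model = torchvision.models.resnet18()

# The single injection line: the wrapped model is assumed to compute and save a
# Grad-CAM attention map to `output_dir` on every forward pass.
model = medcam.inject(model, output_dir="attention_maps", backend="gcam", save_maps=True)

model.eval()
_ = model(torch.randn(1, 3, 224, 224))  # attention maps are written as a side effect
```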
