Theresa Neubauer

Employing similarity to highlight differences: On the impact of anatomical assumptions in chest X-ray registration methods

Jan 24, 2023
Astrid Berg, Eva Vandersmissen, Maria Wimmer, David Major, Theresa Neubauer, Dimitrios Lenis, Jeroen Cant, Annemiek Snoeckx, Katja Bühler

To facilitate both the detection and the interpretation of findings in chest X-rays, comparison with a previous image of the same patient is very valuable to radiologists. Today, the most common approach for deep learning methods to automatically inspect chest X-rays disregards the patient history and classifies only single images as normal or abnormal. Nevertheless, several methods for assisting in the task of comparison through image registration have been proposed in the past. However, as we illustrate, they tend to miss specific types of pathological changes like cardiomegaly and effusion. Due to assumptions about fixed anatomical structures or the way registration quality is measured, they produce unnaturally deformed warp fields that impair the visualization of differences between moving and fixed images. We aim to overcome these limitations through a new paradigm based on individual rib-pair segmentation for anatomy-penalized registration. Our method proves to be a natural way to limit the folding percentage of the warp field to 1/6 of the state of the art while increasing the overlap of ribs by more than 25%, yielding difference images that reveal pathological changes overlooked by other methods. We develop an anatomically penalized convolutional multi-stage solution on the National Institutes of Health (NIH) data set, starting from fewer than 25 fully and 50 partly labeled training images, employing sequential instance memory segmentation with hole dropout, weak labeling, coarse-to-fine refinement and Gaussian mixture model histogram matching. We statistically evaluate the benefits of our method and highlight the limits of currently used metrics for registration of chest X-rays.
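
As a rough illustration of the idea of anatomy-penalized registration, the sketch below combines an image similarity term with a rib-overlap penalty and a penalty on folded regions of the warp field. All function names, loss weights, and the choice of MSE as the similarity measure are assumptions made for the sake of a runnable example; this is not the authors' multi-stage implementation.

```python
# Hypothetical sketch: anatomy-penalized registration loss combining image
# similarity, a rib-segmentation overlap term, and a fold-discouraging penalty
# on the Jacobian determinant of the warp field. Names and weights are
# illustrative, not the published method.
import torch
import torch.nn.functional as F

def soft_dice(a, b, eps=1e-6):
    """Soft Dice overlap between two probability maps of shape (B, 1, H, W)."""
    inter = (a * b).sum(dim=(2, 3))
    return (2 * inter + eps) / (a.sum(dim=(2, 3)) + b.sum(dim=(2, 3)) + eps)

def jacobian_folding_penalty(flow):
    """Penalize negative Jacobian determinants (foldings) of a 2D displacement
    field `flow` with shape (B, 2, H, W); channel 0 is the x-, channel 1 the
    y-displacement."""
    dx = flow[:, :, :, 1:] - flow[:, :, :, :-1]   # finite difference along x
    dy = flow[:, :, 1:, :] - flow[:, :, :-1, :]   # finite difference along y
    dx, dy = dx[:, :, :-1, :], dy[:, :, :, :-1]   # crop to a common size
    # Jacobian of the deformation phi(x) = x + flow(x)
    j11, j12 = 1 + dx[:, 0], dy[:, 0]
    j21, j22 = dx[:, 1], 1 + dy[:, 1]
    det = j11 * j22 - j12 * j21
    return F.relu(-det).mean()                    # only negative determinants cost

def registration_loss(warped_img, fixed_img, warped_ribs, fixed_ribs, flow,
                      w_anat=1.0, w_fold=0.1):
    similarity = F.mse_loss(warped_img, fixed_img)
    anatomy = 1 - soft_dice(warped_ribs, fixed_ribs).mean()  # rib overlap term
    folding = jacobian_folding_penalty(flow)
    return similarity + w_anat * anatomy + w_fold * folding
```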

* Computers in Biology and Medicine, Volume 154, 2023, 106543, ISSN 0010-4825  

Anomaly Detection using Generative Models and Sum-Product Networks in Mammography Scans

Oct 12, 2022
Marc Dietrichstein, David Major, Maria Wimmer, Dimitrios Lenis, Philip Winter, Astrid Berg, Theresa Neubauer, Katja Bühler

Unsupervised anomaly detection models, which are trained solely on healthy data, have gained importance in recent years, as the annotation of medical data is a tedious task. Autoencoders and generative adversarial networks are the standard anomaly detection methods used to learn the data distribution. However, they fall short when it comes to inference and evaluation of the likelihood of test samples. We propose a novel combination of generative models and a probabilistic graphical model. After encoding image samples with autoencoders, the data distribution is modeled by Random and Tensorized Sum-Product Networks, ensuring exact and efficient inference at test time. We evaluate different autoencoder architectures in combination with Random and Tensorized Sum-Product Networks on mammography images using patch-wise processing and observe superior performance over using the models standalone and over the state of the art in anomaly detection for medical data.
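
To make the pipeline concrete, the sketch below encodes patches with a toy encoder, fits a tractable density model on the latent codes of healthy data, and scores test patches by negative log-likelihood. The paper models the latent distribution with Random and Tensorized Sum-Product Networks; a Gaussian mixture is used here purely as a stand-in density estimator, and all architecture details are illustrative assumptions.

```python
# Minimal pipeline sketch: encode image patches, fit a density model on latent
# codes of healthy data, flag test patches with low likelihood. The GMM is a
# placeholder for the Sum-Product Networks used in the paper.
import torch
import torch.nn as nn
from sklearn.mixture import GaussianMixture

class PatchEncoder(nn.Module):
    """Toy convolutional encoder for 64x64 grayscale patches."""
    def __init__(self, latent_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

def fit_density(encoder, healthy_patches, n_components=10):
    with torch.no_grad():
        z = encoder(healthy_patches).cpu().numpy()
    return GaussianMixture(n_components=n_components, covariance_type="full").fit(z)

def anomaly_scores(encoder, density, patches):
    with torch.no_grad():
        z = encoder(patches).cpu().numpy()
    return -density.score_samples(z)  # higher score = less likely = more anomalous

# Usage with random tensors standing in for mammography patches:
encoder = PatchEncoder()
density = fit_density(encoder, torch.rand(256, 1, 64, 64))
scores = anomaly_scores(encoder, density, torch.rand(16, 1, 64, 64))
```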

* LNCS 13609 (2022)  
* Submitted to DGM4MICCAI 2022 Workshop. This preprint has not undergone peer review (when applicable) or any post-submission improvements or corrections. The Version of Record of this contribution is published in LNCS 13609, and is available online at https://doi.org/10.1007/978-3-031-18576-2_8 

Multi-task fusion for improving mammography screening data classification

Dec 01, 2021
Maria Wimmer, Gert Sluiter, David Major, Dimitrios Lenis, Astrid Berg, Theresa Neubauer, Katja Bühler

Machine learning and deep learning methods have become essential for computer-assisted prediction in medicine, with a growing number of applications also in the field of mammography. Typically, these algorithms are trained for a specific task, e.g., the classification of lesions or the prediction of a mammogram's pathology status. To obtain a comprehensive view of a patient, models which were all trained for the same task(s) are subsequently ensembled or combined. In this work, we propose a pipeline approach, where we first train a set of individual, task-specific models and subsequently investigate their fusion, in contrast to the standard model-ensembling strategy. We fuse model predictions and high-level features from deep learning models with hybrid patient models to build stronger predictors at the patient level. To this end, we propose a multi-branch deep learning model which efficiently fuses features across different tasks and mammograms to obtain a comprehensive patient-level prediction. We train and evaluate our full pipeline on public mammography data, i.e., DDSM and its curated version CBIS-DDSM, and report an AUC score of 0.962 for predicting the presence of any lesion and 0.791 for predicting the presence of malignant lesions at the patient level. Overall, our fusion approaches improve AUC scores significantly, by up to 0.04, compared to standard model ensembling. Moreover, by providing not only global patient-level predictions but also task-specific model results that are related to radiological features, our pipeline aims to closely support the reading workflow of radiologists.
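
The sketch below illustrates one possible form of such a fusion step: per-task feature vectors and prediction scores from several mammograms of one patient are concatenated and passed through a small fusion network that outputs a patient-level score. The dimensions, layer sizes, and fusion operator are assumptions made for illustration, not the published multi-branch architecture.

```python
# Illustrative late-fusion head over task-specific model outputs; all sizes
# are placeholders, not the paper's architecture.
import torch
import torch.nn as nn

class PatientFusion(nn.Module):
    def __init__(self, n_tasks=3, n_views=4, feat_dim=128, hidden=256):
        super().__init__()
        # each task contributes one feature vector and one scalar score per view
        in_dim = n_views * n_tasks * (feat_dim + 1)
        self.fusion = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(hidden, 1),
        )

    def forward(self, features, predictions):
        # features:    (B, n_views, n_tasks, feat_dim)
        # predictions: (B, n_views, n_tasks, 1)
        x = torch.cat([features, predictions], dim=-1).flatten(1)
        return torch.sigmoid(self.fusion(x))  # patient-level probability

# Usage with dummy tensors:
model = PatientFusion()
feats = torch.rand(2, 4, 3, 128)
preds = torch.rand(2, 4, 3, 1)
patient_score = model(feats, preds)  # shape (2, 1)
```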

* Accepted for publication in IEEE Transactions on Medical Imaging 

Soft Tissue Sarcoma Co-Segmentation in Combined MRI and PET/CT Data

Sep 24, 2020
Theresa Neubauer, Maria Wimmer, Astrid Berg, David Major, Dimitrios Lenis, Thomas Beyer, Jelena Saponjski, Katja Bühler

Tumor segmentation in multimodal medical images has seen a growing trend towards deep learning based methods. Typically, studies dealing with this topic fuse multimodal image data to improve the tumor segmentation contour for a single imaging modality. However, they do not take into account that tumor characteristics are emphasized differently by each modality, which affects the tumor delineation. Thus, the tumor segmentation is modality- and task-dependent. This is especially the case for soft tissue sarcomas, where, due to necrotic tumor tissue, the segmentation differs vastly. Closing this gap, we develop a modality-specific sarcoma segmentation model that utilizes multimodal image data to improve the tumor delineation on each individual modality. We propose a simultaneous co-segmentation method, which enables multimodal feature learning through modality-specific encoder and decoder branches, and the use of resource-efficient densely connected convolutional layers. We further conduct experiments to analyze how different input modalities and encoder-decoder fusion strategies affect the segmentation result. We demonstrate the effectiveness of our approach on public soft tissue sarcoma data, which comprises MRI (T1 and T2 sequence) and PET/CT scans. The results show that our multimodal co-segmentation model provides better modality-specific tumor segmentation than models using only the PET or MRI (T1 and T2) scan as input.
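
As a minimal sketch of the co-segmentation layout, the model below uses separate encoders and decoders for MRI (T1 and T2 as two input channels) and PET, with a simple concatenation fusion at the bottleneck. The block structure and channel counts are placeholders; the paper's densely connected layers and the fusion strategies it compares are not reproduced here.

```python
# Rough two-branch co-segmentation sketch with modality-specific encoders and
# decoders and a shared concatenation fusion; all blocks are placeholders.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU())

class CoSegNet(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.enc_mri = nn.Sequential(conv_block(2, ch), nn.MaxPool2d(2), conv_block(ch, 2 * ch))
        self.enc_pet = nn.Sequential(conv_block(1, ch), nn.MaxPool2d(2), conv_block(ch, 2 * ch))
        self.dec_mri = nn.Sequential(conv_block(4 * ch, ch), nn.Upsample(scale_factor=2), nn.Conv2d(ch, 1, 1))
        self.dec_pet = nn.Sequential(conv_block(4 * ch, ch), nn.Upsample(scale_factor=2), nn.Conv2d(ch, 1, 1))

    def forward(self, mri, pet):
        # mri: (B, 2, H, W) for T1+T2; pet: (B, 1, H, W)
        f_mri, f_pet = self.enc_mri(mri), self.enc_pet(pet)
        fused = torch.cat([f_mri, f_pet], dim=1)   # simple concatenation fusion
        return torch.sigmoid(self.dec_mri(fused)), torch.sigmoid(self.dec_pet(fused))

# Usage: two modality-specific tumor masks from one forward pass.
net = CoSegNet()
mask_mri, mask_pet = net(torch.rand(1, 2, 64, 64), torch.rand(1, 1, 64, 64))
```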

* Accepted for publication at Multimodal Learning for Clinical Decision Support Workshop at MICCAI 2020 (edit: corrected typos and model name in Fig. 3, added missing circles in Table 1) 