Annette Peters

FedNorm: Modality-Based Normalization in Federated Learning for Multi-Modal Liver Segmentation

May 23, 2022
Tobias Bernecker, Annette Peters, Christopher L. Schlett, Fabian Bamberg, Fabian Theis, Daniel Rueckert, Jakob Weiß, Shadi Albarqouni

Given their high incidence and the availability of effective treatment options, liver diseases are of great socioeconomic importance. Liver segmentation is one of the most common methods for analyzing CT and MRI images for diagnosis and follow-up treatment. Recent advances in deep learning have demonstrated encouraging results for automatic liver segmentation. Despite this, their success depends primarily on the availability of annotated databases, which are often unavailable because of privacy concerns. Federated Learning has recently been proposed as a solution to alleviate these challenges by training a shared global model across distributed clients without access to their local databases. Nevertheless, Federated Learning does not perform well on highly heterogeneous image data arising from multi-modal imaging, such as CT and MRI, and multiple scanner types. To this end, we propose FedNorm and its extension FedNorm+, two Federated Learning algorithms that use a modality-based normalization technique. Specifically, FedNorm normalizes features at the client level, while FedNorm+ employs the modality information of individual slices in the feature normalization. Our methods were validated using 428 patients from six publicly available databases and compared to state-of-the-art Federated Learning algorithms and baseline models in heterogeneous settings (multi-institutional, multi-modal data). The experimental results demonstrate that our methods show overall acceptable performance, achieve Dice per patient scores up to 0.961, consistently outperform locally trained models, and are on par with or slightly better than centralized models.
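
The abstract does not detail how the modality-based normalization is realized, but one common way to condition feature normalization on modality is to keep separate normalization layers per modality and route each batch accordingly. The following PyTorch sketch illustrates that idea under this assumption; the ModalityNorm class, the use of BatchNorm2d, and the string-based routing are illustrative choices, not the paper's implementation.

```python
import torch
import torch.nn as nn

class ModalityNorm(nn.Module):
    """Keeps one normalization layer per imaging modality (e.g. CT, MRI)
    and routes each feature batch through the layer matching its modality,
    so CT and MRI statistics never mix. A minimal sketch of the
    modality-based normalization idea; the layer type and routing
    granularity are assumptions, not the paper's exact design."""

    def __init__(self, num_features, modalities=("CT", "MRI")):
        super().__init__()
        self.norms = nn.ModuleDict(
            {m: nn.BatchNorm2d(num_features) for m in modalities}
        )

    def forward(self, x, modality):
        # x: (batch, channels, H, W) feature maps from a single modality
        return self.norms[modality](x)

# Usage: a batch of slices known to be CT is normalized with CT statistics only.
layer = ModalityNorm(num_features=64)
ct_features = torch.randn(8, 64, 128, 128)
out = layer(ct_features, "CT")
```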

* Under Review 

Predicting brain-age from raw T1-weighted Magnetic Resonance Imaging data using 3D Convolutional Neural Networks

Mar 22, 2021
Lukas Fisch, Jan Ernsting, Nils R. Winter, Vincent Holstein, Ramona Leenings, Marie Beisemann, Kelvin Sarink, Daniel Emden, Nils Opel, Ronny Redlich, Jonathan Repple, Dominik Grotegerd, Susanne Meinert, Niklas Wulms, Heike Minnerup, Jochen G. Hirsch, Thoralf Niendorf, Beate Endemann, Fabian Bamberg, Thomas Kröncke, Annette Peters, Robin Bülow, Henry Völzke, Oyunbileg von Stackelberg, Ramona Felizitas Sowade, Lale Umutlu, Börge Schmidt, Svenja Caspers, German National Cohort Study Center Consortium, Harald Kugel, Bernhard T. Baune, Tilo Kircher, Benjamin Risse, Udo Dannlowski, Klaus Berger, Tim Hahn

Age prediction based on Magnetic Resonance Imaging (MRI) data of the brain is a biomarker to quantify aging and the progression of brain diseases. Current approaches rely on preparing the data with multiple preprocessing steps, such as registering voxels to a standardized brain atlas, which incurs significant computational overhead, hampers widespread usage, and makes the predicted brain-age sensitive to preprocessing parameters. Here we describe a 3D Convolutional Neural Network (CNN) based on the ResNet architecture, trained on raw, non-registered T1-weighted MRI data of N=10,691 samples from the German National Cohort and additionally applied and validated on N=2,173 samples from three independent studies using transfer learning. For comparison, state-of-the-art models using preprocessed neuroimaging data were trained and validated on the same samples. The 3D CNN using raw neuroimaging data predicts age with a mean average deviation of 2.84 years, outperforming state-of-the-art brain-age models that use preprocessed data. Since our approach is invariant to preprocessing software and parameter choices, it enables faster, more robust, and more accurate brain-age modeling.
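
As a rough sketch of the setup the abstract describes, the following PyTorch code builds a tiny 3D ResNet-style regressor that maps a raw, non-registered T1-weighted volume to a single age estimate. The depth, channel widths, and input size are placeholder assumptions; the paper's actual architecture is not reproduced here.

```python
import torch
import torch.nn as nn

class ResBlock3d(nn.Module):
    """Basic 3D residual block: two 3x3x3 convolutions with a skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv3d(channels, channels, 3, padding=1)
        self.bn1 = nn.BatchNorm3d(channels)
        self.conv2 = nn.Conv3d(channels, channels, 3, padding=1)
        self.bn2 = nn.BatchNorm3d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)

class BrainAge3dCNN(nn.Module):
    """Tiny 3D ResNet-style regressor mapping a raw T1-weighted volume
    to a single age estimate. Depth and widths are illustrative only."""
    def __init__(self):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv3d(1, 16, 7, stride=2, padding=3),
                                  nn.BatchNorm3d(16), nn.ReLU(inplace=True))
        self.blocks = nn.Sequential(ResBlock3d(16), nn.MaxPool3d(2),
                                    ResBlock3d(16), nn.MaxPool3d(2))
        self.head = nn.Sequential(nn.AdaptiveAvgPool3d(1), nn.Flatten(),
                                  nn.Linear(16, 1))

    def forward(self, volume):
        # volume: (batch, 1, D, H, W) non-registered T1-weighted scan
        return self.head(self.blocks(self.stem(volume)))

model = BrainAge3dCNN()
age = model(torch.randn(2, 1, 96, 96, 96))  # -> (2, 1) predicted ages
```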


Deep Shape Analysis on Abdominal Organs for Diabetes Prediction

Aug 06, 2018
Benjamin Gutierrez-Becker, Sergios Gatidis, Daniel Gutmann, Annette Peters, Christopher Schlett, Fabian Bamberg, Christian Wachinger

Morphological analysis of organs based on images is a key task in medical image computing. Several approaches have been proposed for the quantitative assessment of morphological changes, and they have been widely used to analyze the effects of aging, disease, and other factors on organ morphology. In this work, we propose a deep neural network for predicting diabetes from abdominal organ shapes. The network operates directly on raw point clouds without requiring mesh processing or shape alignment. Instead of relying on hand-crafted shape descriptors, an optimal representation is learned during the end-to-end training of the network. For comparison, we extend the state-of-the-art shape descriptor BrainPrint to the AbdomenPrint. Our results demonstrate that the network learns shape representations that separate healthy and diabetic individuals better than traditional representations.
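
A network that operates directly on raw point clouds typically follows the PointNet pattern: a shared per-point MLP followed by a permutation-invariant pooling and a classification head. The PyTorch sketch below assumes such a design; all layer sizes and the binary logit head are illustrative and not taken from the paper.

```python
import torch
import torch.nn as nn

class PointCloudClassifier(nn.Module):
    """PointNet-style sketch: a shared per-point MLP (1x1 convolutions),
    a permutation-invariant max-pool over points, then a binary head.
    Operates on raw (x, y, z) points without meshing or alignment."""
    def __init__(self):
        super().__init__()
        self.point_mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(inplace=True),
            nn.Conv1d(64, 128, 1), nn.ReLU(inplace=True),
        )
        self.head = nn.Sequential(nn.Linear(128, 64), nn.ReLU(inplace=True),
                                  nn.Linear(64, 1))

    def forward(self, points):
        # points: (batch, 3, num_points) raw organ surface coordinates
        features = self.point_mlp(points)       # (batch, 128, num_points)
        pooled = features.max(dim=2).values     # order-invariant pooling
        return self.head(pooled)                # (batch, 1) diabetes logit

model = PointCloudClassifier()
logits = model(torch.randn(4, 3, 1024))  # 4 organ shapes, 1024 points each
```

Because the max-pool is symmetric in its inputs, the prediction is invariant to the ordering of the points, which is what removes the need for mesh processing or alignment.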

* Accepted for publication at the ShapeMI MICCAI Workshop 2018 