Ole Solheim

Segmentation of glioblastomas in early post-operative multi-modal MRI with deep neural networks

Apr 18, 2023
Ragnhild Holden Helland, Alexandros Ferles, André Pedersen, Ivar Kommers, Hilko Ardon, Frederik Barkhof, Lorenzo Bello, Mitchel S. Berger, Tora Dunås, Marco Conti Nibali, Julia Furtner, Shawn Hervey-Jumper, Albert J. S. Idema, Barbara Kiesel, Rishi Nandoe Tewari, Emmanuel Mandonnet, Domenique M. J. Müller, Pierre A. Robe, Marco Rossi, Lisa M. Sagberg, Tommaso Sciortino, Tom Aalders, Michiel Wagemakers, Georg Widhalm, Marnix G. Witte, Aeilko H. Zwinderman, Paulina L. Majewska, Asgeir S. Jakola, Ole Solheim, Philip C. De Witt Hamer, Ingerid Reinertsen, Roelant S. Eijgelaar, David Bouget

Extent of resection after surgery is one of the main prognostic factors for patients diagnosed with glioblastoma. Estimating it accurately requires segmentation and classification of residual tumor in post-operative MR images. The current standard method for this estimation is subject to high inter- and intra-rater variability, and an automated method for segmenting residual tumor in early post-operative MRI could yield a more accurate estimate of the extent of resection. In this study, two state-of-the-art neural network architectures for pre-operative segmentation were trained for the task. The models were extensively validated on a multicenter dataset of nearly 1000 patients from 12 hospitals in Europe and the United States. The best segmentation performance was a 61% Dice score, and the best classification performance was about 80% balanced accuracy, with a demonstrated ability to generalize across hospitals. In addition, the segmentation performance of the best models was on par with that of human expert raters. The predicted segmentations can be used to accurately classify patients into those with residual tumor and those with gross total resection.
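
As a rough illustration of the evaluation and classification steps described in the abstract, the sketch below computes a Dice score between a predicted and a reference binary mask and labels a case as residual tumor versus gross total resection from the predicted residual volume. It is a minimal sketch, not the paper's pipeline; in particular, the `threshold_ml` value is an illustrative assumption, not the paper's criterion.

```python
import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """Soerensen-Dice overlap between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return 2.0 * intersection / (pred.sum() + gt.sum() + eps)

def has_residual_tumor(pred: np.ndarray, voxel_ml: float,
                       threshold_ml: float = 0.175) -> bool:
    """Classify residual tumor vs. gross total resection from the predicted
    residual volume; the threshold is an illustrative assumption."""
    return pred.sum() * voxel_ml > threshold_ml
```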

* 13 pages, 4 figures, 4 tables 

RESECT-SEG: Open access annotations of intra-operative brain tumor ultrasound images

Jul 13, 2022
Bahareh Behboodi, Francois-Xavier Carton, Matthieu Chabanas, Sandrine De Ribaupierre, Ole Solheim, Bodil K. R. Munkvold, Hassan Rivaz, Yiming Xiao, Ingerid Reinertsen

Purpose: Registration and segmentation of magnetic resonance (MR) and ultrasound (US) images play an essential role in surgical planning and resection of brain tumors. However, validating these techniques is challenging due to the scarcity of publicly accessible sources with high-quality ground truth. To this end, we propose a unique annotation dataset of tumor tissues and resection cavities for the previously published RESECT dataset (Xiao et al. 2017) to encourage more rigorous assessment of image processing techniques.

Acquisition and validation methods: The RESECT database consists of MR and intraoperative US (iUS) images of 23 patients who underwent resection surgeries. The proposed dataset contains tumor tissue and resection cavity annotations of the iUS images. The quality of the annotations was validated by two highly experienced neurosurgeons through several assessment criteria.

Data format and availability: Annotations of tumor tissues and resection cavities are provided in 3D NIfTI format. Both sets of annotations are accessible online at https://osf.io/6y4db.

Discussion and potential applications: The proposed database includes tumor tissue and resection cavity annotations of real-world clinical ultrasound brain images for evaluating segmentation and registration methods. These labels could also be used to train deep learning approaches. Ultimately, this dataset should further improve the quality of image guidance in neurosurgery.
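
To give a sense of how the released NIfTI annotations can be consumed, the sketch below loads a tumor and a cavity label with nibabel and derives volumes in millilitres. The file names are hypothetical placeholders; the actual naming follows the layout of the OSF archive.

```python
import nibabel as nib
import numpy as np

# Hypothetical file names standing in for the OSF archive's layout.
tumor = nib.load("Case23-US-before_tumor.nii.gz")
cavity = nib.load("Case23-US-after_cavity.nii.gz")

tumor_mask = tumor.get_fdata().astype(bool)
cavity_mask = cavity.get_fdata().astype(bool)

# Voxel volume in ml, derived from the header spacing (mm^3 -> ml).
voxel_ml = float(np.prod(tumor.header.get_zooms()[:3])) / 1000.0
print(f"tumor: {tumor_mask.sum() * voxel_ml:.2f} ml, "
      f"cavity: {cavity_mask.sum() * voxel_ml:.2f} ml")
```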

* Bahareh Behboodi and Francois-Xavier Carton share the first authorship 

Meningioma segmentation in T1-weighted MRI leveraging global context and attention mechanisms

Jan 19, 2021
David Bouget, André Pedersen, Sayied Abdol Mohieb Hosainey, Ole Solheim, Ingerid Reinertsen

Meningiomas are the most common type of primary brain tumor, accounting for approximately 30% of all brain tumors. A substantial number of these tumors are never surgically removed but rather monitored over time. Automatic and precise meningioma segmentation is therefore beneficial for reliable growth estimation and patient-specific treatment planning. In this study, we propose the inclusion of attention mechanisms on top of a U-Net architecture: (i) Attention-gated U-Net (AGUNet) and (ii) Dual Attention U-Net (DAUNet), using a 3D MRI volume as input. Attention has the potential to leverage the global context and capture relationships between features across the entire volume. To limit the spatial resolution degradation and loss of detail inherent to encoder-decoder architectures, we studied the impact of multi-scale input and deep supervision components. The proposed architectures are trainable end-to-end, and each concept can be seamlessly disabled for ablation studies. The validation studies were performed using 5-fold cross-validation over 600 T1-weighted MRI volumes from St. Olavs University Hospital, Trondheim, Norway. The best-performing architecture reached an average Dice score of 81.6% with an F1-score of 95.6%. Precision was almost perfect at 98%, but meningiomas smaller than 3 ml were occasionally missed, yielding an overall recall of 93%. Leveraging global context from a 3D MRI volume provided the best performance, even though the native volume resolution could not be processed directly. Overall, near-perfect detection was achieved for meningiomas larger than 3 ml, which is relevant for clinical use. In the future, multi-scale designs and refinement networks should be further investigated to improve performance, and a larger number of cases with meningiomas below 3 ml might be needed to improve performance on the smallest tumors.
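
For readers unfamiliar with attention gating, the sketch below shows a minimal additive attention gate in PyTorch, in the spirit of Attention U-Net (Oktay et al.). It illustrates the general mechanism only and is not the paper's exact AGUNet or DAUNet block.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Minimal additive attention gate over a 3D skip connection."""
    def __init__(self, skip_ch: int, gate_ch: int, inter_ch: int):
        super().__init__()
        self.theta = nn.Conv3d(skip_ch, inter_ch, kernel_size=1)
        self.phi = nn.Conv3d(gate_ch, inter_ch, kernel_size=1)
        self.psi = nn.Conv3d(inter_ch, 1, kernel_size=1)

    def forward(self, x: torch.Tensor, g: torch.Tensor) -> torch.Tensor:
        # x: skip-connection features; g: coarser gating signal, assumed
        # already upsampled to x's spatial size.
        attn = torch.sigmoid(self.psi(torch.relu(self.theta(x) + self.phi(g))))
        return x * attn  # suppress irrelevant regions in the skip path

# Example: gate a skip connection with a decoder feature map.
gate = AttentionGate(skip_ch=32, gate_ch=64, inter_ch=16)
x = torch.randn(1, 32, 16, 16, 16)
g = torch.randn(1, 64, 16, 16, 16)
print(gate(x, g).shape)  # torch.Size([1, 32, 16, 16, 16])
```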

* 16 pages, 5 figures, 3 tables. Submitted to Artificial Intelligence in Medicine 

Fast meningioma segmentation in T1-weighted MRI volumes using a lightweight 3D deep learning architecture

Oct 14, 2020
David Bouget, André Pedersen, Sayied Abdol Mohieb Hosainey, Johanna Vanel, Ole Solheim, Ingerid Reinertsen

Automatic and consistent meningioma segmentation in T1-weighted MRI volumes, and the corresponding volumetric assessment, is useful for diagnosis, treatment planning, and tumor growth evaluation. In this paper, we optimized segmentation accuracy and processing speed using a large number of both surgically treated meningiomas and untreated meningiomas followed at the outpatient clinic. We studied two different 3D neural network architectures: (i) a simple encoder-decoder similar to a 3D U-Net, and (ii) a lightweight multi-scale architecture (PLS-Net). In addition, we studied the impact of different training schemes. For the validation studies, we used 698 T1-weighted MR volumes from St. Olavs University Hospital, Trondheim, Norway. The models were evaluated in terms of detection accuracy, segmentation accuracy, and training/inference speed. While both architectures reached a similar average Dice score of 70%, PLS-Net was more accurate, with an F1-score of up to 88%. The highest accuracy was achieved for the largest meningiomas. In terms of speed, PLS-Net converged in about 50 hours, whereas U-Net required about 130 hours. Inference with PLS-Net takes less than a second on GPU and about 15 seconds on CPU. Overall, with mixed precision training, it was possible to train competitive segmentation models in a relatively short time using the lightweight PLS-Net architecture. In the future, the focus should shift toward the segmentation of small meningiomas (less than 2 ml) to improve the clinical relevance of automatic and early diagnosis and of growth-speed estimates.
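
The mixed precision training credited here for the short training times can be reproduced with PyTorch's automatic mixed precision utilities. The sketch below is a generic example of that technique, not the paper's training code: it assumes a CUDA device, and the toy model and random volumes stand in for PLS-Net and real MRI data.

```python
import torch
import torch.nn as nn
from torch.cuda.amp import GradScaler, autocast

# Toy stand-ins for PLS-Net and real MRI volumes (assumes a CUDA device).
model = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.Conv3d(8, 2, 1)).cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
scaler = GradScaler()

volumes = torch.randn(2, 1, 32, 32, 32, device="cuda")
labels = torch.randint(0, 2, (2, 32, 32, 32), device="cuda")

for _ in range(10):
    optimizer.zero_grad()
    with autocast():                 # run the forward pass in float16 where safe
        loss = loss_fn(model(volumes), labels)
    scaler.scale(loss).backward()    # scale the loss to avoid fp16 underflow
    scaler.step(optimizer)
    scaler.update()
```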

* 15 pages, 7 figures, submitted to SPIE journal of Medical Imaging 