Gopal Kotecha
DeepAAA: clinically applicable and generalizable detection of abdominal aortic aneurysm using deep learning

Jul 04, 2019
Jen-Tang Lu, Rupert Brooks, Stefan Hahn, Jin Chen, Varun Buch, Gopal Kotecha, Katherine P. Andriole, Brian Ghoshhajra, Joel Pinto, Paul Vozila, Mark Michalski, Neil A. Tenenholtz

Figures 1–4 for DeepAAA: clinically applicable and generalizable detection of abdominal aortic aneurysm using deep learning

We propose a deep learning-based technique for detection and quantification of abdominal aortic aneurysms (AAAs). The condition, which causes more than 10,000 deaths per year in the United States, is asymptomatic, often detected incidentally, and often missed by radiologists. Our model architecture is a modified 3D U-Net combined with ellipse fitting that performs aorta segmentation and AAA detection. The study uses 321 abdominal-pelvic CT examinations performed by the Massachusetts General Hospital Department of Radiology for training and validation. The model is then further tested for generalizability on a separate set of 57 examinations whose patient demographics and acquisition characteristics differ from the original dataset. DeepAAA achieves high performance on both sets (sensitivity/specificity of 0.91/0.95 and 0.85/1.00, respectively), works on both contrast and non-contrast CT scans, and handles image volumes with varying numbers of slices. We find that DeepAAA exceeds the literature-reported performance of radiologists on incidental AAA detection. We expect the model can serve as an effective background detector in routine CT examinations to prevent incidental AAAs from being missed.
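The quantification step pairs segmentation with ellipse fitting to estimate aortic diameter (an AAA is conventionally defined as an abdominal aortic diameter above 3 cm). As an illustrative sketch only — the paper does not specify the authors' exact fitting procedure — the axis lengths of a vessel cross-section in a binary mask can be recovered from second-order image moments; the function name and `pixel_spacing_mm` parameter below are assumptions:

```python
import numpy as np

def ellipse_axes_mm(mask, pixel_spacing_mm=1.0):
    """Estimate the major/minor axis lengths (in mm) of a vessel
    cross-section by fitting an ellipse via second-order moments.

    Illustrative stand-in for the ellipse-fitting step described in
    the abstract, not the authors' implementation.
    """
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    pts -= pts.mean(axis=0)                      # center on the centroid
    cov = pts.T @ pts / len(pts)                 # 2x2 covariance of the region
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
    # For a filled ellipse with semi-axis a, the variance along that
    # axis is a^2 / 4, so the full axis length is 4 * sqrt(variance).
    major, minor = 4.0 * np.sqrt(eigvals) * pixel_spacing_mm
    return major, minor
```

On a filled circular cross-section of radius 20 px with 1 mm spacing, this returns a diameter close to 40 mm for both axes; comparing the major axis against the 30 mm threshold would then flag a candidate aneurysm.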

* Accepted for publication at MICCAI 2019 

Fully-Automated Analysis of Body Composition from CT in Cancer Patients Using Convolutional Neural Networks

Aug 11, 2018
Christopher P. Bridge, Michael Rosenthal, Bradley Wright, Gopal Kotecha, Florian Fintelmann, Fabian Troschel, Nityanand Miskin, Khanant Desai, William Wrobel, Ana Babic, Natalia Khalaf, Lauren Brais, Marisa Welch, Caitlin Zellers, Neil Tenenholtz, Mark Michalski, Brian Wolpin, Katherine Andriole

Figures 1–4 for Fully-Automated Analysis of Body Composition from CT in Cancer Patients Using Convolutional Neural Networks

The amounts of muscle and fat in a person's body, known as body composition, are correlated with cancer risk, cancer survival, and cardiovascular risk. The current gold standard for measuring body composition requires time-consuming manual segmentation of CT images by an expert reader. In this work, we describe a two-step process that fully automates the analysis of CT body composition, using a DenseNet to select the CT slice and a U-Net to perform segmentation. We train and test our methods on independent cohorts. Our results show Dice scores (0.95–0.98) and correlation coefficients (R=0.99) that compare favorably with human readers. These results suggest that fully automated body composition analysis is feasible, which could enable both clinical use and large-scale population studies.
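The two-step structure — score every axial slice, keep the best candidate, then segment only that slice — can be wired up as below. This is a minimal sketch of the pipeline shape, not the authors' code: `select_slice_score` and `segment_slice` are hypothetical callables standing in for the trained DenseNet and U-Net, and the class labels are assumed placeholders for the tissue compartments:

```python
import numpy as np

def analyze_body_composition(volume, select_slice_score, segment_slice,
                             pixel_area_mm2):
    """Sketch of a two-step body-composition pipeline.

    1) Score each axial slice and keep the argmax (the paper uses a
       DenseNet for slice selection).
    2) Segment the chosen slice into tissue classes (the paper uses a
       U-Net), then report per-class cross-sectional areas in mm^2.

    `select_slice_score` and `segment_slice` are hypothetical stand-ins
    for the trained networks.
    """
    scores = np.array([select_slice_score(s) for s in volume])
    idx = int(np.argmax(scores))            # best candidate slice
    labels = segment_slice(volume[idx])     # per-pixel class labels
    areas = {c: int((labels == c).sum()) * pixel_area_mm2
             for c in (1, 2, 3)}            # assumed tissue class ids
    return idx, areas
```

Restricting segmentation to a single selected slice keeps the expensive step cheap and mirrors the manual protocol, where an expert reader segments one reference-level slice rather than the whole volume.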
