Yuncheng Zhou

Deep Learning methods for automatic evaluation of delayed enhancement-MRI. The results of the EMIDEC challenge

Aug 10, 2021
Alain Lalande, Zhihao Chen, Thibaut Pommier, Thomas Decourselle, Abdul Qayyum, Michel Salomon, Dominique Ginhac, Youssef Skandarani, Arnaud Boucher, Khawla Brahim, Marleen de Bruijne, Robin Camarasa, Teresa M. Correia, Xue Feng, Kibrom B. Girum, Anja Hennemuth, Markus Huellebrand, Raabid Hussain, Matthias Ivantsits, Jun Ma, Craig Meyer, Rishabh Sharma, Jixi Shi, Nikolaos V. Tsekos, Marta Varela, Xiyue Wang, Sen Yang, Hannu Zhang, Yichi Zhang, Yuncheng Zhou, Xiahai Zhuang, Raphael Couturier, Fabrice Meriaudeau

A key factor for assessing the state of the heart after myocardial infarction (MI) is to measure whether the myocardial segment is viable after reperfusion or revascularization therapy. Delayed enhancement-MRI (DE-MRI), which is performed several minutes after injection of the contrast agent, provides high contrast between viable and nonviable myocardium and is therefore a method of choice to evaluate the extent of MI. This paper presents the results of the EMIDEC challenge, which focused on the automatic assessment of myocardial status. The challenge's main objectives were twofold: first, to evaluate whether deep learning methods can distinguish between normal and pathological cases; second, to automatically quantify the extent of myocardial infarction. The publicly available database consists of 150 exams divided into 50 cases with normal MRI after injection of a contrast agent and 100 cases with myocardial infarction (and therefore with a hyperenhanced area on DE-MRI), regardless of their inclusion through the cardiac emergency department. Along with the MRI, clinical characteristics are also provided. The results obtained from the submitted methods show that automatic classification of an exam is a reachable task (the best method achieving an accuracy of 0.92) and that automatic segmentation of the myocardium is possible. However, segmentation of the diseased area needs to be improved, mainly because of the small size of these areas and their lack of contrast with the surrounding structures.

* Submitted to Medical Image Analysis 
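As a rough illustration of how segmentation results in a challenge like this might be scored, the sketch below computes the Dice overlap between a predicted and a reference label map. The label ids and arrays are hypothetical, and the official EMIDEC evaluation may use additional metrics; this is only a minimal example of overlap scoring.

import numpy as np

def dice_coefficient(pred: np.ndarray, ref: np.ndarray, label: int) -> float:
    """Dice overlap for one label between two integer label maps of equal shape."""
    p = (pred == label)
    r = (ref == label)
    denom = p.sum() + r.sum()
    if denom == 0:
        return 1.0  # both structures empty: treat as perfect agreement
    return 2.0 * np.logical_and(p, r).sum() / denom

# Hypothetical usage: 3D label maps where 2 = myocardium and 3 = infarct.
pred = np.random.randint(0, 4, size=(8, 64, 64))
ref = np.random.randint(0, 4, size=(8, 64, 64))
print(dice_coefficient(pred, ref, label=2), dice_coefficient(pred, ref, label=3))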

Anatomy Prior Based U-net for Pathology Segmentation with Attention

Nov 17, 2020
Yuncheng Zhou, Ke Zhang, Xinzhe Luo, Sihan Wang, Xiahai Zhuang

Pathological area segmentation in cardiac magnetic resonance (MR) images plays a vital role in the clinical diagnosis of cardiovascular diseases. Because of their irregular shape and small size, pathological areas have always been challenging to segment. We propose an anatomy-prior-based framework that combines a U-net segmentation network with an attention technique. Leveraging the fact that the pathological areas are contained within the myocardium, we propose a neighborhood penalty strategy to gauge the inclusion relationship between the myocardium and the myocardial infarction and no-reflow areas. This neighborhood penalty strategy can be applied to any two labels with an inclusive relationship (such as the whole infarction and the myocardium) to form a neighboring loss. The proposed framework is evaluated on the EMIDEC dataset. Results show that our framework is effective for pathological area segmentation.

* 8 pages, 3 figures, to be published in STACOM 2020 (MICCAI Workshop) 
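To make the inclusion idea concrete, here is a minimal PyTorch sketch of a neighboring-style loss that penalises pathology probability predicted outside the structure that should contain it (e.g. infarct outside the myocardium). The function name, channel ordering, and penalty form are our own assumptions, not the authors' exact formulation.

import torch

def neighboring_loss(inner_prob: torch.Tensor, outer_prob: torch.Tensor) -> torch.Tensor:
    """
    inner_prob: (N, 1, H, W) softmax probability of the included label (e.g. infarct).
    outer_prob: (N, 1, H, W) softmax probability of the containing label (e.g. myocardium).
    Penalise inner-label probability wherever the containing label is unlikely.
    """
    violation = inner_prob * (1.0 - outer_prob)  # infarct mass predicted outside the myocardium
    return violation.mean()

# Hypothetical usage on network outputs with channel order [background, myocardium, infarct]:
logits = torch.randn(2, 3, 128, 128, requires_grad=True)
probs = torch.softmax(logits, dim=1)
loss = neighboring_loss(probs[:, 2:3], probs[:, 1:2])
loss.backward()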

Diagnosis of Alzheimer's Disease via Multi-modality 3D Convolutional Neural Network

Feb 26, 2019
Yechong Huang, Jiahang Xu, Yuncheng Zhou, Tong Tong, Xiahai Zhuang, the Alzheimer's Disease Neuroimaging Initiative

Alzheimer's Disease (AD) is one of the neurodegenerative diseases of greatest concern. In the last decade, studies on AD diagnosis have attached great significance to artificial intelligence (AI)-based diagnostic algorithms. Among the various imaging modalities, T1-weighted MRI and 18F-FDG PET are widely used for this task. In this paper, we propose a novel convolutional neural network (CNN) that fuses multi-modality information from T1-MRI and FDG-PET images around the hippocampal area for the diagnosis of AD. Unlike traditional machine learning algorithms, this method does not require manually extracted features and instead uses state-of-the-art 3D image-processing CNNs to learn features for the diagnosis and prognosis of AD. To validate the performance of the proposed network, we trained the classifier with paired T1-MRI and FDG-PET images from the ADNI datasets, including 731 Normal (NL) subjects, 647 AD subjects, 441 stable MCI (sMCI) subjects, and 326 progressive MCI (pMCI) subjects. We obtained maximal accuracies of 90.10% for the NL/AD task, 87.46% for the NL/pMCI task, and 76.90% for the sMCI/pMCI task. The proposed framework yields results comparable to state-of-the-art approaches. Moreover, the experimental results demonstrate that (1) segmentation is not a prerequisite when using a CNN, and (2) the hippocampal area provides enough information for AD diagnosis. Keywords: Alzheimer's Disease, Multi-modality, Image Classification, CNN, Deep Learning, Hippocampal

* 21 pages, 5 figures, 9 tables 
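For orientation, the sketch below shows a generic two-branch 3D CNN that fuses an MRI patch and a PET patch by concatenating branch features before a linear classifier. The patch size, layer widths, and fusion point are hypothetical choices that only illustrate the multi-modality fusion idea, not the paper's actual architecture.

import torch
import torch.nn as nn

class TwoBranch3DCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool3d(2),
                nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            )
        self.mri_branch = branch()   # processes the T1-MRI hippocampal patch
        self.pet_branch = branch()   # processes the FDG-PET hippocampal patch
        self.classifier = nn.Linear(32 + 32, num_classes)  # late fusion by concatenation

    def forward(self, mri: torch.Tensor, pet: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.mri_branch(mri), self.pet_branch(pet)], dim=1)
        return self.classifier(fused)

# Hypothetical usage on 32^3 patches around the hippocampus:
model = TwoBranch3DCNN()
mri = torch.randn(4, 1, 32, 32, 32)
pet = torch.randn(4, 1, 32, 32, 32)
print(model(mri, pet).shape)  # -> torch.Size([4, 2])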