Fadi Dornaika

D-TrAttUnet: Dual-Decoder Transformer-Based Attention Unet Architecture for Binary and Multi-classes Covid-19 Infection Segmentation

Mar 27, 2023
Fares Bougourzi, Cosimo Distante, Fadi Dornaika, Abdelmalik Taleb-Ahmed

In the last three years, the world has been facing a global crisis caused by the Covid-19 pandemic. Medical imaging has played a crucial role in fighting this disease and saving human lives. Indeed, CT scans have proved their efficiency in diagnosing, detecting, and following up Covid-19 infection. In this paper, we propose a new Transformer-CNN based approach for Covid-19 infection segmentation from CT slices. The proposed D-TrAttUnet architecture has an Encoder-Decoder structure, combining a compound Transformer-CNN encoder with dual decoders. The Transformer-CNN encoder is built using Transformer layers, UpResBlocks, ResBlocks and max-pooling layers. The dual decoder consists of two identical CNN decoders with attention gates. The two decoders segment the infection and the lung regions simultaneously, and the losses of the two tasks are combined. The proposed D-TrAttUnet architecture is evaluated for both binary and multi-class Covid-19 infection segmentation. The experimental results prove the efficiency of the proposed approach in dealing with the complexity of the Covid-19 segmentation task from limited data. Furthermore, the D-TrAttUnet architecture outperforms three baseline CNN segmentation architectures (Unet, AttUnet and Unet++) and three state-of-the-art architectures (AnamNet, SCOATNet and CopleNet) in both binary and multi-class segmentation tasks.
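As a rough PyTorch sketch of the dual-decoder idea (a shared encoder feeding two identical decoders whose task losses are summed), the snippet below may help. It is a minimal illustration under assumed channel widths; the paper's Transformer layers, UpResBlocks and attention gates are omitted.

```python
# Minimal sketch of the dual-decoder pattern: one shared encoder, two
# identical CNN decoders (infection and lung), and a joint loss. All block
# sizes are illustrative assumptions, not the D-TrAttUnet configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.b1, self.b2, self.b3 = conv_block(1, 32), conv_block(32, 64), conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
    def forward(self, x):
        s1 = self.b1(x)              # full resolution
        s2 = self.b2(self.pool(s1))  # 1/2 resolution
        s3 = self.b3(self.pool(s2))  # 1/4 resolution
        return s1, s2, s3

class Decoder(nn.Module):
    def __init__(self, n_out):
        super().__init__()
        self.up2, self.c2 = nn.ConvTranspose2d(128, 64, 2, stride=2), conv_block(128, 64)
        self.up1, self.c1 = nn.ConvTranspose2d(64, 32, 2, stride=2), conv_block(64, 32)
        self.head = nn.Conv2d(32, n_out, 1)
    def forward(self, s1, s2, s3):
        x = self.c2(torch.cat([self.up2(s3), s2], dim=1))  # skip connection
        x = self.c1(torch.cat([self.up1(x), s1], dim=1))
        return self.head(x)

class DualDecoderUnet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = Encoder()
        self.dec_infection, self.dec_lung = Decoder(1), Decoder(1)
    def forward(self, x):
        feats = self.enc(x)
        return self.dec_infection(*feats), self.dec_lung(*feats)

model = DualDecoderUnet()
inf_logits, lung_logits = model(torch.randn(2, 1, 128, 128))
inf_mask, lung_mask = torch.rand_like(inf_logits).round(), torch.rand_like(lung_logits).round()
# Joint objective: the two per-task losses are combined.
loss = F.binary_cross_entropy_with_logits(inf_logits, inf_mask) \
     + F.binary_cross_entropy_with_logits(lung_logits, lung_mask)
```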

Vision Transformer-based Model for Severity Quantification of Lung Pneumonia Using Chest X-ray Images

Mar 18, 2023
Bouthaina Slika, Fadi Dornaika, Hamid Merdji, Karim Hammoudi

To develop generic and reliable approaches for diagnosing and assessing the severity of COVID-19 from chest X-rays (CXR), a large number of well-maintained COVID-19 datasets are needed. Existing severity quantification architectures require expensive training to achieve the best results, yet healthcare professionals need computational tools to quickly and automatically identify COVID-19 patients and predict the associated severity indicators. In this work, we propose a Vision Transformer (ViT)-based neural network model that relies on a small number of trainable parameters to quantify the severity of COVID-19 and other lung diseases. We present a feasible approach for quantifying severity from CXR images, called Vision Transformer Regressor Infection Prediction (ViTReg-IP), which combines a ViT backbone with a regression head. We investigate the generalization potential of our model using a variety of additional chest radiograph test datasets from different open sources, and perform a comparative study against several competing deep learning methods. The experimental results show that our model achieves peak performance in severity quantification with high generalizability at a relatively low computational cost. The source code used in our work is publicly available at https://github.com/bouthainas/ViTReg-IP.
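As a loose sketch of the "ViT backbone plus regression head" pattern described above, the snippet below pairs a small pretrained ViT (via the timm library) with a single linear output for a scalar severity score. The backbone name, head width and loss are assumptions; the authors' exact configuration is in the linked repository.

```python
# Hedged sketch of a ViT regressor for severity scoring; illustrative only.
import torch
import torch.nn as nn
import timm  # third-party library: pip install timm

class ViTRegressor(nn.Module):
    def __init__(self, backbone="vit_tiny_patch16_224"):
        super().__init__()
        # num_classes=0 makes timm return pooled features instead of logits.
        self.backbone = timm.create_model(backbone, pretrained=True, num_classes=0)
        self.head = nn.Linear(self.backbone.num_features, 1)  # scalar severity

    def forward(self, x):  # x: (B, 3, 224, 224); grayscale CXR replicated to 3 channels
        return self.head(self.backbone(x)).squeeze(-1)

model = ViTRegressor()
scores = model(torch.randn(4, 3, 224, 224))
# Regress against radiographic severity scores (targets here are made up).
loss = nn.functional.mse_loss(scores, torch.tensor([1.0, 3.0, 0.0, 5.0]))
```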

* This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible 

2D and 3D CNN-Based Fusion Approach for COVID-19 Severity Prediction from 3D CT-Scans

Mar 15, 2023
Fares Bougourzi, Fadi Dornaika, Amir Nakib, Cosimo Distante, Abdelmalik Taleb-Ahmed

Since its appearance in late 2019, Covid-19 has become an active research topic for the artificial intelligence (AI) community, and one of the most interesting topics is the analysis of Covid-19 medical imaging. CT-scan imaging is the most informative tool for assessing this disease. This work is part of the 3rd COV19D competition for Covid-19 severity prediction. To address the large gap between validation and test results observed in the previous edition of this competition, we propose to combine the predictions of 2D and 3D CNNs. For the 2D approach, we propose the 2B-InceptResnet architecture, which consists of two paths processing the segmented lungs and the infection regions of all slices of the input CT-scan, respectively; each path consists of a ConvLayer and an Inception-ResNet model pretrained on ImageNet. For the 3D approach, we propose the hybrid-DeCoVNet architecture, which consists of four blocks: a Stem, four 3D-ResNet layers, a Classification Head and a Decision layer. Our proposed approaches outperformed the baseline on the validation data of the 3rd COV19D severity prediction challenge by 36%.
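The fusion step itself can be sketched as simple late fusion of class probabilities: slice-level probabilities from the 2D model are averaged over the CT volume and combined with the 3D model's output. The weighting below is an assumption, and the 2B-InceptResnet and hybrid-DeCoVNet internals are not reproduced.

```python
# Hedged sketch of 2D/3D late fusion for severity classification.
import torch

def fuse_predictions(logits_2d_per_slice, logits_3d, w2d=0.5):
    """logits_2d_per_slice: (n_slices, n_classes); logits_3d: (n_classes,)."""
    p2d = torch.softmax(logits_2d_per_slice, dim=1).mean(dim=0)  # aggregate slices
    p3d = torch.softmax(logits_3d, dim=0)
    return w2d * p2d + (1.0 - w2d) * p3d  # fused class probabilities

# Four severity classes, as in the COV19D severity challenge.
fused = fuse_predictions(torch.randn(64, 4), torch.randn(4))
severity_class = fused.argmax().item()
```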

* arXiv admin note: substantial text overlap with arXiv:2206.15431 

Ensemble CNN models for Covid-19 Recognition and Severity Prediction From 3D CT-scan

Jun 29, 2022
Fares Bougourzi, Cosimo Distante, Fadi Dornaika, Abdelmalik Taleb-Ahmed

Since its appearance in late 2019, Covid-19 has become an active research topic for the artificial intelligence (AI) community, and one of the most interesting topics is the analysis of Covid-19 medical imaging. CT-scan imaging is the most informative tool for assessing this disease. This work is part of the 2nd COV19D competition, which sets two challenges: Covid-19 detection and Covid-19 severity detection from CT-scans. For Covid-19 detection, we propose an ensemble of 2D convolution blocks with Densenet-161 models: each 2D convolutional block with a Densenet-161 architecture is trained separately, and at test time the ensemble averages their predicted probabilities. For Covid-19 severity detection, we propose an ensemble of convolutional layers with Inception models; in addition to the convolutional layers, three Inception variants are used, namely Inception-v3, Inception-v4 and Inception-Resnet. Our proposed approaches outperformed the baseline on the validation data of the 2nd COV19D competition by 11% and 16% for Covid-19 detection and severity detection, respectively.
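The abstract states that the test-time ensemble averages the member models' probabilities; a generic sketch of that rule is shown below, with the trained Densenet-161 and Inception members replaced by stand-in modules.

```python
# Hedged sketch of probability-averaging over independently trained models.
import torch
import torch.nn as nn

@torch.no_grad()
def ensemble_predict(models, x):
    # Average softmax probabilities across members, then pick the argmax class.
    probs = torch.stack([torch.softmax(m(x), dim=1) for m in models]).mean(dim=0)
    return probs.argmax(dim=1), probs

# Stand-ins for trained members; any modules with matching outputs work.
members = [nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2)) for _ in range(3)]
labels, probs = ensemble_predict(members, torch.randn(8, 3, 32, 32))
```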

SuperpixelGridCut, SuperpixelGridMean and SuperpixelGridMix Data Augmentation

Apr 11, 2022
Karim Hammoudi, Adnane Cabani, Bouthaina Slika, Halim Benhabiles, Fadi Dornaika, Mahmoud Melkemi

We propose a novel data augmentation approach based on irregular superpixel decomposition. This approach, called SuperpixelGridMasks, extends original image datasets used in the training stages of machine learning analysis architectures in order to increase their performance. Three variants, named SuperpixelGridCut, SuperpixelGridMean and SuperpixelGridMix, are presented. These grid-based methods produce a new style of image transformations by dropping and fusing information. Extensive experiments using various image classification models and datasets show that baseline performance can be significantly exceeded using our methods, and a comparative study shows that they can also surpass other data augmentation methods. Experimental results obtained on image recognition datasets of varied natures demonstrate the efficiency of these new methods. The SuperpixelGridCut, SuperpixelGridMean and SuperpixelGridMix code is publicly available at https://github.com/hammoudiproject/SuperpixelGridMasks
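A minimal sketch of the underlying idea (decompose the image into superpixels with SLIC, then drop or mean-fill a random subset of them) is given below; parameter values are illustrative assumptions, the Mix variant (which pastes superpixels from a second image) is omitted, and the official implementations live in the linked repository.

```python
# Hedged sketch of SuperpixelGridCut / SuperpixelGridMean-style masking.
import numpy as np
from skimage.segmentation import slic  # third-party: pip install scikit-image

def superpixel_grid_mask(image, n_segments=64, p_drop=0.3, mode="cut", rng=None):
    if rng is None:
        rng = np.random.default_rng()
    labels = slic(image, n_segments=n_segments, start_label=0)  # superpixel map
    out = image.copy()
    for lab in np.unique(labels):
        if rng.random() < p_drop:
            region = labels == lab
            # "cut" drops the region; "mean" fuses it into its average color.
            out[region] = 0 if mode == "cut" else image[region].mean(axis=0)
    return out

img = np.random.rand(128, 128, 3).astype(np.float32)
augmented = superpixel_grid_mask(img, mode="mean")
```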

* The project is available at https://github.com/hammoudiproject/SuperpixelGridMasks 

Deep Learning on Chest X-ray Images to Detect and Evaluate Pneumonia Cases at the Era of COVID-19

Apr 05, 2020
Karim Hammoudi, Halim Benhabiles, Mahmoud Melkemi, Fadi Dornaika, Ignacio Arganda-Carreras, Dominique Collard, Arnaud Scherpereel

Coronavirus disease 2019 (COVID-19) is an infectious disease whose first symptoms are similar to the flu. COVID-19 first appeared in China and spread very quickly to the rest of the world, causing the 2019-20 coronavirus pandemic. In many cases, this disease causes pneumonia. Since pulmonary infections can be observed in radiography images, this paper investigates deep learning methods for automatically analyzing chest X-ray images, with the aim of bringing precision tools to health professionals for screening for COVID-19 and diagnosing confirmed patients. In this context, training datasets, deep learning architectures and analysis strategies have been evaluated on publicly open sets of chest X-ray images. Tailored deep learning models are proposed to detect pneumonia infection cases, notably viral cases. It is assumed that viral pneumonia cases detected in the context of a COVID-19 epidemic have a high probability of being COVID-19 infections. Moreover, easy-to-apply health indicators are proposed for estimating infection status and predicting patient status from the detected pneumonia cases. Experimental results show that deep learning models can be trained on publicly open sets of chest X-ray images to screen for viral pneumonia. Chest X-ray test images of COVID-19-infected patients are successfully diagnosed by the detection models retained for their performance. The efficiency of the proposed health indicators is highlighted through simulated scenarios of patients presenting infections and health problems, combining real and synthetic health data.
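As a generic illustration of the kind of transfer-learning setup such studies rely on (not the authors' exact models), a pretrained CNN can be re-headed for pneumonia classes:

```python
# Hedged sketch: fine-tuning a pretrained CNN for pneumonia classification.
import torch
import torch.nn as nn
from torchvision import models

classes = ["normal", "bacterial_pneumonia", "viral_pneumonia"]  # illustrative labels
model = models.densenet121(weights="IMAGENET1K_V1")  # ImageNet-pretrained backbone
model.classifier = nn.Linear(model.classifier.in_features, len(classes))

# Grayscale chest X-rays are typically replicated to 3 channels for such backbones.
logits = model(torch.randn(2, 3, 224, 224))
loss = nn.functional.cross_entropy(logits, torch.tensor([0, 2]))
```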

* 6 pages 

Design, Implementation and Simulation of a Cloud Computing System for Enhancing Real-time Video Services by using VANET and Onboard Navigation Systems

Nov 25, 2014
Karim Hammoudi, Nabil Ajam, Mohamed Kasraoui, Fadi Dornaika, Karan Radhakrishnan, Karthik Bandi, Qing Cai, Sai Liu

In this paper, we propose a design for a novel, experimental cloud computing system. The proposed system aims to enhance the computational, communication and analytic capabilities of road navigation services by merging several independent technologies, namely vision-based embedded navigation systems, prominent Cloud Computing Systems (CCSs) and Vehicular Ad-hoc NETworks (VANETs). This work presents our initial investigations through the design of a global, generic system. The designed system has been tested with various scenarios of video-based road services, and the associated architecture has been implemented on a small-scale simulator of an in-vehicle embedded system. The implemented architecture has been evaluated on a simulated road service that aids police agencies: the goal of this service is to recognize and track searched individuals and vehicles through a real-time monitoring system remotely connected to moving cars. The presented work demonstrates the potential of our system for efficiently enhancing and diversifying real-time video services in road environments.

* paper accepted for publication in the proceedings of the "17ème Colloque Compression et Représentation des Signaux Audiovisuels" (CORESA), 5p., Reims, France, 2014. (preprint)

A Synergistic Approach for Recovering Occlusion-Free Textured 3D Maps of Urban Facades from Heterogeneous Cartographic Data

Aug 29, 2013
Karim Hammoudi, Fadi Dornaika, Bahman Soheilian, Bruno Vallet, John McDonald, Nicolas Paparoditis

In this paper we present a practical approach for generating an occlusion-free textured 3D map of urban facades through the synergistic use of terrestrial images, 3D point clouds and area-based information. Particularly in dense urban environments, the many urban objects in front of the facades cause significant difficulties at several stages of computational building modeling. The major challenges lie, on the one hand, in extracting complete 3D facade quadrilateral delimitations and, on the other hand, in generating occlusion-free facade textures. For these reasons, we describe a straightforward approach for completing and recovering facade geometry and textures by exploiting the complementarity of terrestrial multi-source imagery and area-based information.

* International Journal of Advanced Robotic Systems, vol. 10, 10p., 2013  