Fares Bougourzi

Deep Learning Techniques for Hyperspectral Image Analysis in Agriculture: A Review

Apr 26, 2023
Mohamed Fadhlallah Guerri, Cosimo Distante, Paolo Spagnolo, Fares Bougourzi, Abdelmalik Taleb-Ahmed

Figures 1–4 for Deep Learning Techniques for Hyperspectral Image Analysis in Agriculture: A Review

In recent years, hyperspectral imaging (HSI) has gained considerable popularity among computer vision researchers for its potential in solving remote sensing problems, especially in the agricultural field. However, HSI classification is a complex task due to the high redundancy of spectral bands, limited training samples, and the non-linear relationship between spatial position and spectral bands. Fortunately, deep learning techniques have shown promising results in HSI analysis. This literature review explores recent applications of deep learning approaches such as Autoencoders, Convolutional Neural Networks (1D, 2D, and 3D), Recurrent Neural Networks, Deep Belief Networks, and Generative Adversarial Networks in agriculture. The performance of these approaches is evaluated and discussed on well-known land cover datasets, including Indian Pines, Salinas Valley, and Pavia University.


D-TrAttUnet: Dual-Decoder Transformer-Based Attention Unet Architecture for Binary and Multi-classes Covid-19 Infection Segmentation

Mar 27, 2023
Fares Bougourzi, Cosimo Distante, Fadi Dornaika, Abdelmalik Taleb-Ahmed

Figures 1–4 for D-TrAttUnet: Dual-Decoder Transformer-Based Attention Unet Architecture for Binary and Multi-classes Covid-19 Infection Segmentation

Over the last three years, the world has faced a global crisis caused by the Covid-19 pandemic. Medical imaging has played a crucial role in the fight against this disease and in saving human lives; indeed, CT scans have proved their efficiency in diagnosing, detecting, and following up Covid-19 infection. In this paper, we propose a new Transformer-CNN based approach for Covid-19 infection segmentation from CT slices. The proposed D-TrAttUnet architecture has an Encoder-Decoder structure, in which a compound Transformer-CNN encoder and Dual-Decoders are proposed. The Transformer-CNN encoder is built using Transformer layers, UpResBlocks, ResBlocks, and max-pooling layers. The Dual-Decoder consists of two identical CNN decoders with attention gates. The two decoders segment the infection and the lung regions simultaneously, and the losses of the two tasks are joined. The proposed D-TrAttUnet architecture is evaluated for both binary and multi-class Covid-19 infection segmentation. The experimental results demonstrate the efficiency of the proposed approach in dealing with the complexity of the Covid-19 segmentation task from limited data. Furthermore, the D-TrAttUnet architecture outperforms three baseline CNN segmentation architectures (Unet, AttUnet, and Unet++) and three state-of-the-art architectures (AnamNet, SCOATNet, and CopleNet) in both binary and multi-class segmentation tasks.
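The joint training of the two decoder heads can be sketched as a weighted sum of two per-task losses. A minimal pure-Python sketch follows; the equal weighting and the plain binary cross-entropy are assumptions for illustration, not the exact losses used by D-TrAttUnet:

```python
import math

def bce(pred, target, eps=1e-7):
    """Binary cross-entropy averaged over a flattened mask of probabilities."""
    return -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
                for p, t in zip(pred, target)) / len(pred)

def joint_loss(infection_pred, infection_mask, lung_pred, lung_mask, w=0.5):
    """Join the losses of the two decoder heads (infection and lung
    segmentation). `w` is a hypothetical task-weighting factor."""
    return w * bce(infection_pred, infection_mask) + (1 - w) * bce(lung_pred, lung_mask)
```

Training the lung-segmentation head alongside the infection head this way gives the shared encoder an auxiliary signal, which is one plausible reading of why the joint formulation helps with limited data.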


2D and 3D CNN-Based Fusion Approach for COVID-19 Severity Prediction from 3D CT-Scans

Mar 15, 2023
Fares Bougourzi, Fadi Dornaika, Amir Nakib, Cosimo Distante, Abdelmalik Taleb-Ahmed

Figures 1–4 for 2D and 3D CNN-Based Fusion Approach for COVID-19 Severity Prediction from 3D CT-Scans

Since its appearance in late 2019, Covid-19 has become an active research topic for the artificial intelligence (AI) community, and one of the most interesting topics is the analysis of Covid-19 in medical imaging. CT-scan imaging is the most informative tool for this disease. This work is part of the 3rd COV19D competition for Covid-19 Severity Prediction. To address the large gap between validation and test results observed in the previous edition of this competition, we propose to combine the predictions of 2D and 3D CNNs. For the 2D CNN approach, we propose the 2B-InceptResnet architecture, which consists of two paths for the segmented lungs and the infection of all slices of the input CT-scan, respectively. Each path consists of a ConvLayer and an Inception-ResNet model pretrained on ImageNet. For the 3D CNN approach, we propose the hybrid-DeCoVNet architecture, which consists of four blocks: a Stem, four 3D-ResNet layers, a Classification Head, and a Decision layer. Our proposed approaches outperformed the baseline approach on the validation data of the 3rd COV19D competition for Covid-19 Severity Prediction by 36%.
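The abstract does not specify the fusion rule used to combine the two branches. One minimal late-fusion sketch, assuming a simple weighted average of the severity-class probabilities from the 2D and 3D models (the function name and the mixing weight `alpha` are hypothetical):

```python
def fuse_predictions(probs_2d, probs_3d, alpha=0.5):
    """Late fusion of severity-class probability vectors from a 2D branch
    (e.g. 2B-InceptResnet) and a 3D branch (e.g. hybrid-DeCoVNet).
    Returns the index of the highest fused probability."""
    fused = [alpha * p2 + (1 - alpha) * p3
             for p2, p3 in zip(probs_2d, probs_3d)]
    return max(range(len(fused)), key=fused.__getitem__)
```

Averaging calibrated probabilities from models with different input views (per-slice 2D vs. volumetric 3D) is a common way to reduce the variance that causes validation/test gaps, which matches the stated motivation.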

* arXiv admin note: substantial text overlap with arXiv:2206.15431 

Ensemble CNN models for Covid-19 Recognition and Severity Perdition From 3D CT-scan

Jun 29, 2022
Fares Bougourzi, Cosimo Distante, Fadi Dornaika, Abdelmalik Taleb-Ahmed

Figures 1–4 for Ensemble CNN models for Covid-19 Recognition and Severity Perdition From 3D CT-scan

Since its appearance in late 2019, Covid-19 has become an active research topic for the artificial intelligence (AI) community, and one of the most interesting topics is the analysis of Covid-19 in medical imaging. CT-scan imaging is the most informative tool for this disease. This work is part of the 2nd COV19D competition, which sets two challenges: Covid-19 detection and Covid-19 severity detection from CT-scans. For Covid-19 detection from CT-scans, we propose an ensemble of 2D convolution blocks with Densenet-161 models: each 2D convolutional block with a Densenet-161 architecture is trained separately, and in the testing phase the ensemble averages their probabilities. For Covid-19 severity detection, we propose an ensemble of convolutional layers with Inception models; in addition to the convolutional layers, three Inception variants are used, namely Inception-v3, Inception-v4, and Inception-Resnet. Our proposed approaches outperformed the baseline approach on the validation data of the 2nd COV19D competition by 11% for Covid-19 detection and 16% for Covid-19 severity detection.
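The probability-averaging ensemble described in the abstract can be sketched in a few lines of pure Python (the input is a list of per-model class-probability vectors, e.g. from the three Inception variants; the function name is hypothetical):

```python
def ensemble_average(model_probs):
    """Average the class-probability vectors of several independently
    trained models and return the argmax class of the averaged vector."""
    n = len(model_probs)
    num_classes = len(model_probs[0])
    avg = [sum(m[c] for m in model_probs) / n for c in range(num_classes)]
    return max(range(num_classes), key=avg.__getitem__)
```

Averaging probabilities rather than hard votes lets a confident minority model outweigh uncertain ones, which is the usual rationale for this kind of soft-voting ensemble.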
