Bouthaina Slika

Vision Transformer-based Model for Severity Quantification of Lung Pneumonia Using Chest X-ray Images

Mar 18, 2023
Bouthaina Slika, Fadi Dornaika, Hamid Merdji, Karim Hammoudi

Developing generic and reliable approaches for diagnosing and assessing the severity of COVID-19 from chest X-rays (CXRs) requires a large number of well-maintained COVID-19 datasets. Existing severity quantification architectures require computationally expensive training to achieve their best results, yet healthcare professionals need tools that quickly and automatically identify COVID-19 patients and predict the associated severity indicators. In this work, we propose a Vision Transformer (ViT)-based neural network model that relies on a small number of trainable parameters to quantify the severity of COVID-19 and other lung diseases. We present a practical approach to quantifying severity from CXRs, called Vision Transformer Regressor Infection Prediction (ViTReg-IP), which combines a ViT backbone with a regression head. We investigate the generalization potential of our model on a variety of additional chest radiograph test sets from different open sources, and in this context we perform a comparative study with several competing deep learning methods. The experimental results show that our model delivers peak performance in quantifying severity, with high generalizability, at a relatively low computational cost. The source code used in our work is publicly available at https://github.com/bouthainas/ViTReg-IP.
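
The abstract describes ViTReg-IP as a ViT backbone feeding a regression head. Below is a minimal sketch of that architecture, assuming a timm-provided ViT and a single scalar severity output; the backbone choice, head sizes, and output dimensionality are illustrative assumptions, not the paper's exact configuration (see the linked repository for the authors' implementation):

```python
import torch
import torch.nn as nn
import timm  # assumed dependency; any ViT feature extractor would do

class ViTRegIP(nn.Module):
    """Illustrative sketch: ViT backbone + small regression head."""
    def __init__(self, backbone: str = "vit_small_patch16_224"):
        super().__init__()
        # num_classes=0 makes timm return pooled features instead of logits
        self.vit = timm.create_model(backbone, pretrained=True, num_classes=0)
        # lightweight regression head: pooled features -> scalar severity score
        self.head = nn.Sequential(
            nn.Linear(self.vit.num_features, 128),
            nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.vit(x)).squeeze(-1)

model = ViTRegIP()
x = torch.randn(2, 3, 224, 224)  # two CXR-sized RGB inputs
severity = model(x)              # shape (2,): one severity score per image
```

Because only the small head is task-specific, most trainable capacity lives in the (optionally frozen or lightly fine-tuned) backbone, which is consistent with the abstract's emphasis on a small number of trainable parameters and low computational cost.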

* This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.

SuperpixelGridCut, SuperpixelGridMean and SuperpixelGridMix Data Augmentation

Apr 11, 2022
Karim Hammoudi, Adnane Cabani, Bouthaina Slika, Halim Benhabiles, Fadi Dornaika, Mahmoud Melkemi

We propose a novel data augmentation approach based on irregular superpixel decomposition. This approach, called SuperpixelGridMasks, extends the original image datasets required at the training stage of machine learning architectures, in order to increase their performance. Three variants, named SuperpixelGridCut, SuperpixelGridMean, and SuperpixelGridMix, are presented. These grid-based methods produce a new style of image transformation by dropping and fusing information. Extensive experiments with various image classification models and datasets show that our methods significantly outperform baseline performance. The comparative study also shows that our methods can surpass other data augmentation techniques. Experimental results on image recognition datasets of varied natures demonstrate the efficiency of these new methods. The SuperpixelGridCut, SuperpixelGridMean, and SuperpixelGridMix code is publicly available at https://github.com/hammoudiproject/SuperpixelGridMasks.
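
The three variants drop or fuse information inside regions of an irregular superpixel decomposition. Below is a minimal sketch of that idea, assuming scikit-image's slic for the segmentation; the function name, per-region probability p, and parameter defaults are illustrative assumptions, not the authors' exact formulation (see the linked repository for the reference implementation):

```python
import numpy as np
from skimage.segmentation import slic  # irregular superpixel decomposition

def superpixel_grid_augment(img, mode="cut", other=None, p=0.3,
                            n_segments=100, rng=None):
    """Illustrative sketch: drop or fuse randomly chosen superpixel regions."""
    rng = rng if rng is not None else np.random.default_rng()
    # decompose the image into irregular superpixels
    segments = slic(img, n_segments=n_segments, start_label=0)
    out = img.copy()
    for label in np.unique(segments):
        if rng.random() < p:               # alter each region with probability p
            mask = segments == label
            if mode == "cut":              # SuperpixelGridCut: drop the region
                out[mask] = 0
            elif mode == "mean":           # SuperpixelGridMean: fuse with region mean
                out[mask] = img[mask].mean(axis=0)
            elif mode == "mix" and other is not None:
                out[mask] = other[mask]    # SuperpixelGridMix: paste from another image
    return out
```

Unlike rectangular-patch methods such as CutOut or CutMix, the masked regions here follow irregular superpixel boundaries, so the erased or mixed areas tend to align with object and texture contours rather than cutting across them arbitrarily.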

* The project is available at https://github.com/hammoudiproject/SuperpixelGridMasks 