Abstract: Accurate segmentation of abdominal adipose tissue, including subcutaneous adipose tissue (SAT) and visceral adipose tissue (VAT), along with liver segmentation, is essential for understanding body composition and associated health risks such as type 2 diabetes and cardiovascular disease. This study proposes Attention GhostUNet++, a novel deep learning model that incorporates Channel, Spatial, and Depth Attention mechanisms into the GhostUNet++ bottleneck for automated, precise segmentation. Evaluated on the AATTCT-IDS and LiTS datasets, the model achieved Dice coefficients of 0.9430 for VAT, 0.9639 for SAT, and 0.9652 for liver segmentation, surpassing baseline models. Despite minor limitations in boundary-detail segmentation, the proposed model significantly improves feature refinement, contextual understanding, and computational efficiency, offering a robust solution for body composition analysis. The implementation of the proposed Attention GhostUNet++ model is available at: https://github.com/MansoorHayat777/Attention-GhostUNetPlusPlus.
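As a rough illustration of how channel and spatial attention can refine bottleneck features in a UNet++-style encoder-decoder, the sketch below combines a squeeze-and-excitation-style channel gate with a CBAM-style spatial gate. The class names, reduction factor, and the omission of the Depth Attention branch are illustrative assumptions made here and are not taken from the paper or its released implementation.

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Reweight feature channels from globally pooled statistics (SE-style)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(self.pool(x))

class SpatialAttention(nn.Module):
    """Reweight spatial locations from channel-wise mean/max maps (CBAM-style)."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        mask = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * mask

class AttentionBottleneck(nn.Module):
    """Refine bottleneck features with channel attention followed by spatial attention."""
    def __init__(self, channels):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        return self.sa(self.ca(x))

# Example: refine a hypothetical 256-channel bottleneck feature map.
feats = torch.randn(1, 256, 16, 16)
refined = AttentionBottleneck(256)(feats)   # same shape as feats

In this sketch the channel gate emphasizes informative feature maps while the spatial gate emphasizes informative locations, which is the general mechanism behind attention-augmented bottlenecks of this kind.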
Abstract: Recent rapid progress in deep learning for computer vision has drawn growing attention to further improving the accuracy of detection models. Faster R-CNN [32] already provides a state-of-the-art approach to detecting the 80 object categories defined in the COCO dataset. To further improve person detection, we adopt a different approach that achieves state-of-the-art results. ROI pooling is the step in Faster R-CNN that extracts a fixed-size feature representation for each region proposal and passes it on for classification. To enhance this step, we implement dense pooling, which maps the detected person onto a 3D body model and transforms it into UV (surface-coordinate) maps, making it easier to extract the relevant features from the image. To evaluate our approach, we extracted 6,982 images containing the person class from the COCO dataset; the results show that our approach yields significant improvements in detecting the person object in a given image.
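As a small, hedged illustration of the fixed-size ROI feature-extraction step described above, the snippet below uses torchvision's roi_align. The feature-map size, the example box, and the 1/16 spatial scale are placeholder assumptions; this shows only the standard pooling step of Faster R-CNN, not the proposed dense-pooling/UV variant.

import torch
from torchvision.ops import roi_align

# Backbone feature map: batch of 1, 256 channels, 50x50 spatial grid.
features = torch.randn(1, 256, 50, 50)

# One region proposal per row: (batch_index, x1, y1, x2, y2) in image coordinates.
boxes = torch.tensor([[0, 40.0, 60.0, 200.0, 360.0]])

# Pool each proposal to a fixed 7x7 grid; spatial_scale maps image coordinates
# onto the feature map (here assuming the backbone downsamples by 16x).
pooled = roi_align(features, boxes, output_size=(7, 7), spatial_scale=1 / 16)
print(pooled.shape)  # torch.Size([1, 256, 7, 7])

The fixed 7x7 output is what allows proposals of arbitrary size to be fed into the same classification head.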