Abstract: Breast-conserving surgery (BCS) aims to remove malignant lesions completely while preserving as much healthy tissue as possible. Intraoperative margin assessment is essential to balance thorough cancer resection against tissue conservation. A deep ultraviolet fluorescence scanning microscope (DUV-FSM) enables rapid acquisition of whole surface images (WSIs) of excised tissue, providing contrast between malignant and normal tissue. However, breast cancer classification with DUV WSIs is challenging due to their high resolution and complex histopathological features. This study introduces a DUV WSI classification framework that uses a patch-level vision transformer (ViT) to capture both local and global features. Grad-CAM++ saliency weighting highlights diagnostically relevant spatial regions, enhancing interpretability and improving the accuracy of benign versus malignant tissue classification. A comprehensive 5-fold cross-validation demonstrates that the proposed approach significantly outperforms conventional deep learning methods, achieving a classification accuracy of 98.33%.
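The abstract describes a two-stage idea: classify patches of a WSI with a ViT, then weight the patch predictions by Grad-CAM++ saliency before making a slide-level call. The following is a minimal sketch of that aggregation step, assuming a PyTorch/torchvision stack; the patch size, the weighted-mean aggregation rule, the helper name `classify_wsi`, and a precomputed saliency tensor are illustrative assumptions, not the authors' exact pipeline.

```python
# Sketch: saliency-weighted aggregation of patch-level ViT predictions.
# Assumes saliency scores were precomputed per patch (e.g., via Grad-CAM++).
import torch
import torch.nn.functional as F
from torchvision.models import vit_b_16

def classify_wsi(wsi: torch.Tensor, model: torch.nn.Module,
                 saliency: torch.Tensor, patch: int = 224) -> float:
    """wsi      : (3, H, W) whole-surface image, H and W multiples of `patch`
    saliency : (H // patch, W // patch) per-patch Grad-CAM++ scores
    Returns the saliency-weighted malignant probability for the WSI."""
    # Tile the WSI into non-overlapping patches: (N, 3, patch, patch).
    tiles = (wsi.unfold(1, patch, patch).unfold(2, patch, patch)
                .permute(1, 2, 0, 3, 4).reshape(-1, 3, patch, patch))
    with torch.no_grad():
        probs = F.softmax(model(tiles), dim=1)[:, 1]   # P(malignant) per patch
    w = saliency.flatten().clamp(min=0)
    w = w / w.sum().clamp(min=1e-8)                    # normalize saliency weights
    return float((w * probs).sum())                    # slide-level score

# Binary (benign vs. malignant) ViT head, standing in for the paper's model.
model = vit_b_16(weights=None)
model.heads.head = torch.nn.Linear(model.heads.head.in_features, 2)
model.eval()
```

Note that producing the saliency tensor itself from a ViT requires reshaping the transformer's token activations back into a 2-D spatial grid before applying Grad-CAM++, since ViT features are sequences rather than feature maps.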
Abstract: Unmanned aerial vehicles (UAVs) offer a flexible and cost-effective solution for wildfire monitoring. However, their widespread deployment during wildfires has been hindered by a lack of operational guidelines and concerns about potential interference with aircraft systems. Consequently, progress in developing deep-learning models for wildfire detection and characterization from aerial images is constrained by the limited availability, size, and quality of existing datasets. This paper addresses this gap by enhancing current aerial wildfire datasets to match advancements in camera technology, yielding a comprehensive, standardized, large-scale image dataset. We present a CycleGAN-based pipeline for enhancing wildfire imagery, together with a novel fusion method that integrates paired RGB images as attribute conditioning in the generators of both translation directions, improving the accuracy of the generated images.
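The key technical ingredient here is a CycleGAN generator that receives a paired RGB image as an extra conditioning input. Below is a minimal sketch of one such conditioned generator, assuming PyTorch; the fusion-by-channel-concatenation scheme, the layer sizes, and the class name `ConditionedGenerator` are illustrative stand-ins for the paper's exact architecture, which is not specified in the abstract.

```python
# Sketch: a ResNet-style CycleGAN generator conditioned on a paired RGB image.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch), nn.ReLU(True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch))
    def forward(self, x):
        return x + self.body(x)  # residual connection

class ConditionedGenerator(nn.Module):
    """Generator whose input is the source image fused with its paired
    RGB image along the channel axis (3 + 3 = 6 input channels)."""
    def __init__(self, n_blocks: int = 6, ch: int = 64):
        super().__init__()
        layers = [nn.Conv2d(6, ch, 7, padding=3), nn.InstanceNorm2d(ch), nn.ReLU(True)]
        layers += [ResBlock(ch) for _ in range(n_blocks)]
        layers += [nn.Conv2d(ch, 3, 7, padding=3), nn.Tanh()]
        self.net = nn.Sequential(*layers)
    def forward(self, x, rgb_cond):
        # Fuse the translation input with the paired RGB attribute condition.
        return self.net(torch.cat([x, rgb_cond], dim=1))

G = ConditionedGenerator()
fake = G(torch.randn(1, 3, 256, 256), torch.randn(1, 3, 256, 256))  # (1, 3, 256, 256)
```

In a full CycleGAN setup, both generators (and the cycle-consistency passes) would receive the same paired RGB condition, so that translation in each direction is grounded in the higher-quality reference imagery.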