Abstract: Exploring the trustworthiness of deep learning models is crucial, especially in critical domains such as medical imaging decision support systems. Conformal prediction has emerged as a rigorous means of providing deep learning models with reliable uncertainty estimates and safety guarantees. However, conformal prediction results degrade when the backbone model struggles under domain shift, such as variations across data sources. To address this challenge, this paper proposes a novel framework termed Conformal Ensemble of Vision Transformers (CE-ViTs), designed to enhance image classification performance by prioritizing domain adaptation and model robustness while accounting for uncertainty. The proposed method uses an ensemble of vision transformer models as the backbone, trained on diverse datasets including the HAM10000, Dermofit, and Skin Cancer ISIC datasets. This ensemble, calibrated on the combination of these datasets, aims to improve domain adaptation through conformal learning. Experimental results show that the framework achieves a high coverage rate of 90.38\%, an improvement of 9.95\% over the HAM10000-only model, indicating a stronger likelihood that the prediction set includes the true label than with individual models. Ensemble learning in CE-ViTs also significantly improves conformal prediction performance, increasing the average prediction set size for challenging misclassified samples from 1.86 to 3.075.
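To make the calibration-and-coverage idea concrete, the sketch below shows a standard split conformal procedure applied to ensembled classifier outputs. The ensemble rule (averaging softmax probabilities), the function names, the seven-class toy setup, and the alpha value are illustrative assumptions; the abstract does not specify the paper's exact nonconformity score or calibration split.

```python
# Minimal sketch of split conformal prediction over ensembled classifier outputs.
# Function names, array shapes, and the averaging rule are illustrative assumptions,
# not a reproduction of the CE-ViTs implementation.
import numpy as np

def ensemble_probs(prob_list):
    """Average softmax outputs from several backbone models (hypothetical ensemble rule)."""
    return np.mean(np.stack(prob_list, axis=0), axis=0)

def calibrate_threshold(cal_probs, cal_labels, alpha=0.1):
    """Split-conformal calibration: nonconformity score = 1 - probability of the true class."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile targeting ~(1 - alpha) marginal coverage.
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(scores, min(q_level, 1.0), method="higher")

def prediction_sets(test_probs, q_hat):
    """Boolean mask of the classes kept in each prediction set."""
    return test_probs >= 1.0 - q_hat

# Toy usage with random numbers standing in for ViT softmax outputs (7 lesion classes).
rng = np.random.default_rng(0)
cal_p = rng.dirichlet(np.ones(7), size=500)
cal_y = rng.integers(0, 7, size=500)
q_hat = calibrate_threshold(cal_p, cal_y, alpha=0.1)
test_p = rng.dirichlet(np.ones(7), size=10)
sets = prediction_sets(test_p, q_hat)
print("average prediction-set size:", sets.sum(axis=1).mean())
```

Empirical coverage is then the fraction of test samples whose true label falls inside the returned set; larger sets on hard samples, as reported above, reflect honest uncertainty rather than forced single-label guesses.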
Abstract: Precise estimation of the Remaining Useful Life (RUL) of rolling bearings is essential for avoiding unexpected failures, reducing downtime, and promoting safety and efficiency in industrial systems. Complex degradation trends, the presence of noise, and the need to detect faults early make RUL estimation a challenging task. This paper introduces a novel framework that combines a wavelet-based denoising method, Wavelet Packet Decomposition (WPD), with a customized multi-channel Swin Transformer model (MCSFormer) to address these problems. With attention mechanisms incorporated for feature fusion, the model is designed to learn both global and local degradation patterns, using hierarchical representations to enhance predictive performance. In addition, a customized loss function, a key distinction of this work, differentiates between early and late predictions, prioritizing accurate early detection and minimizing the high operational risk of late predictions. The proposed model was evaluated on the PRONOSTIA dataset through three experiments. Intra-condition experiments demonstrated that MCSFormer outperformed state-of-the-art models, including the Adaptive Transformer, MDAN, and CNN-SRU, achieving 41%, 64%, and 69% lower MAE on average across different operating conditions, respectively. In cross-condition testing, it generalized better under varying operating conditions than the adapted ViT and Swin Transformer. Finally, the custom loss function effectively reduced late predictions, as evidenced by a 6.3% improvement in the scoring metric while maintaining competitive overall performance. The model's robust noise resistance, generalization capability, and focus on safety make MCSFormer a trustworthy and effective predictive maintenance tool for industrial applications.
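The key idea of penalizing late predictions more than early ones can be illustrated with an asymmetrically weighted regression loss. The sketch below is a minimal example under that assumption; the weighting scheme, coefficients, and function name are hypothetical and do not reproduce the paper's exact loss formulation.

```python
# Hedged sketch of an asymmetric RUL loss that penalizes late predictions
# (predicted RUL above the true RUL) more heavily than early ones.
# The weights and squared-error form are illustrative assumptions.
import torch

def asymmetric_rul_loss(pred, target, late_weight=2.0, early_weight=1.0):
    """Weighted squared error: errors where pred > target (late predictions) cost more."""
    err = pred - target
    weights = torch.where(err > 0,
                          torch.full_like(err, late_weight),   # late: RUL overestimated
                          torch.full_like(err, early_weight))  # early: RUL underestimated
    return (weights * err ** 2).mean()

# Toy usage with normalized RUL values in [0, 1].
pred = torch.tensor([0.40, 0.75, 0.20])
true = torch.tensor([0.50, 0.60, 0.25])
print(asymmetric_rul_loss(pred, true).item())
```

During training, such a loss biases the model toward conservative (early) estimates, which aligns with the safety-oriented scoring metrics commonly used for PRONOSTIA-style prognostics benchmarks.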