Abstract:BACKGROUND: Atrial fibrillation (AF), the most common arrhythmia, is linked to high morbidity and mortality. In a fast-evolving era of AF rhythm control treatment, predicting AF recurrence after its onset may be crucial to selecting the optimal therapeutic approach, yet traditional scores such as CHA2DS2-VASc, HATCH, and APPLE show limited predictive accuracy. Moreover, early diagnosis studies often rely on codified electronic health record (EHR) data, which may contain errors and missing information. OBJECTIVE: This study aims to predict AF recurrence between one month and two years after onset by evaluating traditional clinical scores, machine learning (ML) models, and our LTM approach. A further objective is to develop a methodology for integrating structured and unstructured data to enhance tabular dataset quality. METHODS: A tabular dataset was generated by combining structured clinical data with free-text discharge reports processed through natural language processing techniques, reducing errors and annotation effort. A total of 1,508 patients with documented AF onset were identified, and models were evaluated on a manually annotated test set. The proposed LTM approach was compared against traditional clinical scores and ML models. RESULTS: The proposed LTM approach achieved the highest predictive performance, surpassing both traditional clinical scores and ML models. Additionally, gender and age bias analyses revealed demographic disparities. CONCLUSION: The integration of structured data and free-text sources resulted in a high-quality dataset. The findings emphasize the limitations of traditional clinical scores in predicting AF recurrence and highlight the potential of ML-based approaches, particularly our LTM model.
Abstract:Research on language technology for the development of medical applications is currently a hot topic in Natural Language Understanding and Generation. Thus, a number of large language models (LLMs) have recently been adapted to the medical domain, so that they can be used as a tool for mediating human-AI interaction. While these LLMs display competitive performance on automated medical text benchmarks, they have been pre-trained and evaluated with a focus on a single language (mostly English). This is particularly true of text-to-text models, which typically require large amounts of domain-specific pre-training data, often not easily accessible for many languages. In this paper, we address these shortcomings by compiling, to the best of our knowledge, the largest multilingual corpus for the medical domain in four languages, namely English, French, Italian, and Spanish. This new corpus has been used to train Medical mT5, the first open-source text-to-text multilingual model for the medical domain. Additionally, we present two new evaluation benchmarks for all four languages with the aim of facilitating multilingual research in this domain. A comprehensive evaluation shows that Medical mT5 outperforms both encoder-based and similarly sized text-to-text models on the Spanish, French, and Italian benchmarks, while remaining competitive with current state-of-the-art LLMs in English.