Medical records created by healthcare professionals upon patient admission are rich in details critical for diagnosis. Yet, their potential is not fully realized because of obstacles such as complex medical language, inadequate comprehension of medical numerical data by state-of-the-art Large Language Models (LLMs), and the limitations imposed by small annotated training datasets. This research aims to classify numerical values extracted from medical documents into seven distinct physiological categories using CamemBERT-bio. Previous studies suggested that transformer-based models might not perform as well as traditional NLP models on such tasks. To enhance CamemBERT-bio's performance, we introduce two main innovations: integrating keyword embeddings into the model and adopting a number-agnostic strategy that excludes all numerical data from the text. The label embedding technique refines the attention mechanism, while the `numerical-blind' dataset encourages context-centric learning. Another key component of our research is determining the criticality of extracted numerical data; to achieve this, we use a simple approach that checks whether each value falls within the established standard ranges. Our findings are encouraging, showing substantial improvements in the effectiveness of CamemBERT-bio, which surpasses conventional methods with an $F_1$ score of 0.89. This represents an over 20\% increase over the 0.73 $F_1$ score of traditional approaches and an over 9\% increase over the 0.82 $F_1$ score of state-of-the-art approaches.
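As a minimal sketch of the two ideas mentioned above, the snippet below blinds numerical values in a note and flags a value as critical when it falls outside a standard range. The category names, reference bounds, and the `<num>` placeholder are illustrative assumptions; the paper's actual seven physiological categories and ranges are not enumerated in the abstract.

```python
import re

# Hypothetical reference ranges (assumed for illustration; not the paper's actual bounds).
NORMAL_RANGES = {
    "heart_rate": (60.0, 100.0),   # beats per minute
    "temperature": (36.1, 37.8),   # degrees Celsius
    "spo2": (95.0, 100.0),         # percent
}

NUM_PATTERN = re.compile(r"\d+(?:[.,]\d+)?")

def blind_numbers(text: str, placeholder: str = "<num>") -> str:
    """Remove numerical values so the classifier must rely on context alone."""
    return NUM_PATTERN.sub(placeholder, text)

def is_critical(category: str, value: float) -> bool:
    """Flag a value as critical when it falls outside the standard range."""
    low, high = NORMAL_RANGES[category]
    return not (low <= value <= high)

note = "FC 142 bpm, température 37.2 C"
print(blind_numbers(note))               # FC <num> bpm, température <num> C
print(is_critical("heart_rate", 142.0))  # True
```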
High-entropy alloys (HEAs) stand out among multi-component alloys due to their attractive microstructures and mechanical properties. In this investigation, molecular dynamics (MD) simulation and machine learning (ML) were used to ascertain the deformation mechanism of AlCoCuCrFeNi HEAs under the influence of temperature, strain rate, and grain size. First, the MD simulations show that the yield stress decreases significantly as strain and temperature increase, whereas changes in strain rate and grain size have less effect on the mechanical properties than changes in strain and temperature. The alloys exhibited superplastic behavior under all test conditions. The deformation mechanism reveals that strain and temperature are the main drivers of strain initiation, and the shear bands move along the uniaxial tensile axis inside the workpiece. Furthermore, the rapid phase transition of inclusions under mild strain indicates the relative instability of the HCP inclusion phase. Ultimately, the dislocation evolution mechanism shows that dislocations nucleate around the grain boundaries and are transported to free surfaces under increased strain. Notably, the ML predictions confirm the same characteristics observed in the MD simulations. Hence, the combination of MD and ML reinforces confidence in the findings on the mechanical characteristics of HEAs. This combination bridges MD and ML, which can significantly save the time, labor, and cost of conducting real experiments to test HEA deformation in practice.
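The abstract does not name the ML model paired with the MD simulations, so the sketch below is only one plausible setup under stated assumptions: a scikit-learn random forest regressor trained as a surrogate for MD outputs, with synthetic stand-in samples whose toy target mimics the reported trend (yield stress falls with temperature and depends only weakly on strain rate and grain size).

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 500
# Synthetic stand-in for MD-generated samples: (temperature K, log10 strain rate /s, grain size nm).
X = np.column_stack([
    rng.uniform(300, 1200, n),
    rng.uniform(8, 10, n),
    rng.uniform(5, 20, n),
])
# Toy yield-stress target (GPa); NOT real MD data, just the qualitative trend.
y = 12.0 - 0.008 * X[:, 0] + 0.3 * X[:, 1] + 0.05 * X[:, 2] + rng.normal(0, 0.2, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out MD samples:", r2_score(y_te, model.predict(X_te)))
print("feature importances (T, strain rate, grain size):", model.feature_importances_)
```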
Recent research at CHU Sainte-Justine's Pediatric Critical Care Unit (PICU) has revealed that traditional machine learning methods, such as semi-supervised label propagation and K-nearest neighbors, outperform Transformer-based models in artifact detection from PPG signals, particularly when data is limited. This study addresses the underutilization of abundant unlabeled data by employing self-supervised learning (SSL) to extract latent features from these data, followed by fine-tuning on labeled data. Our experiments demonstrate that SSL significantly enhances the Transformer model's ability to learn representations, improving its robustness in artifact classification tasks. Among various SSL techniques, including masking, contrastive learning, and DINO (self-distillation with no labels), contrastive learning exhibited the most stable and superior performance on small PPG datasets. Further, we delve into optimizing contrastive loss functions, which are crucial for contrastive SSL. Inspired by InfoNCE, we introduce a novel contrastive loss function that facilitates smoother training and better convergence, thereby enhancing performance in artifact classification. In summary, this study establishes the efficacy of SSL in leveraging unlabeled data, particularly in enhancing the capabilities of the Transformer model. This approach holds promise for broader applications in PICU environments, where annotated data is often limited.
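The novel loss itself is not given in the abstract; as a reference point, the sketch below implements the standard InfoNCE objective it builds on. The embeddings `z1` and `z2` stand in for encoder outputs of two augmented views of the same PPG segments (an assumption about the training setup).

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """Standard InfoNCE over a batch of paired views.

    z1, z2: (batch, dim) embeddings of two augmentations of the same segments.
    Matching rows are positives; all other rows in the batch act as negatives.
    """
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                     # (batch, batch) similarities
    targets = torch.arange(z1.size(0), device=z1.device)   # positive is the diagonal
    return F.cross_entropy(logits, targets)

# Random embeddings standing in for Transformer encoder outputs.
z1, z2 = torch.randn(32, 128), torch.randn(32, 128)
print(info_nce_loss(z1, z2).item())
```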
Photoplethysmogram (PPG) signals are widely used in healthcare for monitoring vital signs, but they are susceptible to motion artifacts that can lead to inaccurate interpretations. In this study, we explore label propagation techniques to propagate labels among PPG samples, particularly in imbalanced class scenarios where clean PPG samples are significantly outnumbered by artifact-contaminated samples. With a precision of 91%, a recall of 90%, and an F1 score of 90% for the artifact-free class, the results demonstrate the method's effectiveness in labeling a medical dataset, even when clean samples are rare. For artifact classification, our study compares supervised classifiers, including conventional classifiers and neural networks (MLP, Transformer, FCN), with the semi-supervised label propagation algorithm. With a precision of 89%, a recall of 95%, and an F1 score of 92%, the supervised KNN model gives good results, but the semi-supervised algorithm performs better at detecting artifacts. These findings suggest that semi-supervised label propagation holds promise for artifact detection in PPG signals, which can enhance the reliability of PPG-based health monitoring systems in real-world applications.
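A minimal sketch of the semi-supervised setup follows, using scikit-learn's LabelPropagation with a KNN kernel. The features, the toy ground truth, and the 50-sample labeled subset are illustrative assumptions standing in for PPG segment descriptors; `-1` marks the abundant unlabeled samples, following the scikit-learn convention.

```python
import numpy as np
from sklearn.semi_supervised import LabelPropagation
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
# Stand-in features (e.g., statistical descriptors of PPG segments).
X = rng.normal(size=(1000, 16))
y_true = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # toy ground truth
y = np.full(1000, -1)                               # -1 = unlabeled
labeled = rng.choice(1000, size=50, replace=False)  # only a few labeled samples
y[labeled] = y_true[labeled]

# Propagate labels from the few labeled samples to the rest of the dataset.
model = LabelPropagation(kernel="knn", n_neighbors=7).fit(X, y)
print(classification_report(y_true, model.transduction_))
```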
This study investigates artifact detection in clinical photoplethysmogram (PPG) signals using Transformer-based models. Recent findings have shown that, in detecting artifacts in data from the Pediatric Critical Care Unit at CHU Sainte-Justine (CHUSJ), semi-supervised label propagation and conventional supervised machine learning (K-nearest neighbors) outperform the Transformer-based attention mechanism, particularly in limited-data scenarios. However, these methods are sensitive to data volume and show limited improvement with increased data availability. We propose the GRN-Transformer, an innovative model that integrates the Gated Residual Network (GRN) into the Transformer architecture to overcome these limitations. The GRN-Transformer demonstrates superior performance, achieving 98% accuracy, 90% precision, 97% recall, and a 93% F1 score, clearly surpassing the Transformer's 95% accuracy, 85% precision, 86% recall, and 85% F1 score. By combining the GRN, which excels at feature extraction, with the Transformer's attention mechanism, the proposed GRN-Transformer achieves smoother training and validation loss, effectively mitigating overfitting and performing well on small datasets with imbalanced classes. The GRN-Transformer's impact on artifact detection can significantly improve the reliability and accuracy of the clinical decision support system at CHUSJ, ultimately leading to improved patient outcomes and safety. To our knowledge, the proposed model is the first of its kind to detect artifacts from PPG signals. Further research could explore its applicability to other medical domains and datasets with similar constraints.
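The abstract does not detail how the GRN is wired into the Transformer, so the sketch below shows only the GRN building block itself, in one common form (following Lim et al.'s Temporal Fusion Transformer): dense, ELU, dense, then a GLU gate with a residual skip and layer normalization. The layer sizes and dropout are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedResidualNetwork(nn.Module):
    """One common GRN form: dense -> ELU -> dense -> GLU gate, plus residual + norm."""
    def __init__(self, d_model: int, d_hidden: int, dropout: float = 0.1):
        super().__init__()
        self.fc1 = nn.Linear(d_model, d_hidden)
        self.fc2 = nn.Linear(d_hidden, d_model)
        self.gate = nn.Linear(d_model, 2 * d_model)  # value and gate halves for GLU
        self.dropout = nn.Dropout(dropout)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.fc2(F.elu(self.fc1(x)))
        h = F.glu(self.gate(self.dropout(h)), dim=-1)  # gated nonlinear path
        return self.norm(x + h)                        # gated residual connection

x = torch.randn(8, 100, 64)                 # (batch, PPG time steps, features)
print(GatedResidualNetwork(64, 128)(x).shape)  # torch.Size([8, 100, 64])
```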
In recent years, Transformer-based models such as the Switch Transformer have achieved remarkable results in natural language processing tasks. However, these models are often too complex and require extensive pre-training, which limits their effectiveness for small clinical text classification tasks with limited data. In this study, we propose a simplified Switch Transformer framework and train it from scratch on a small French clinical text classification dataset from CHU Sainte-Justine hospital. Our results demonstrate that the simplified small-scale Transformer models outperform pre-trained BERT-based models, including DistilBERT, CamemBERT, FlauBERT, and FrALBERT. Additionally, the Switch Transformer's mixture-of-experts mechanism helps capture diverse patterns; hence, the proposed approach achieves better results than a conventional Transformer with the self-attention mechanism. Our proposed framework achieves an accuracy of 87\%, a precision of 87\%, and a recall of 85\%, compared to the third-best pre-trained BERT-based model, FlauBERT, which achieved an accuracy, precision, and recall of 84\%. However, Switch Transformers have limitations, including a generalization gap and sharp minima. We therefore compare the proposed framework with a multilayer perceptron neural network for classifying small French clinical narratives and show that the latter outperforms all other models.
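To make the mixture-of-experts idea concrete, here is a minimal sketch of a Switch-style feed-forward layer: a router assigns each token to exactly one expert (top-1 routing) and scales the expert's output by the router probability. The dimensions, expert count, and omission of load-balancing losses and capacity limits are simplifying assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class SwitchFeedForward(nn.Module):
    """Simplified Switch layer: one expert FFN per token via top-1 routing."""
    def __init__(self, d_model: int, d_ff: int, n_experts: int):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        probs = torch.softmax(self.router(x), dim=-1)  # (batch, seq, n_experts)
        gate, idx = probs.max(dim=-1)                  # top-1 expert per token
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = idx == e                            # tokens routed to expert e
            if mask.any():
                out[mask] = expert(x[mask])
        return out * gate.unsqueeze(-1)                # scale by router probability

x = torch.randn(4, 32, 64)                             # (batch, tokens, d_model)
print(SwitchFeedForward(64, 256, n_experts=4)(x).shape)
```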
When dealing with clinical text classification on a small dataset, recent studies have confirmed that a well-tuned multilayer perceptron outperforms other classifiers, including generative and deep learning ones. To increase the performance of the neural network classifier, feature selection for the learning representation can be used effectively. However, most feature selection methods only estimate the degree of linear dependency between variables and select the best features based on univariate statistical tests, while the sparsity of the feature space involved in the learning representation is ignored. Goal: Our aim is therefore to assess an alternative approach that tackles sparsity by compressing the clinical representation feature space, so that limited French clinical notes can also be handled effectively. Methods: This study proposed an autoencoder learning algorithm to take advantage of sparsity reduction in clinical note representations. The motivation was to determine how to compress sparse, high-dimensional data by reducing the dimension of the clinical note representation feature space. The classification performance of the classifiers was then evaluated in the trained, compressed feature space. Results: The proposed approach provided overall performance gains of up to 3% on each evaluation metric. The classifier achieved 92% accuracy, 91% recall, 91% precision, and a 91% F1 score in detecting the patient's condition. Furthermore, the compression mechanism and the autoencoder prediction process were explained through the information bottleneck theoretical framework.
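A minimal sketch of the compression step follows, assuming a standard fully connected autoencoder trained with a reconstruction loss; the layer sizes, 128-dimensional code, and the random sparse matrix standing in for TF-IDF note vectors are all illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class NotesAutoencoder(nn.Module):
    """Compress sparse, high-dimensional note vectors into a dense low-dimensional code."""
    def __init__(self, n_features: int, n_code: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 512), nn.ReLU(),
                                     nn.Linear(512, n_code))
        self.decoder = nn.Sequential(nn.Linear(n_code, 512), nn.ReLU(),
                                     nn.Linear(512, n_features))

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

# Random vectors with ~2% nonzeros standing in for TF-IDF rows (not real notes).
X = torch.rand(256, 5000) * (torch.rand(256, 5000) < 0.02).float()
model = NotesAutoencoder(5000)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(20):                       # short reconstruction-training loop
    recon, _ = model(X)
    loss = nn.functional.mse_loss(recon, X)
    opt.zero_grad(); loss.backward(); opt.step()

_, codes = model(X)                       # codes feed the downstream classifier
print(codes.shape)                        # torch.Size([256, 128])
```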
This paper proposes a joint clinical natural language representation learning and supervised classification framework based on machine learning for detecting concept labels in clinical narratives at CHU Sainte-Justine Hospital (CHUSJ). The novel framework jointly discovers distributional syntactic and latent semantic structure (representation learning) from contextual clinical narrative inputs and then learns the knowledge representation for labeling the contextual output (supervised classification). First, to achieve effective representation learning on a small dataset that mixes numeric values and text, four different methods are applied to capture the numerical vital-sign values, and different representation learning approaches are used to discover the rich structure of the clinical narrative data. Second, for automatic disease prediction, in this case cardiac failure, binary classifiers are iteratively trained to learn the knowledge representation of the data processed in the preceding steps. The multilayer perceptron neural network outperforms the other discriminative and generative classifiers, and the proposed framework yields an overall classification accuracy, recall, and precision of 89%, 88%, and 89%, respectively. Furthermore, a generative autoencoder (AE) learning algorithm is proposed to leverage sparsity reduction. The AE algorithm outperforms the other sparsity reduction techniques, and the classifier performance improves to 91% accuracy, 91% recall, and 91% precision.
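The four numeric-handling methods are not named in the abstract; purely as one hypothetical example of the idea, the sketch below bins a raw heart-rate value into a coarse categorical token before representation learning, so that a text model can exploit it. The `fc` pattern, the bin edges, and the token names are all invented for illustration.

```python
import re

# Hypothetical heart-rate bins (an assumption; not one of the paper's four methods per se).
BINS = [(0, 60, "<hr_low>"), (60, 100, "<hr_normal>"), (100, 999, "<hr_high>")]

def tokenize_heart_rate(text: str) -> str:
    """Replace 'fc <value>' mentions with a categorical token for the text model."""
    def repl(m):
        value = float(m.group(1))
        for low, high, token in BINS:
            if low <= value < high:
                return token
        return m.group(0)  # leave out-of-range values untouched
    return re.sub(r"fc\s+(\d+(?:\.\d+)?)", repl, text, flags=re.IGNORECASE)

print(tokenize_heart_rate("FC 175 bpm, dyspnée"))  # <hr_high> bpm, dyspnée
```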
The purpose of the study presented herein is to develop a machine learning algorithm based on natural language processing that automatically detects whether a patient has cardiac failure or a healthy condition, using physician notes from the Research Data Warehouse at CHU Sainte-Justine Hospital. First, word representation learning techniques were employed: bag-of-words (BoW), term frequency-inverse document frequency (TF-IDF), and neural word embeddings (word2vec). Each representation technique aims to retain the semantic and syntactic information of words in critical care data, enriching the mutual information of the word representation and benefiting the subsequent analysis steps. Second, a machine learning classifier was used to detect the patient's condition, either cardiac failure or stable, from the word representation vector space created in the previous step. This approach is based on supervised binary classification algorithms, including logistic regression (LR), Gaussian naive Bayes (GaussianNB), and a multilayer perceptron neural network (MLPNN), each trained by minimizing the empirical loss. Classification performance was assessed by accuracy (acc), precision (pre), recall (rec), and F1 score (f1). The results show that the combination of TF-IDF and MLPNN consistently outperformed all other combinations on every metric. Without any feature selection, the proposed framework yielded an overall classification performance with acc, pre, rec, and f1 of 84%, 82%, 85%, and 83%, respectively. Significantly, with well-applied feature selection, the overall performance improved by up to 4% on each evaluation metric.
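A minimal sketch of the representation-and-classifier comparison follows, using scikit-learn; the toy corpus stands in for the CHUSJ notes (which are not public), word2vec is omitted for brevity, and the hyperparameters are assumptions rather than the paper's settings.

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Toy corpus standing in for the CHUSJ physician notes (not the real data).
notes = ["dyspnee oedeme tachycardie", "examen normal enfant stable",
         "insuffisance cardiaque decompensee", "suivi de routine sans anomalie"] * 10
labels = [1, 0, 1, 0] * 10   # 1 = cardiac failure, 0 = healthy

for vec_name, vec in [("BoW", CountVectorizer()), ("TF-IDF", TfidfVectorizer())]:
    X = vec.fit_transform(notes).toarray()   # GaussianNB requires dense input
    for clf_name, clf in [("LR", LogisticRegression(max_iter=1000)),
                          ("GaussianNB", GaussianNB()),
                          ("MLPNN", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500))]:
        score = cross_val_score(clf, X, labels, cv=3, scoring="f1").mean()
        print(f"{vec_name} + {clf_name}: f1={score:.2f}")
```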