Abstract: Chronic obstructive pulmonary disease (COPD) represents a significant global health burden, and precise severity assessment is critical for effective clinical management in intensive care unit (ICU) settings. This study introduces a machine learning framework for COPD severity classification using the MIMIC-III critical care database, expanding the applications of artificial intelligence in critical care medicine. We developed a robust classification model incorporating key ICU parameters, such as blood gas measurements and vital signs, and applied semi-supervised learning techniques to exploit unlabeled data and enhance model performance. The random forest classifier proved particularly effective, achieving 92.51% accuracy and an ROC AUC of 0.98 in differentiating mild-to-moderate from severe COPD cases. This machine learning approach provides clinicians with a practical, accurate, and efficient tool for rapid COPD severity evaluation in ICU environments, with significant potential to improve both clinical decision-making and patient outcomes. Future research should prioritize external validation in diverse patient populations and integration with clinical decision support systems to optimize COPD management in critical care settings.
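As a concrete illustration of the pipeline this abstract describes, the sketch below trains a random forest with scikit-learn's self-training wrapper on partially labeled data. The feature set, label encoding (0 = mild-to-moderate, 1 = severe), and synthetic data are illustrative assumptions, not the authors' actual MIMIC-III extraction.

```python
# Minimal sketch: random forest + self-training on partially labeled ICU data.
# Features and labels are synthetic stand-ins for the paper's MIMIC-III cohort.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
n = 1000
# Hypothetical ICU features: pH, PaCO2, PaO2, respiratory rate, SpO2, heart rate.
X = rng.normal(size=(n, 6))
y = (X[:, 1] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Mark 60% of the training labels as unknown (-1) to mimic unlabeled ICU stays.
y_semi = y_train.copy()
y_semi[rng.random(len(y_semi)) < 0.6] = -1

model = SelfTrainingClassifier(RandomForestClassifier(n_estimators=200, random_state=0))
model.fit(X_train, y_semi)

proba = model.predict_proba(X_test)[:, 1]
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
print("ROC AUC :", roc_auc_score(y_test, proba))
```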
Abstract: This paper presents an autonomous parking control system for an active-joint, center-articulated mobile robot. We first propose a kinematic model of the robot and then derive a control law that stabilizes the vehicle's configuration within a small neighborhood of the target position. The control law is developed using Lyapunov techniques and is based on the robot's equations of motion in polar coordinates. Additionally, a beacon-based guidance system provides real-time feedback on the target's position and orientation. Simulation results demonstrate that the robot can start from arbitrary initial positions and orientations and park successfully.
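To make the polar-coordinate idea concrete, the sketch below simulates a classic Lyapunov-derived parking law for a unicycle-like model (in the style of Aicardi et al.). The gains, the simplified kinematics, and the stopping tolerance are assumptions; the paper's center-articulated model and exact control law are not reproduced here.

```python
# Minimal sketch of polar-coordinate parking control for a unicycle-like robot.
# rho: distance to goal; alpha: heading error toward the goal; beta: goal-orientation error.
import numpy as np

K_RHO, K_ALPHA, K_BETA = 1.0, 3.0, -0.5  # illustrative gains (k_rho > 0, k_beta < 0, k_alpha > k_rho)
DT = 0.01

def step(pose, goal=(0.0, 0.0, 0.0)):
    x, y, th = pose
    gx, gy, gth = goal
    dx, dy = gx - x, gy - y
    rho = np.hypot(dx, dy)
    alpha = np.arctan2(dy, dx) - th
    alpha = np.arctan2(np.sin(alpha), np.cos(alpha))        # wrap to (-pi, pi]
    beta = gth - th - alpha
    beta = np.arctan2(np.sin(beta), np.cos(beta))
    v = K_RHO * rho * np.cos(alpha)                          # drive toward the goal
    w = K_ALPHA * alpha + K_BETA * beta                      # align heading, then orientation
    return (x + v * np.cos(th) * DT, y + v * np.sin(th) * DT, th + w * DT), rho

pose = (-2.0, 1.5, np.pi / 3)                                # arbitrary initial configuration
for _ in range(5000):
    pose, rho = step(pose)
    if rho < 1e-3:
        break
print(f"final pose: ({pose[0]:.3f}, {pose[1]:.3f}, {pose[2]:.3f}), distance to goal: {rho:.4f}")
```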
Abstract: Spatial navigation is a complex cognitive function that integrates sensory inputs, such as visual, auditory, and proprioceptive information, to perceive and move through space. This ability allows humans to build mental maps, navigate environments, and process directional cues, which is crucial for exploring new places and finding one's way in unfamiliar surroundings. This study takes an algorithmic approach to extracting indices relevant to human spatial navigation from eye movement data. Leveraging electrooculography (EOG) signals, we analyzed statistical features and applied feature engineering techniques to study eye movements during navigation tasks. The proposed work combines signal processing and machine learning to develop indices for navigation and orientation, spatial anxiety, landmark recognition, path survey, and path route. The analysis yielded five subscore indices with notable accuracy: the navigation and orientation subscore achieved an R² of 0.72, and the landmark recognition subscore an R² of 0.50. Additionally, statistical features highly correlated with eye movement metrics, including blinks, saccades, and fixations, were identified. These findings can support richer cognitive assessments and enable early detection of spatial navigation impairments, particularly among individuals at risk of cognitive decline.
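The sketch below illustrates the feature-to-index idea from this abstract: summary statistics of eye events (blinks, saccades, fixations) regressed onto a navigation subscore, evaluated by cross-validated R². The specific features, the regressor, and the synthetic data are assumptions, not the paper's exact pipeline.

```python
# Minimal sketch: eye-event summary features regressed onto a navigation subscore.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_subjects = 120
# Hypothetical per-task features: blink rate, saccade rate, mean saccade
# amplitude, fixation rate, mean fixation duration.
X = rng.normal(size=(n_subjects, 5))
subscore = 0.8 * X[:, 3] - 0.4 * X[:, 1] + rng.normal(scale=0.5, size=n_subjects)

reg = RandomForestRegressor(n_estimators=300, random_state=1)
r2 = cross_val_score(reg, X, subscore, cv=5, scoring="r2")
print(f"cross-validated R^2: {r2.mean():.2f} +/- {r2.std():.2f}")
```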
Abstract: Cognitive load assessment is crucial for understanding human performance in various domains. This study investigates the impact of different task conditions and time constraints on cognitive load using multiple measures: subjective evaluations, performance metrics, and physiological eye-tracking data. Fifteen participants completed a series of primary and secondary tasks under different time limits. The NASA-TLX questionnaire, reaction time, inverse efficiency score, and eye-related features (blink, saccade, and fixation frequency) were used to assess cognitive load. The results show significant differences in cognitive load across tasks and under time constraints. Blink frequency correlated positively with cognitive load (r = 0.331, p = 0.014), whereas saccade frequency correlated negatively (r = -0.290, p = 0.032). Under time constraints, the analysis also revealed significant negative correlations for fixation frequency (r = -0.347, p = 0.009) and saccade frequency (r = -0.370, p = 0.005).
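As a worked illustration of the correlation analysis this abstract reports, the sketch below computes Pearson's r between eye-event frequencies and a workload score with SciPy. The arrays are synthetic stand-ins for the study's NASA-TLX and eye-tracking data.

```python
# Minimal sketch: Pearson correlations between eye-event frequencies and workload.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n = 55  # e.g., 15 participants x several task conditions (illustrative)
tlx = rng.uniform(20, 90, size=n)                       # workload score per trial
blink_freq = 0.10 * tlx + rng.normal(scale=3, size=n)   # positively related stand-in
saccade_freq = -0.05 * tlx + rng.normal(scale=2, size=n)

r_blink, p_blink = pearsonr(blink_freq, tlx)
r_sacc, p_sacc = pearsonr(saccade_freq, tlx)
print(f"blink vs. load:   r = {r_blink:.3f}, p = {p_blink:.3g}")
print(f"saccade vs. load: r = {r_sacc:.3f}, p = {p_sacc:.3g}")
```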
Abstract: Safety concerns have grown as robots become more integrated into our daily lives and interact closely with humans. One of the most crucial safety priorities is preventing collisions between robots and people walking nearby. Although various algorithms have been developed to address this issue, evaluating their effectiveness on a cost-effective test bench remains a significant challenge. In this work, we introduce a simple yet functional platform that enables researchers and developers to assess how humans interact with mobile robots. The platform is designed to provide quick yet accurate evaluation of safe-interaction algorithms and to support informed decisions for future development. We detail the platform's features and structure, along with initial testing results using two preliminary algorithms. The evaluation results were consistent with theoretical calculations, demonstrating the platform's effectiveness in assessing human-robot interaction. Our solution provides a preliminary yet reliable approach to ensuring the safety of both robots and humans in their daily interactions.
Abstract: People suffering from Alzheimer's disease (AD) and their caregivers seek different approaches to cope with memory loss. Although AD patients want to live independently, they often need help from caregivers. In this situation, caregivers may attach notes to every single object or take the contents of a drawer out to make them visible before leaving the patient alone at home. This study reports preliminary results on an Ambient Assisted Living (AAL) real-time system, built on Internet of Things (IoT) and Augmented Reality (AR) concepts, aimed at helping people suffering from AD. The system has two main parts: a smartphone or Windows application lets caregivers monitor patients' status at home and notifies them if patients are at risk; smart glasses let patients recognize QR codes in the environment and receive tag-related information as audio, text, or three-dimensional images. This work presents these preliminary results and investigates the feasibility of implementing such a system.
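To illustrate the smart-glasses half of this system, the sketch below decodes a QR code from a camera frame with OpenCV and maps its payload to stored reminder content. The tag-to-content mapping, tag names, and camera index are hypothetical; the paper's actual glasses pipeline and IoT backend are not reproduced here.

```python
# Minimal sketch: decode a QR tag from a camera frame and look up its reminder.
import cv2

TAG_CONTENT = {  # hypothetical tag database maintained by the caregiver app
    "drawer_01": "Your house keys are in this drawer.",
    "fridge_door": "Lunch is on the middle shelf.",
}

def lookup_tag(frame):
    detector = cv2.QRCodeDetector()
    payload, points, _ = detector.detectAndDecode(frame)  # payload is "" if no QR found
    if payload:
        return TAG_CONTENT.get(payload, "Unknown tag.")
    return None

cap = cv2.VideoCapture(0)  # glasses camera; index 0 as a stand-in
ok, frame = cap.read()
if ok:
    message = lookup_tag(frame)
    if message:
        print(message)  # would be rendered to the patient as audio, text, or a 3-D overlay
cap.release()
```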