Cognitive load assessment is crucial for understanding human performance in various domains. This study investigates the impact of different task conditions and time constraints on cognitive load using multiple measures, including subjective evaluations, performance metrics, and physiological eye-tracking data. Fifteen participants completed a series of primary and secondary tasks under different time limits. The NASA-TLX questionnaire, reaction time, inverse efficiency score, and eye-related features (blink, saccade, and fixation frequency) were used to assess cognitive load. The results show significant differences in cognitive load across tasks and under time constraints. Blink frequency correlated positively with cognitive load (r = 0.331, p = 0.014), whereas saccade frequency correlated negatively with it (r = -0.290, p = 0.032). Additionally, the analysis revealed significant negative correlations for fixation frequency (r = -0.347, p = 0.009) and saccade frequency (r = -0.370, p = 0.005) under time constraints.
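The abstract names two of the quantitative measures it relies on: the inverse efficiency score (commonly defined as mean correct reaction time divided by proportion correct) and Pearson correlation coefficients such as r = 0.331. A minimal sketch of both computations, using only the Python standard library; the function names and sample values are illustrative, not taken from the study:

```python
import math

def inverse_efficiency(mean_rt_ms, accuracy):
    """Inverse efficiency score: mean correct reaction time (ms)
    divided by proportion of correct responses."""
    return mean_rt_ms / accuracy

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative values only: 500 ms mean RT at 80% accuracy.
ies = inverse_efficiency(500.0, 0.8)  # 625.0
```

In practice a statistics package (e.g. `scipy.stats.pearsonr`) would also supply the p-values reported in the study.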
Safety concerns have risen as robots become more integrated into our daily lives and continue to interact closely with humans. One of the most crucial safety priorities is preventing collisions between robots and people walking nearby. Although various algorithms have been developed to address this issue, evaluating their effectiveness on a cost-effective test bench remains a significant challenge. In this work, we propose a solution by introducing a simple yet functional platform that enables researchers and developers to assess how humans interact with mobile robots. The platform is designed to provide a quick yet accurate evaluation of safe interaction algorithms and to support informed decisions for future development. Its features and structure are detailed, along with initial testing results using two preliminary algorithms. The results of the evaluation were consistent with theoretical calculations, demonstrating the platform's effectiveness in assessing human-robot interaction. Our solution provides a preliminary yet reliable approach to ensuring the safety of both robots and humans in their daily interactions.
People suffering from Alzheimer's disease (AD) and their caregivers seek different approaches to cope with memory loss. Although AD patients want to live independently, they often need help from caregivers. In this situation, caregivers may attach notes to every single object or take the contents out of a drawer to make them visible before leaving the patient alone at home. This study reports preliminary results on an Ambient Assisted Living (AAL) real-time system, built on Internet of Things (IoT) and Augmented Reality (AR) concepts, aimed at helping people suffering from AD. The system has two main parts: the first is a smartphone or Windows application that allows caregivers to monitor patients' status at home and be notified if patients are at risk. The second allows patients to use smart glasses to recognize QR codes in the environment and receive information associated with each tag in the form of audio, text, or three-dimensional images. This work presents preliminary results and investigates the feasibility of implementing such a system.
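The patient-facing part of the system maps a scanned QR tag to a piece of content (audio, text, or a 3D image). The lookup step could be sketched as follows; the tag IDs, content records, and function names are hypothetical illustrations, not the paper's implementation:

```python
# Hypothetical sketch of the tag-lookup step: a decoded QR payload is
# mapped to the content record the smart glasses should present.
# All tag IDs and records below are illustrative examples.
TAG_CONTENT = {
    "tag:drawer-01": {"type": "audio", "payload": "drawer_contents.mp3"},
    "tag:meds-02": {"type": "text", "payload": "Take one pill after lunch."},
    "tag:keys-03": {"type": "model3d", "payload": "key_hook.glb"},
}

def resolve_tag(qr_payload):
    """Return the content record for a scanned QR tag, or None if unknown."""
    return TAG_CONTENT.get(qr_payload)
```

A real deployment would presumably fetch these records from the caregivers' monitoring backend rather than a static table, so caregivers can update tag content remotely.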