Abstract: Current invasive assistive technologies are designed to infer high-dimensional motor control signals from severely paralyzed patients. However, they face significant challenges, including public acceptance, limited longevity, and barriers to commercialization. Meanwhile, noninvasive alternatives often rely on artifact-prone signals, require lengthy user training, and struggle to deliver robust high-dimensional control for dexterous tasks. To address these issues, this study introduces a novel human-centered multimodal AI approach that serves as an intelligent compensatory mechanism for lost motor function, potentially enabling patients with severe paralysis to control high-dimensional assistive devices, such as dexterous robotic arms, using limited and noninvasive inputs. In contrast to current state-of-the-art (SoTA) noninvasive approaches, our context-aware, multimodal shared-autonomy framework integrates deep reinforcement learning to blend limited, low-dimensional user input with real-time environmental perception, enabling adaptive, dynamic, and intelligent interpretation of human intent for complex dexterous manipulation tasks such as pick-and-place. ARAS (Adaptive Reinforcement learning for Amplification of limited inputs in Shared autonomy), trained with synthetic users over 50,000 simulation episodes, demonstrated the first successful implementation of the proposed closed-loop, human-in-the-loop paradigm and outperformed SoTA shared-autonomy algorithms. Following zero-shot sim-to-real transfer, ARAS was evaluated on 23 human subjects, demonstrating high accuracy in dynamic intent detection and smooth, stable 3D trajectory control for dexterous pick-and-place tasks. The ARAS user study achieved a high task success rate of 92.88%, with completion times comparable to those of SoTA invasive assistive technologies.
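The abstract above does not give implementation details, but a minimal sketch of the core idea, fusing a limited low-dimensional user command with real-time environmental perception and blending the resulting policy action with the user's intent, might look as follows. The function names, the linear blending scheme, and the 7-DoF action assumption are illustrative choices, not the published ARAS design.

import numpy as np

def fuse_observation(user_cmd, gripper_pos, object_pose):
    # Concatenate the low-dimensional user command (e.g., a 2-D directional
    # input) with real-time perception of the end-effector and target object.
    return np.concatenate([user_cmd, gripper_pos, object_pose])

def shared_autonomy_action(policy, obs, user_cmd, alpha=0.8):
    # Blend the RL policy's high-dimensional action with the user's raw,
    # low-dimensional intent; alpha encodes confidence in the inferred goal.
    auto_action = policy(obs)                  # e.g., 7-DoF joint velocities (assumed)
    user_action = np.zeros_like(auto_action)
    user_action[:len(user_cmd)] = user_cmd     # map the low-dim input onto the first axes
    return alpha * auto_action + (1 - alpha) * user_action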
Abstract: End-effector-based assistive robots face persistent challenges in generating smooth and robust trajectories when controlled by noisy and unreliable human biosignals such as muscle activity and brainwaves. The resulting endpoint trajectories are often too jerky and imprecise for complex tasks such as stable robotic grasping. We propose STREAMS (Self-Training Robotic End-to-end Adaptive Multimodal Shared autonomy), a novel framework that leverages deep reinforcement learning to tackle this challenge in biosignal-based robotic control systems. STREAMS blends environmental information and synthetic user input in a Deep Q-Network (DQN) pipeline, forming an interactive, end-to-end, self-training mechanism that produces smooth trajectories for end-effector-based robots. The proposed framework achieved a 98% success rate in simulation, with dynamic target estimation and acquisition and without any pre-existing datasets. In a zero-shot sim-to-real user study with five participants controlling a physical robotic arm via noisy head movements, STREAMS (assistive mode) demonstrated significant improvements in trajectory stabilization, user satisfaction, and task performance, achieving a success rate of 83% compared to 44% in manual mode without any task support. STREAMS aims to improve biosignal-based assistive robotic control by offering an interactive, end-to-end solution that stabilizes end-effector trajectories, enhancing task performance and accuracy.
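As a rough illustration of the DQN component described above, the following sketch shows a small Q-network over a fused observation (synthetic user input plus perceived environment state) and a standard temporal-difference update on a replay batch. The network size, observation layout, and PyTorch framing are assumptions for illustration only, not the actual STREAMS pipeline.

import torch
import torch.nn as nn

class QNet(nn.Module):
    # Q-network mapping a fused observation (synthetic user input + perceived
    # environment state) to Q-values over discretized end-effector motions.
    def __init__(self, obs_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, obs):
        return self.net(obs)

def dqn_update(q, q_target, batch, optimizer, gamma=0.99):
    # One temporal-difference update on a sampled replay batch.
    obs, act, rew, next_obs, done = batch
    q_sa = q(obs).gather(1, act.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = rew + gamma * (1 - done) * q_target(next_obs).max(dim=1).values
    loss = nn.functional.smooth_l1_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()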
Abstract: With recent advancements in AI and computational tools, intelligent paradigms have emerged to empower fields such as healthcare robotics with new capabilities. Advanced robotic AI algorithms (e.g., reinforcement learning) can be trained to autonomously make decisions toward a desired, and usually fixed, goal. However, such independent decision-making and goal pursuit may not be ideal for a healthcare robot that typically interacts with a dynamic end-user or patient. In such a complex human-robot interaction (teaming) framework, the user continuously wants to be involved in decision-making and to introduce new goals while interacting with the present environment in real time. To address this challenge, an adaptive shared-autonomy AI paradigm must be developed for the two interacting agents (human and AI), grounded in human-centered factors to avoid possible ethical issues and guarantee no harm to humans.
Abstract: In this study, we unveil a novel approach for actuating rehabilitation robots through the innovative use of magnetic technology as a seamless haptic force generator, offering a leap forward in user interface and experience, particularly in end-effector-based robots for upper-limb motor rehabilitation. We employed the Extended Kalman Filter (EKF) to analyze and formalize the robotic system's nonlinear dynamics, showing the algorithm's ability to accurately track the system and compensate for disturbances, thereby ensuring smooth and effective motor training. The proposed planar robotic system embedded with magnetic technology was evaluated with human subjects. Our estimates reached RMS values between a minimum of 0.2 and a maximum of 2.06, indicating the algorithm's capability to track the system behavior. Overall, the results showed significant improvements in smoothness, comfort, and safety during execution and motor training. The proposed magnetic actuation and advanced algorithmic control open new horizons for the development of more efficient and user-friendly rehabilitation technologies.
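For readers unfamiliar with the Extended Kalman Filter mentioned above, a generic predict/update step is sketched below. The abstract does not specify the state vector, dynamics f, measurement model h, or noise covariances of the magnetic rehabilitation robot, so they are left here as user-supplied functions and matrices; this is a standard EKF recursion, not the study's specific formulation.

import numpy as np

def ekf_step(x, P, u, z, f, h, F_jac, H_jac, Q, R):
    # Predict: propagate the state estimate through the nonlinear dynamics.
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q
    # Update: correct the prediction with the new measurement z.
    H = H_jac(x_pred)
    y = z - h(x_pred)                      # innovation (measurement residual)
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new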
Abstract: Several Brain-Computer Interface (BCI) platforms have been developed to investigate noninvasive electroencephalography (EEG) signals associated with plan-to-grasp tasks in humans. However, these reports have not clearly shown evidence of neural activity emerging from the planning (observation) phase, dominated by the visual cortices, to grasp execution, dominated by the motor cortices. In this study, we developed a novel vision-based grasping BCI platform that distinguishes different grip types (power and precision) across the phases of plan-to-grasp tasks using EEG signals. Using our platform and features extracted with Filter Bank Common Spatial Patterns (FBCSP), we show that frequency-band-specific EEG contains discriminative spatial patterns in both the observation and movement phases. Support Vector Machine (SVM) classification (power vs. precision) yielded accuracies of 74% and 68% in the alpha band for the observation and movement phases, respectively.
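A simplified sketch of an FBCSP-plus-SVM pipeline of the kind described above is given below: each frequency band is band-pass filtered, CSP filters are computed from class-wise covariances, and log-variance features feed an SVM. The filter bands, number of CSP pairs, and classifier settings are illustrative assumptions, not the study's exact configuration.

import numpy as np
from scipy.signal import butter, filtfilt
from scipy.linalg import eigh
from sklearn.svm import SVC

def bandpass(X, lo, hi, fs, order=4):
    # X: (trials, channels, samples); zero-phase band-pass per trial and channel.
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, X, axis=-1)

def csp_filters(X, y, n_pairs=2):
    # Class-wise average normalized spatial covariance, then the generalized
    # eigenvalue problem C1 w = lambda (C1 + C2) w; keep the extreme eigenvectors.
    covs = []
    for c in np.unique(y):
        Xc = X[y == c]
        covs.append(np.mean([x @ x.T / np.trace(x @ x.T) for x in Xc], axis=0))
    vals, vecs = eigh(covs[0], covs[0] + covs[1])
    order = np.argsort(vals)
    picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return vecs[:, picks].T                 # (n_filters, channels)

def log_var_features(X, W):
    Z = np.einsum("fc,tcs->tfs", W, X)      # spatially filtered trials
    var = Z.var(axis=-1)
    return np.log(var / var.sum(axis=1, keepdims=True))

# Example: alpha-band (8-13 Hz) features for power-vs-precision classification,
# with X_train of shape (trials, channels, samples), binary labels y_train, and
# sampling rate fs (all hypothetical placeholders):
# Xf = bandpass(X_train, 8, 13, fs)
# W = csp_filters(Xf, y_train)
# clf = SVC(kernel="linear").fit(log_var_features(Xf, W), y_train)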
Abstract: The bispectrum stands out as a powerful tool in frequency-domain analysis, going beyond the conventional power spectrum by capturing phase information between frequency components. In this study, we used the bispectrum to analyze and decode complex grasping movements from EEG data collected from five human subjects. We evaluated this data with three classifiers, focusing on both magnitude- and phase-related features. The results highlight the bispectrum's ability to characterize neural activity and differentiate between grasping motions, with the Support Vector Machine (SVM) classifier emerging as the standout performer. In binary classification, it achieved a remarkable 97% accuracy in identifying power grasp, and in the more complex multiclass task, it maintained an impressive 94.93% accuracy. These findings underscore the bispectrum's analytical strength and showcase the SVM's classification capability, opening new doors in our understanding of movement and neural dynamics.
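The direct (FFT-based) bispectrum estimate underlying the features described above can be sketched as follows; segment length, windowing, and the choice of magnitude/phase features are assumptions for illustration, not the study's exact settings.

import numpy as np

def bispectrum(x, nfft=256, seg_len=256, overlap=128):
    # Direct (FFT-based) estimate averaged over windowed segments:
    #   B(f1, f2) = E[ X(f1) X(f2) conj(X(f1 + f2)) ]
    step = seg_len - overlap
    n_seg = (len(x) - seg_len) // step + 1
    f = np.arange(nfft // 2)
    F1, F2 = np.meshgrid(f, f, indexing="ij")
    B = np.zeros((nfft // 2, nfft // 2), dtype=complex)
    for s in range(n_seg):
        seg = x[s * step : s * step + seg_len]
        seg = (seg - seg.mean()) * np.hanning(seg_len)
        X = np.fft.fft(seg, nfft)
        B += X[F1] * X[F2] * np.conj(X[(F1 + F2) % nfft])
    return B / n_seg

# Example features per EEG channel: bispectral magnitude and biphase, which
# could be stacked across channels and fed to an SVM (hypothetical usage):
# B = bispectrum(eeg_channel)
# features = np.concatenate([np.abs(B).ravel(), np.angle(B).ravel()])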