It is challenging to hold two thoughts at the same time or to perform two uncorrelated motions simultaneously. This work focuses on training humans to perform bimanual motions in a 2:3 polyrhythmic ratio using haptic force-feedback devices (SensAble Phantom OMNI). We implemented an interactive training session to help participants quickly learn to decouple their hand motions. Three subjects (two female, one male) were tested, and all successfully increased their scores after adaptive training durations of under five minutes.
Brain-computer interfaces (BCIs) are widely used to read brain signals and convert them into real-world motion. However, the signals produced by a BCI are noisy and hard to analyze. This paper combines the latest BCI technology with ultrasonic sensors to provide a hands-free wheelchair that can efficiently navigate crowded environments. This combination provides the safety and obstacle-avoidance features necessary for the BCI navigation system to operate the wheelchair with greater confidence at a relatively higher velocity. Six human subjects tested the BCI controller and the obstacle-avoidance features. After 10 minutes of training, subjects were able to mentally control the destination of the wheelchair, moving it from the starting position to a predefined position, in an average of 287.12 seconds with a standard deviation of 48.63 seconds. The wheelchair successfully avoided all obstacles placed by the subjects during the test.
Temporal segmentation of a long video into coherent events requires a high-level understanding of activities' temporal features. Researchers have tackled the event segmentation problem in offline training schemes, either by providing full or weak supervision through manually annotated labels or by self-supervised epoch-based training. In this work, we present a continual-learning perceptual prediction framework (influenced by cognitive psychology) capable of temporal event segmentation through an understanding of the underlying representation of objects within individual frames. Our framework also outputs attention maps that effectively localize and track event-causing objects in each frame. The model was tested on a wildlife monitoring dataset in a continual training manner, achieving an $80\%$ recall rate at a $20\%$ false positive rate for frame-level segmentation. Activity-level testing yielded an $80\%$ activity recall rate with one false activity detection every 50 minutes.