Abstract: In this paper, we present a dataset of diving gesture images used for human-robot interaction underwater. By offering this open-access dataset, the paper aims to investigate the potential of using visual detection of diving gestures from an autonomous underwater vehicle (AUV) as a form of communication with a human diver. In addition to the image recordings, the same gestures were recorded using a smart gesture recognition glove. The glove uses elastomer sensors and on-board processing to determine the selected gesture and transmit the associated command to the AUV via acoustics. Although this method can be used under different visibility conditions and even without line of sight, it introduces a communication delay required for the acoustic transmission of the gesture command. To compare efficiency, the glove was equipped with the visual markers proposed in a gesture-based language called CADDIAN and recorded with an underwater camera in parallel with the glove's on-board recognition process. The dataset contains over 30,000 underwater frames of nearly 900 individual gestures, annotated in corresponding snippet folders. The dataset was recorded in a balanced ratio with five different divers in sea conditions and five different divers in pool conditions, with gestures performed at 1, 2 and 3 metres from the camera. The glove gesture recognition statistics are reported in terms of average diver reaction time, average time taken to perform a gesture, recognition success rate, transmission times and more. The dataset should provide a good baseline for comparing the performance of state-of-the-art visual diving gesture recognition techniques under different visibility conditions.
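As a rough illustration of how a snippet-folder layout like the one described above might be consumed programmatically, the following Python sketch walks a hypothetical directory tree and pairs each gesture label with its frames; the folder structure, file names and annotation format shown are assumptions made for illustration only and are not taken from the dataset documentation.

import json
from pathlib import Path

# Hypothetical layout: one folder per gesture snippet, each holding the
# extracted frames and a small JSON annotation file. The folder and file
# names below are assumptions, not the published dataset layout.
DATASET_ROOT = Path("diver_gesture_dataset")

def load_snippets(root: Path):
    """Yield (gesture_label, frame_paths) for every annotated snippet folder."""
    if not root.exists():
        return
    for snippet_dir in sorted(p for p in root.iterdir() if p.is_dir()):
        annotation_file = snippet_dir / "annotation.json"   # assumed file name
        if not annotation_file.exists():
            continue
        with annotation_file.open() as f:
            annotation = json.load(f)
        frames = sorted(snippet_dir.glob("*.png"))           # assumed frame format
        yield annotation.get("gesture", "unknown"), frames

if __name__ == "__main__":
    for label, frames in load_snippets(DATASET_ROOT):
        print(f"{label}: {len(frames)} frames")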
Abstract: The research presented in this paper is aimed at developing a control algorithm for an autonomous surface system carrying a two-sensor array of acoustic receivers, capable of measuring the time difference of arrival (TDOA) of a quasiperiodic underwater acoustic signal and using this value to steer the system toward the acoustic source in the horizontal plane. Stability properties of the proposed algorithm are analyzed using the Lie bracket approximation technique. Furthermore, simulation results are presented, where particular attention is given to the relationship between the TDOA measurement noise and the sensor baseline, i.e. the distance between the two acoustic receivers. The influence of a constant disturbance caused by sea currents is also considered. Finally, experimental results are presented in which the algorithm was deployed on two autonomous surface vehicles, each equipped with a single acoustic receiver. The algorithm successfully steers the vehicle formation toward the acoustic source despite the measurement noise and intermittent measurements, demonstrating the feasibility of the proposed approach in real-life conditions.
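To give an intuition for the TDOA-based steering principle summarized above, the following Python sketch simulates a simplified planar vehicle whose heading rate is proportional to the noisy TDOA measured across a fixed receiver baseline. The geometry, gain, noise level and integration scheme are illustrative assumptions; they do not reproduce the control law or the Lie bracket analysis of the paper.

import numpy as np

# Illustrative constants (assumed values, not taken from the paper)
SOUND_SPEED = 1500.0   # m/s, nominal speed of sound in water
BASELINE = 1.0         # m, distance between the two receivers
SPEED = 0.5            # m/s, constant forward speed of the vehicle
GAIN = 750.0           # steering gain applied to the TDOA signal
NOISE_STD = 1.0e-6     # s, std. dev. of the TDOA measurement noise
DT = 0.1               # s, integration step

def tdoa_measurement(pos, heading, source, rng):
    """Noisy time difference of arrival between the right and left receivers."""
    left_dir = np.array([-np.sin(heading), np.cos(heading)])
    left = pos + 0.5 * BASELINE * left_dir
    right = pos - 0.5 * BASELINE * left_dir
    d_left = np.linalg.norm(source - left)
    d_right = np.linalg.norm(source - right)
    return (d_right - d_left) / SOUND_SPEED + rng.normal(0.0, NOISE_STD)

def simulate(steps=2000, seed=0):
    rng = np.random.default_rng(seed)
    source = np.array([0.0, 0.0])
    pos = np.array([60.0, -40.0])   # initial vehicle position
    heading = 0.0                   # initial heading (rad)
    for _ in range(steps):
        tau = tdoa_measurement(pos, heading, source, rng)
        heading += GAIN * tau * DT  # positive TDOA means the source is to the left
        pos += SPEED * DT * np.array([np.cos(heading), np.sin(heading)])
    return np.linalg.norm(source - pos)

if __name__ == "__main__":
    print(f"final distance to source: {simulate():.1f} m")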
Abstract: In this paper, we address the problem of autonomous search and vessel detection in an unknown, GNSS-denied maritime environment with fixed-wing UAVs. The main challenge in such environments, given the limited localization, communication range, and number of UAVs and sensors, is to implement an appropriate search strategy so that a target vessel can be detected as soon as possible. We therefore present informed and non-informed methods for searching the environment. The informed method relies on an obtained probabilistic map, while the non-informed method navigates the UAVs along predefined paths computed with respect to the environment. The vessel detection method is trained on synthetic data collected in the simulator with data annotation tools. Comparative experiments in simulation have shown that our combination of sensors, search methods and a vessel detection algorithm leads to a successful search for the target vessel in such challenging environments.
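As a minimal illustration of a non-informed, predefined-path strategy of the kind mentioned above, the following Python sketch generates lawnmower-style waypoints over a rectangular search area. The area bounds, track spacing and waypoint representation are assumptions made for illustration and are not the paths used in the paper.

from typing import List, Tuple

def lawnmower_waypoints(x_min: float, x_max: float,
                        y_min: float, y_max: float,
                        spacing: float) -> List[Tuple[float, float]]:
    """Boustrophedon waypoints covering [x_min, x_max] x [y_min, y_max].

    Parallel tracks run along the x-axis and are separated by `spacing`,
    which would normally be tied to the sensor footprint of the UAV.
    """
    waypoints = []
    y = y_min
    left_to_right = True
    while y <= y_max:
        if left_to_right:
            waypoints.append((x_min, y))
            waypoints.append((x_max, y))
        else:
            waypoints.append((x_max, y))
            waypoints.append((x_min, y))
        left_to_right = not left_to_right
        y += spacing
    return waypoints

if __name__ == "__main__":
    # 2 km x 1 km area with 200 m track spacing (illustrative values)
    for wp in lawnmower_waypoints(0.0, 2000.0, 0.0, 1000.0, 200.0):
        print(wp)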