Abstract: The operation of telerobotic systems can be a challenging task, requiring intuitive and efficient interfaces to enable inexperienced users to attain a high level of proficiency. Body-Machine Interfaces (BoMI) represent a promising alternative to standard control devices, such as joysticks, because they leverage intuitive body motion and gestures. It has been shown that the use of Virtual Reality (VR) and first-person view perspectives can increase the user's sense of presence in avatars. However, it is unclear whether these beneficial effects also occur in the teleoperation of non-anthropomorphic robots that display motion patterns different from those of humans. Here we describe experimental results on the teleoperation of a non-anthropomorphic drone showing that VR correlates with a higher sense of spatial presence, whereas viewpoints moving coherently with the robot are associated with a higher sense of embodiment. Furthermore, the experimental results show that spontaneous body motion patterns are affected by VR and viewpoint conditions in terms of variability, amplitude, and correlation with the robot's motion, suggesting that the design of BoMIs for drone teleoperation must take into account the use of Virtual Reality and the choice of the viewpoint.
Abstract: Tensegrity structures are lightweight, can undergo large deformations, and have outstanding robustness capabilities. These unique properties have inspired roboticists to investigate their use. However, the morphological design, control, assembly, and actuation of tensegrity robots are still difficult tasks. Moreover, the stiffness of tensegrity robots is still an underestimated design parameter. In this article, we propose to use easy-to-assemble, actuated tensegrity modules and body-brain co-evolution to design soft tensegrity modular robots. Moreover, we demonstrate the importance of the stiffness of tensegrity robots by showing how evolution suggests different morphologies, control, and locomotion strategies depending on the stiffness of the modules.
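As a rough illustration of the body-brain co-evolution mentioned above, the sketch below evolves a population in which each genome jointly encodes morphological parameters (a per-module stiffness value) and controller parameters, so body and brain are varied and selected together. The population size, genome layout, fitness function, and mutation operator are placeholder assumptions for illustration, not the setup used in the article.

```python
import numpy as np

rng = np.random.default_rng(0)
POP_SIZE, N_MODULES, CTRL_PARAMS, GENERATIONS = 32, 4, 8, 50

def random_genome():
    # Body genes: one stiffness value per module; brain genes: controller weights.
    stiffness = rng.uniform(0.1, 1.0, N_MODULES)
    controller = rng.normal(0.0, 1.0, CTRL_PARAMS)
    return np.concatenate([stiffness, controller])

def evaluate(genome):
    # Placeholder fitness: in the article this would be, e.g., the distance
    # travelled by the simulated tensegrity robot built and controlled
    # according to the genome.
    stiffness, controller = genome[:N_MODULES], genome[N_MODULES:]
    return -np.sum((stiffness - 0.5) ** 2) + np.tanh(controller).mean()

def mutate(genome, sigma=0.05):
    return genome + rng.normal(0.0, sigma, genome.size)

population = [random_genome() for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    fitness = np.array([evaluate(g) for g in population])
    elite = [population[i] for i in np.argsort(fitness)[-POP_SIZE // 4:]]
    # Refill the population with mutated copies of the elite; body and brain
    # genes mutate together, so morphology and control co-evolve.
    population = elite + [mutate(elite[rng.integers(len(elite))])
                          for _ in range(POP_SIZE - len(elite))]
print("best fitness:", max(evaluate(g) for g in population))
```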
Abstract: Deployment of drone swarms usually relies on inter-agent communication or visual markers that are mounted on the vehicles to simplify their mutual detection. This letter proposes a vision-based detection and tracking algorithm that enables groups of drones to navigate without communication or visual markers. We employ a convolutional neural network to detect and localize nearby agents onboard the quadcopters in real time. Rather than manually labeling a dataset, we automatically annotate images for training the neural network using background subtraction, obtained by systematically flying a quadcopter in front of a static camera. We use a multi-agent state tracker to estimate the relative positions and velocities of nearby agents, which are subsequently fed to a flocking algorithm for high-level control. The drones are equipped with multiple cameras to provide omnidirectional visual inputs. The camera setup ensures the safety of the flock by avoiding blind spots regardless of the agent configuration. We evaluate the approach with a group of three real quadcopters that are controlled using the proposed vision-based flocking algorithm. The results show that the drones can safely navigate in an outdoor environment despite substantial background clutter and difficult lighting conditions.
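A minimal sketch of the automatic annotation idea from the abstract above, assuming a recording of a quadcopter flying in front of a static camera; the file name, subtractor settings, and minimum blob area are illustrative choices, not the ones used in the letter.

```python
import cv2

# Hypothetical recording of a quadcopter flying in front of a static camera.
cap = cv2.VideoCapture("static_camera_flight.mp4")
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=32,
                                                detectShadows=False)
annotations = []  # (frame_index, x, y, w, h) bounding boxes for training

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        # Assume the largest moving blob is the drone and use its box as label.
        largest = max(contours, key=cv2.contourArea)
        if cv2.contourArea(largest) > 100:
            x, y, w, h = cv2.boundingRect(largest)
            annotations.append((frame_idx, x, y, w, h))
    frame_idx += 1
cap.release()
```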
Abstract: Human-Robot Interfaces (HRIs) represent a crucial component in telerobotic systems. Body-Machine Interfaces (BoMIs) based on body motion can feel more intuitive than standard HRIs for naive users, as they leverage humans' natural control capability over their movements. Among the different methods used to map human gestures into robot commands, data-driven approaches select a set of body segments and transform their motion into commands for the robot based on the users' spontaneous motion patterns. Despite being a versatile and generic method, there is no scientific evidence that implementing an interface based on spontaneous motion maximizes its effectiveness. In this study, we compare a set of BoMIs based on different body segments to investigate this aspect. We evaluate the interfaces in a teleoperation task with a fixed-wing drone and observe users' performance and feedback. To this end, we use a framework that allows a user to control the drone with a single Inertial Measurement Unit (IMU) and without prior instructions. We show through a user study that selecting the body segment for a BoMI based on spontaneous motion can lead to sub-optimal performance. Based on our findings, we suggest additional metrics based on biomechanical and behavioral factors that might improve data-driven methods for the design of HRIs.
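To illustrate the kind of data-driven mapping the abstract above refers to, the sketch below reduces spontaneous IMU motion with PCA and maps the two dominant components to roll/pitch commands. The recorded quantities, scaling, and the choice of two components are illustrative assumptions, not the interface described in the study.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical calibration log: N samples of IMU orientation (roll, pitch, yaw)
# in radians, recorded while the user spontaneously mimics the drone's motion.
rng = np.random.default_rng(1)
imu_log = rng.normal(0.0, 0.2, size=(2000, 3))

# Fit a 2-component PCA: the two dominant motion directions of the chosen
# body segment become the two control axes of the interface.
pca = PCA(n_components=2)
pca.fit(imu_log)
scale = 1.0 / (3.0 * np.sqrt(pca.explained_variance_))  # normalize to ~[-1, 1]

def imu_to_command(imu_sample):
    """Map one IMU sample to (roll_cmd, pitch_cmd) in [-1, 1]."""
    projected = pca.transform(imu_sample.reshape(1, -1))[0] * scale
    return np.clip(projected, -1.0, 1.0)

print(imu_to_command(np.array([0.1, -0.05, 0.02])))
```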
Abstract: Among the available solutions for drone swarm simulations, we identified a gap in simulation frameworks that allow easy algorithm prototyping, tuning, debugging, and performance analysis, and that do not require the user to interface with multiple programming languages. We present SwarmLab, a software package entirely written in Matlab that aims at the creation of standardized processes and metrics to quantify the performance and robustness of swarm algorithms, with a particular focus on drones. We showcase the functionalities of SwarmLab by comparing two state-of-the-art algorithms for the navigation of aerial swarms in cluttered environments, Olfati-Saber's and Vasarhelyi's. We analyze the variability of the inter-agent distances and the agents' speeds during flight. We also study some of the performance metrics presented, i.e., order, inter- and extra-agent safety, union, and connectivity. While Olfati-Saber's approach results in a faster crossing of the obstacle field, Vasarhelyi's approach allows the agents to fly smoother trajectories, without oscillations. We believe that SwarmLab is relevant for both the biological and robotics research communities, as well as for education, since it allows fast algorithm development, automatic collection of simulated data, and systematic analysis of swarming behaviors with performance metrics inherited from the state of the art.
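SwarmLab itself is written in Matlab; as a language-agnostic illustration of one of the metrics named above, the NumPy sketch below computes the order metric (mean pairwise alignment of agent velocities) following its common definition in the flocking literature, not necessarily SwarmLab's exact implementation.

```python
import numpy as np

def order_metric(velocities, eps=1e-9):
    """Mean pairwise cosine alignment of agent velocities.

    velocities: (N, 3) array of agent velocity vectors at one time step.
    Returns a value in [-1, 1]; 1 means all agents move in the same direction.
    """
    v = np.asarray(velocities, dtype=float)
    unit = v / (np.linalg.norm(v, axis=1, keepdims=True) + eps)
    cos = unit @ unit.T                      # pairwise cosines, incl. diagonal
    n = len(v)
    return (cos.sum() - n) / (n * (n - 1))   # drop self-pairs, average the rest

# Example: three agents, two roughly aligned and one moving sideways.
print(order_metric([[1.0, 0.0, 0.0], [0.9, 0.1, 0.0], [0.0, 1.0, 0.0]]))
```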
Abstract: Drone teleoperation is usually accomplished using remote radio controllers, devices that can be hard to master for inexperienced users. Moreover, the limited amount of information fed back to the user about the robot's state, often restricted to vision, can represent a bottleneck for operation in several conditions. In this work, we present a wearable interface for drone teleoperation and its evaluation through a user study. The two main features of the proposed system are a data glove that allows the user to control the drone trajectory by hand motion and a haptic system used to augment the user's awareness of the environment surrounding the robot. This interface can be employed for the operation of robotic systems in line of sight (LoS) by inexperienced operators and allows them to safely perform tasks common in inspection and search-and-rescue missions, such as approaching walls and crossing narrow passages, under limited visibility. In addition to the design and implementation of the wearable interface, we systematically assessed the effectiveness of the system through three user studies (n = 36) evaluating the users' learning path and their ability to perform tasks with limited visibility. We validated our ideas in both a simulated and a real-world environment. Our results demonstrate that the proposed system can improve teleoperation performance in different cases compared to standard remote controllers, making it a viable alternative to standard Human-Robot Interfaces.
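A minimal sketch of how haptic augmentation of the kind described above could map an obstacle distance reading to a vibration intensity; the thresholds and the linear ramp are illustrative assumptions, not the rendering scheme used in the study.

```python
def distance_to_vibration(distance_m, d_min=0.3, d_max=2.0):
    """Map an obstacle distance (meters) to a vibration intensity in [0, 1].

    Closer than d_min -> full intensity; farther than d_max -> no feedback;
    in between, intensity ramps up linearly as the obstacle gets closer.
    """
    if distance_m <= d_min:
        return 1.0
    if distance_m >= d_max:
        return 0.0
    return (d_max - distance_m) / (d_max - d_min)

# Example: a wall detected 0.8 m away triggers ~70% vibration intensity.
print(distance_to_vibration(0.8))
```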
Abstract: Besides being part of the Internet of Things (IoT), drones can play a relevant role in it as enablers. The 3D mobility of UAVs can be exploited to improve node localization in IoT networks for, e.g., search and rescue or goods localization and tracking. One of the most widespread IoT communication technologies is the Long Range Wide Area Network (LoRaWAN), which achieves long communication distances with low power. In this work, we present a drone-aided localization system for LoRa networks in which a UAV is used to improve the estimation of a node's location initially provided by the network. We characterize the relevant parameters of the communication system and use them to develop and test a search algorithm in a realistic simulated scenario. We then move to the full implementation of a real system in which a drone is seamlessly integrated into Swisscom's LoRa network. The drone coordinates with the network through a two-way exchange of information, which results in an accurate and fully autonomous localization system. The results obtained in our field tests show a ten-fold improvement in localization precision with respect to the estimation provided by the fixed network. To the best of our knowledge, this is the first time a UAV has been successfully integrated into a LoRa network to improve its localization accuracy.
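To illustrate the kind of signal model a drone-aided search like the one above can build on, the sketch below inverts a standard log-distance path-loss model to turn a LoRa RSSI reading into a rough range estimate; the reference power and path-loss exponent are placeholder values, not the parameters characterized in the work.

```python
def rssi_to_distance(rssi_dbm, rssi_at_1m=-40.0, path_loss_exp=2.7):
    """Rough range estimate (meters) from an RSSI reading (dBm).

    Inverts the log-distance path-loss model
        RSSI(d) = RSSI(1 m) - 10 * n * log10(d),
    so d = 10 ** ((RSSI(1 m) - RSSI(d)) / (10 * n)).
    """
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10.0 * path_loss_exp))

# Example: a -100 dBm packet corresponds to roughly 166 m with these values.
print(rssi_to_distance(-100.0))
```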
Abstract: Small unmanned aerial vehicles (UAVs) have penetrated multiple domains over the past years. In GNSS-denied or indoor environments, aerial robots require a robust and stable localization system, often with external feedback, in order to fly safely. Motion capture systems are typically utilized indoors when accurate localization is needed. However, these systems are expensive and most require a fixed setup. Recently, visual-inertial odometry and similar methods have advanced to a point where autonomous UAVs can rely on them for localization. The main limitations in this case come from the environment, as well as from the error that accumulates in long-term autonomy if loop closure cannot be performed efficiently. For instance, low visibility due to dust or smoke in post-disaster scenarios might render the odometry methods inapplicable. In this paper, we study and characterize an ultra-wideband (UWB) system for navigation and localization of aerial robots indoors based on Decawave's DWM1001 UWB node. The system is portable, inexpensive, and can be entirely battery powered. We show the viability of this system for autonomous flight of UAVs, and provide open-source methods and data that enable its widespread application even with movable anchor systems. We characterize the accuracy based on the position of the UAV with respect to the anchors, its altitude and speed, and the distribution of the anchors in space. Finally, we analyze the accuracy of the self-calibration of the anchors' positions.
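A minimal multilateration sketch for a UWB setup like the one above: given known anchor positions and range measurements, the tag position is recovered with nonlinear least squares. The anchor layout and noise level below are made up for illustration and do not reproduce the paper's characterization.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical anchor positions (meters) in a room, e.g. DWM1001 nodes.
anchors = np.array([[0.0, 0.0, 0.3],
                    [5.0, 0.0, 0.3],
                    [5.0, 4.0, 2.5],
                    [0.0, 4.0, 2.5]])

true_pos = np.array([2.0, 1.5, 1.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1)
ranges += np.random.default_rng(2).normal(0.0, 0.05, ranges.size)  # ~5 cm noise

def residuals(p):
    # Difference between measured ranges and ranges predicted from position p.
    return np.linalg.norm(anchors - p, axis=1) - ranges

estimate = least_squares(residuals, x0=anchors.mean(axis=0)).x
print("estimated position:", estimate)
```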
Abstract: Decentralized drone swarms deployed today rely either on sharing positions among agents or on detecting swarm members with the help of visual markers. This work proposes an entirely visual approach to coordinate markerless drone swarms based on imitation learning. Each agent is controlled by a small and efficient convolutional neural network that takes raw omnidirectional images as inputs and predicts 3D velocity commands that match those computed by a flocking algorithm. We start training in simulation and propose a simple yet effective unsupervised domain adaptation approach to transfer the learned controller to the real world. We further train the controller with data collected in our motion capture hall. We show that the convolutional neural network trained on the visual inputs of the drone can learn not only robust inter-agent collision avoidance but also cohesion of the swarm in a sample-efficient manner. The neural controller effectively learns to localize other agents in the visual input, which we show by visualizing the regions with the most influence on the motion of an agent. We remove the dependence on sharing positions among swarm members by taking only local visual information into account for control. Our work can therefore be seen as the first step towards a fully decentralized, vision-based swarm without the need for communication or visual markers.
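A compact PyTorch sketch of the imitation-learning setup described above: a small CNN maps an omnidirectional image to a 3D velocity command and is regressed against the command of a flocking algorithm. The image size, architecture, and training details are illustrative assumptions, not the network used in the work.

```python
import torch
import torch.nn as nn

class VelocityRegressor(nn.Module):
    """Small CNN: omnidirectional grayscale image -> 3D velocity command."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 3)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = VelocityRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One imitation-learning step on a dummy batch: images paired with the
# velocity commands that a flocking algorithm computed for the same states.
images = torch.randn(8, 1, 64, 256)        # batch of unrolled panoramic images
flocking_cmds = torch.randn(8, 3)          # target 3D velocities (teacher)
loss = nn.functional.mse_loss(model(images), flocking_cmds)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print("imitation loss:", loss.item())
```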
Abstract: This paper presents a data-driven approach to learning vision-based collective behavior from a simple flocking algorithm. We simulate a swarm of quadrotor drones and formulate the controller as a regression problem in which we generate 3D velocity commands directly from raw camera images. The dataset is created by simultaneously acquiring omnidirectional images and computing the corresponding control command from the flocking algorithm. We show that a convolutional neural network trained on the visual inputs of the drone can learn not only robust collision avoidance but also coherence of the flock in a sample-efficient manner. The neural controller effectively learns to localize other agents in the visual input, which we show by visualizing the regions with the most influence on the motion of an agent. This weakly supervised saliency map can be computed efficiently and may be used as a prior for subsequent detection and relative localization of other agents. We remove the dependence on sharing positions among flock members by taking only local visual information into account for control. Our work can therefore be seen as the first step towards a fully decentralized, vision-based flock without the need for communication or visual markers to aid detection of other agents.
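As a hedged illustration of visualizing which image regions influence the predicted command, the sketch below computes a gradient-based saliency map for any image-to-velocity model, such as the VelocityRegressor sketched after the previous abstract; gradient saliency is one common technique, not necessarily the exact visualization used in the paper.

```python
import torch
import torch.nn as nn

def saliency_map(model, image):
    """Gradient of the predicted command's magnitude w.r.t. the input pixels.

    image: tensor of shape (1, C, H, W). Returns an (H, W) map where large
    values mark pixels with the most influence on the predicted velocity.
    """
    model.eval()
    image = image.clone().requires_grad_(True)
    command = model(image)              # predicted 3D velocity, shape (1, 3)
    command.norm().backward()           # scalar objective: command magnitude
    # Max absolute gradient over channels -> one (H, W) saliency map.
    return image.grad.abs().amax(dim=1).squeeze(0)

# Usage with a placeholder linear model and a dummy 64x256 panoramic image;
# in practice, pass the trained CNN controller instead.
dummy_model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 256, 3))
sal = saliency_map(dummy_model, torch.randn(1, 1, 64, 256))
print(sal.shape)  # torch.Size([64, 256])
```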