Control design for robotic systems is complex and often requires solving an optimization problem to follow a trajectory accurately. Online optimization approaches like Model Predictive Control (MPC) have been shown to achieve great tracking performance, but require high computing power. Conversely, learning-based offline optimization approaches, such as Reinforcement Learning (RL), allow fast and efficient execution on the robot but hardly match the accuracy of MPC in trajectory tracking tasks. In systems with limited compute, such as aerial vehicles, an accurate controller that is efficient at execution time is imperative. We propose an Analytic Policy Gradient (APG) method to tackle this problem. APG exploits the availability of differentiable simulators by training a controller offline with gradient descent on the tracking error. We address training instabilities that frequently occur with APG through curriculum learning, and experiment on a widely used controls benchmark, the CartPole, and two common aerial robots, a quadrotor and a fixed-wing drone. Our proposed method outperforms both model-based and model-free RL methods in terms of tracking error. Concurrently, it achieves similar performance to MPC while requiring more than an order of magnitude less computation time. Our work provides insights into the potential of APG as a promising control method for robotics. To facilitate the exploration of APG, we open-source our code and make it available at https://github.com/lis-epfl/apg_trajectory_tracking.
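The core idea of APG can be illustrated with a toy example: gradient descent directly on the tracking error of a rolled-out trajectory. The sketch below is a minimal stand-in, not the paper's implementation: a 1-D point mass with a two-gain policy, where finite differences substitute for the automatic differentiation that a true differentiable simulator would provide.

```python
import numpy as np

def simulate(gains, x_ref, steps=50, dt=0.05):
    """Roll out a 1-D point mass tracking a reference; return summed squared error."""
    pos, vel, loss = 0.0, 0.0, 0.0
    k_p, k_d = gains
    for _ in range(steps):
        err = x_ref - pos
        u = k_p * err - k_d * vel          # simple PD-style policy
        vel += u * dt                      # point-mass dynamics (Euler step)
        pos += vel * dt
        loss += err ** 2
    return loss

def grad_fd(gains, x_ref, eps=1e-5):
    """Finite-difference gradient; a differentiable simulator yields this by autodiff."""
    g = np.zeros_like(gains)
    for i in range(len(gains)):
        up, dn = gains.copy(), gains.copy()
        up[i] += eps
        dn[i] -= eps
        g[i] = (simulate(up, x_ref) - simulate(dn, x_ref)) / (2 * eps)
    return g

gains = np.array([0.5, 0.5])
for _ in range(150):                       # gradient descent on the tracking error
    gains -= 0.002 * grad_fd(gains, x_ref=1.0)
print("trained gains:", gains)
```

The same loop, with the simulator's gradients obtained analytically instead of by finite differences, scales to the high-dimensional quadrotor and fixed-wing dynamics treated in the paper.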
Gesture-based interfaces are often used to achieve a more natural and intuitive teleoperation of robots. Yet, sometimes, gesture control requires postures or movements that cause significant fatigue to the user. In a previous user study, we demonstrated that naïve users can control a fixed-wing drone with torso movements while their arms are spread out. However, this posture induced significant arm fatigue. In this work, we present a passive arm support that compensates the arm weight with a mean torque error smaller than 0.005 N/kg for more than 97% of the range of motion used by subjects to fly, thereby reducing muscular fatigue in the shoulder by 58% on average. In addition, this arm support is designed to fit users with body dimensions ranging from the 1st percentile female to the 99th percentile male. The performance of the arm support is described with a mechanical model, and its implementation is validated with both a mechanical characterization and a user study that measures flight performance, shoulder muscle activity, and user acceptance.
People learn motor activities best when they are conscious of their errors and make a concerted effort to correct them. While haptic interfaces can facilitate motor training, existing interfaces are often bulky and do not always ensure post-training skill retention. Here, we describe a programmable haptic sleeve composed of textile-based electroadhesive clutches for skill acquisition and retention. We show its functionality in a motor learning study where users control a drone's movement using elbow joint rotation. Haptic feedback is used to restrain elbow motion and make users aware of their errors. This helps users consciously learn to avoid errors from occurring. While all subjects exhibited similar performance during the baseline phase of motor learning, those subjects who received haptic feedback from the haptic sleeve committed 23.5% fewer errors than subjects in the control group during the evaluation phase. The results show that the sleeve helps users retain and transfer motor skills better than visual feedback alone. This work shows the potential for fabric-based haptic interfaces as a training aid for motor tasks in the fields of rehabilitation and teleoperation.
Designing optimal soft modular robots is difficult, due to non-trivial interactions between morphology and controller. Evolutionary algorithms (EAs), combined with physical simulators, represent a valid tool to overcome this issue. In this work, we investigate algorithmic solutions to improve the Quality Diversity of co-evolved designs of Tensegrity Soft Modular Robots (TSMRs) for two robotic tasks, namely goal reaching and squeezing through a narrow passage. To this aim, we use three different EAs, i.e., MAP-Elites and two custom algorithms: one based on Viability Evolution (ViE) and NEAT (ViE-NEAT), the other named Double Map MAP-Elites (DM-ME) and devised to seek diversity while co-evolving robot morphologies and neural network (NN)-based controllers. In detail, DM-ME extends MAP-Elites in that it uses two distinct feature maps, referring to morphologies and controllers respectively, and integrates a mechanism to automatically define the NN-related feature descriptor. In terms of fitness, ViE-NEAT outperforms MAP-Elites on the goal-reaching task and performs equivalently to DM-ME. Instead, when considering diversity in terms of "illumination" of the feature space, DM-ME outperforms the other two algorithms on both tasks, providing a richer pool of possible robotic designs, whereas ViE-NEAT shows performance comparable to MAP-Elites on goal reaching, although it does not exploit any map.
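As a minimal sketch of the MAP-Elites baseline that the custom algorithms extend (not the paper's implementation), the archive keeps one elite per discretized feature-descriptor cell; the toy genome encoding and fitness below are illustrative assumptions. DM-ME would maintain two such archives, one keyed by morphological features and one by controller features.

```python
import random

def map_elites(evaluate, sample, mutate, bins=10, iters=2000):
    """Minimal MAP-Elites: one archive cell per discretized feature descriptor."""
    archive = {}  # cell -> (fitness, genome)
    for _ in range(iters):
        if archive:
            genome = mutate(random.choice(list(archive.values()))[1])
        else:
            genome = sample()
        fitness, features = evaluate(genome)
        cell = tuple(min(int(f * bins), bins - 1) for f in features)
        if cell not in archive or fitness > archive[cell][0]:
            archive[cell] = (fitness, genome)   # keep the elite of each cell
    return archive

# Toy domain: the genome is a pair in [0,1]^2, the features are the genome
# itself, and fitness rewards closeness to the centre of the space.
evaluate = lambda g: (1 - abs(g[0] - 0.5) - abs(g[1] - 0.5), g)
sample = lambda: (random.random(), random.random())
mutate = lambda g: tuple(min(1.0, max(0.0, x + random.gauss(0, 0.1))) for x in g)

archive = map_elites(evaluate, sample, mutate)
print(len(archive), "cells illuminated out of 100")
```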
The control of collective robotic systems, such as drone swarms, is often delegated to autonomous navigation algorithms due to their high dimensionality. However, like other robotic entities, drone swarms can still benefit from being teleoperated by human operators, whose perception and decision-making capabilities are still out of the reach of autonomous systems. Drone swarm teleoperation is only at its dawn, and a standard human-swarm interface is missing to date. In this study, we analyzed the spontaneous interaction strategies of naive users with a swarm of drones. We implemented a machine-learning algorithm to define a personalized Body-Machine Interface (BoMI) based only on a short calibration procedure. During this procedure, the human operator is asked to move spontaneously as if they were in control of a simulated drone swarm. We observed that the hands are the most commonly adopted body segment, and thus chose a LEAP Motion controller to track them and let users control the aerial drone swarm. This choice makes our interface portable, since it does not rely on a centralized system for tracking the human body. We validated our algorithm for defining personalized BoMIs with a set of participants in a realistic simulated environment, showing promising results in performance and user experience. Our method gives the user unprecedented freedom to choose between position and velocity control, based solely on their body motion preferences.
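A personalized body-to-command mapping of this kind could be fit from a short calibration recording; the linear least-squares model and the toy 2-D hand-displacement features below are illustrative assumptions, not the paper's actual learning algorithm.

```python
import numpy as np

def fit_bomi(body_features, commands):
    """Least-squares linear map from hand-pose features to swarm commands."""
    X = np.hstack([body_features, np.ones((len(body_features), 1))])  # add bias
    W, *_ = np.linalg.lstsq(X, commands, rcond=None)
    return W

def apply_bomi(W, features):
    """Map one feature vector to a command with the calibrated weights."""
    return np.append(features, 1.0) @ W

# Toy calibration: 2-D hand displacement mapped to 2-D swarm velocity.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))                 # recorded hand motions
true_W = np.array([[1.0, 0.0], [0.0, 2.0]])  # hypothetical user preference
Y = X @ true_W                               # commands the user intended
W = fit_bomi(X, Y)
print(np.round(apply_bomi(W, np.array([1.0, 1.0])), 3))  # ≈ [1.0, 2.0]
```

Whether the outputs are interpreted as position or velocity commands can then follow from the user's own motion preferences, as described above.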
Dynamic environments such as urban areas are still challenging for popular visual-inertial odometry (VIO) algorithms. Existing datasets typically fail to capture the dynamic nature of these environments, therefore making it difficult to quantitatively evaluate the robustness of existing VIO methods. To address this issue, we propose three contributions: firstly, we provide the VIODE benchmark, a novel dataset recorded from a simulated UAV that navigates in challenging dynamic environments. The unique feature of the VIODE dataset is the systematic introduction of moving objects into the scenes. It includes three environments, each of which is available in four dynamic levels that progressively add moving objects. The dataset contains synchronized stereo images and IMU data, as well as ground-truth trajectories and instance segmentation masks. Secondly, we compare state-of-the-art VIO algorithms on the VIODE dataset and show that they display substantial performance degradation in highly dynamic scenes. Thirdly, we propose a simple extension for visual localization algorithms that relies on semantic information. Our results show that scene semantics are an effective way to mitigate the adverse effects of dynamic objects on VIO algorithms. Finally, we make the VIODE dataset publicly available at https://github.com/kminoda/VIODE.
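The semantic extension can be sketched as masking out feature points that fall on dynamic-object pixels of a segmentation mask; the class ids below are hypothetical, and this is an illustrative simplification rather than the benchmark's reference implementation.

```python
import numpy as np

DYNAMIC_CLASSES = {10, 11}   # hypothetical label ids, e.g. "car" and "pedestrian"

def filter_dynamic_features(keypoints, seg_mask, dynamic_classes=DYNAMIC_CLASSES):
    """Drop feature points that land on pixels labelled as dynamic objects."""
    keep = []
    for u, v in keypoints:                   # (col, row) pixel coordinates
        if seg_mask[int(v), int(u)] not in dynamic_classes:
            keep.append((u, v))
    return keep

# Toy 4x4 mask where the right half is a moving car (class 10)
mask = np.zeros((4, 4), dtype=np.uint8)
mask[:, 2:] = 10
pts = [(0, 0), (3, 1), (1, 3)]
print(filter_dynamic_features(pts, mask))    # the point at column 3 is discarded
```

Only the surviving features would then be passed to the VIO front-end, so dynamic objects no longer corrupt the motion estimate.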
The operation of telerobotic systems can be a challenging task, requiring intuitive and efficient interfaces to enable inexperienced users to attain a high level of proficiency. Body-Machine Interfaces (BoMI) represent a promising alternative to standard control devices, such as joysticks, because they leverage intuitive body motion and gestures. It has been shown that the use of Virtual Reality (VR) and first-person-view perspectives can increase the user's sense of presence in avatars. However, it is unclear whether these beneficial effects also occur in the teleoperation of non-anthropomorphic robots that display motion patterns different from those of humans. Here we describe experimental results on the teleoperation of a non-anthropomorphic drone showing that VR correlates with a higher sense of spatial presence, whereas viewpoints moving coherently with the robot are associated with a higher sense of embodiment. Furthermore, the experimental results show that spontaneous body motion patterns are affected by VR and viewpoint conditions in terms of variability, amplitude, and correlation with the robot's motion, suggesting that the design of BoMIs for drone teleoperation must take into account the use of Virtual Reality and the choice of the viewpoint.
Tensegrity structures are lightweight, can undergo large deformations, and have outstanding robustness capabilities. These unique properties inspired roboticists to investigate their use. However, the morphological design, control, assembly, and actuation of tensegrity robots are still difficult tasks. Moreover, the stiffness of tensegrity robots is still an underestimated design parameter. In this article, we propose to use easy-to-assemble, actuated tensegrity modules and body-brain co-evolution to design soft tensegrity modular robots. Moreover, we demonstrate the importance of tensegrity robot stiffness, showing how evolution suggests a different morphology, control, and locomotion strategy depending on the stiffness of the modules.
Deployment of drone swarms usually relies on inter-agent communication or visual markers that are mounted on the vehicles to simplify their mutual detection. This letter proposes a vision-based detection and tracking algorithm that enables groups of drones to navigate without communication or visual markers. We employ a convolutional neural network to detect and localize nearby agents onboard the quadcopters in real-time. Rather than manually labeling a dataset, we automatically annotate images to train the neural network using background subtraction by systematically flying a quadcopter in front of a static camera. We use a multi-agent state tracker to estimate the relative positions and velocities of nearby agents, which are subsequently fed to a flocking algorithm for high-level control. The drones are equipped with multiple cameras to provide omnidirectional visual inputs. The camera setup ensures the safety of the flock by avoiding blind spots regardless of the agent configuration. We evaluate the approach with a group of three real quadcopters that are controlled using the proposed vision-based flocking algorithm. The results show that the drones can safely navigate in an outdoor environment despite substantial background clutter and difficult lighting conditions.
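The automatic annotation step can be sketched as follows; this is an illustrative simplification (grayscale frames, a fixed threshold), not the authors' exact pipeline. With the camera static, any pixel that differs markedly from the drone-free background frame belongs to the drone, so a bounding-box label falls out of a simple difference image.

```python
import numpy as np

def auto_annotate(frame, background, thresh=25):
    """Bounding box of the drone via background subtraction (grayscale frames)."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    ys, xs = np.nonzero(diff > thresh)       # pixels that changed vs. background
    if len(xs) == 0:
        return None                          # no drone in view
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# Toy 8x8 frame with a bright 2x2 "drone" on a dark background
bg = np.zeros((8, 8), dtype=np.uint8)
frame = bg.copy()
frame[3:5, 4:6] = 200
print(auto_annotate(frame, bg))              # (4, 3, 5, 4)
```

Labels produced this way over many flight positions yield a training set for the detection network without any manual annotation.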