Artistic performances involving robotic systems present unique technical challenges, akin to those encountered in other field deployments. In this paper, we delve into the orchestration of robotic artistic performances, focusing on the complexities inherent in communication protocols and localization methods. Through our case studies and experimental insights, we demonstrate the breadth of technical requirements for this type of deployment and, most importantly, the significant contribution of working closely with non-experts.
Spherical robots have garnered increasing interest for their applications in exploration, tunnel inspection, and extraterrestrial missions. Diverse designs have emerged, including barycentric configurations and pendulum-based mechanisms, among others. In addition, a wide spectrum of control strategies has been proposed, ranging from traditional PID approaches to cutting-edge neural networks. Our systematic review aims to comprehensively identify and categorize the locomotion systems and control schemes employed by spherical robots, spanning the years 1996 to 2023. A meticulous search across five databases yielded a dataset of 3189 records. Through our exhaustive analysis, we identified a collection of novel designs and control strategies. Leveraging these insights, we provide valuable recommendations for optimizing the design and control aspects of spherical robots, supporting both novel design endeavors and the advancement of field deployments. Furthermore, we highlight key research directions that hold the potential to unlock the full capabilities of spherical robots.
Precision agriculture aims to use technological tools in the agro-food sector to increase productivity, cut labor costs, and reduce the use of resources. This work takes inspiration from bees' vision to design a remote sensing system tailored to incorporate UV reflectance into a flower detector. We demonstrate how this approach can provide feature-rich images for deep-learning-based strawberry flower detection, and we apply it to a scalable yet cost-effective aerial monitoring robotic system in the field. We also compare the performance of our UV-G-B image detector with similar work that uses RGB images.
The drone industry is diversifying and the number of pilots is increasing rapidly. In this context, flight schools need adapted tools to train pilots, most importantly with regard to pilots' awareness of their own physiological and cognitive limits. In civil and military aviation, pilots can train on realistic simulators to tune their reactions and reflexes, but also to gather data on their piloting behavior and physiological states, which helps them improve their performance. Unlike cockpit scenarios, drone teleoperation is conducted outdoors in the field, so desktop simulation training offers only limited benefits. This work aims to provide a solution for capturing pilot behavior out in the field and helping pilots increase their performance. We combined advanced object detection from a frontal camera with gaze and heart-rate variability measurements. We observed pilots and analyzed their behavior over three flight challenges. We believe this tool can support pilots both in their training and in their regular flight tasks. A demonstration video is available at https://www.youtube.com/watch?v=eePhjd2qNiI
Several deployment locations of mobile robotic systems are human-made (e.g., urban firefighting, building inspection, property security), and the site manager may have access to domain-specific knowledge about the place, which can provide semantic contextual information allowing better reasoning and decision making. In this paper, we propose a system that allows a mobile robot to operate in a location-aware and operator-friendly way by leveraging semantic information from the deployment location and integrating it into the robot's localization and navigation systems. We integrate Building Information Models (BIM) into the Robot Operating System (ROS) to generate topological and metric maps that feed a layered path planner (global and local). A map-merging algorithm integrates newly discovered obstacles into the metric map, while a UWB-based localization system detects equipment to be registered back into the semantic database. The results are validated in simulation and in real-life deployments in buildings and construction sites.
With the growth of automated data collection on construction projects, the need for semantic navigation by mobile robots is increasing. In this paper, we propose an infrastructure to leverage building-related information for smarter, safer, and more precise robot navigation during the construction phase. Our use of Building Information Models (BIM) in robot navigation is twofold: (1) the intuitive semantic information enables non-experts to deploy robots, and (2) the semantic data exposed to the navigation system allows optimal path planning (not necessarily the shortest one). Our Building Information Robotic System (BIRS) uses Industry Foundation Classes (IFC) as the interoperable data format between BIM and the Robot Operating System (ROS). BIRS generates topological and metric maps from BIM for use in ROS. An optimal path planner integrating critical components for construction assessment is proposed, using a cascade strategy (global versus local). The results are validated through a series of experiments on construction sites.
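The idea of an optimal rather than shortest path can be sketched as a graph search whose edge costs blend metric distance with semantic penalties drawn from the building model. The sketch below is illustrative only: the `plan` helper, the cost values, and the toy room graph are our assumptions, not BIRS's actual planner.

```python
import heapq

def plan(edges, start, goal):
    """Dijkstra over an undirected graph whose edge costs blend
    metric distance with semantic penalties (e.g. a corridor through
    an active work zone gets a high cost even if it is short)."""
    adj = {}
    for (u, v), c in edges.items():
        adj.setdefault(u, []).append((v, c))
        adj.setdefault(v, []).append((u, c))
    pq = [(0.0, start, [start])]   # (cost so far, node, path)
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nb, c in adj.get(node, []):
            if nb not in seen:
                heapq.heappush(pq, (cost + c, nb, path + [nb]))
    return float("inf"), []

# Toy room graph: A->B directly crosses a work zone (semantic cost 10),
# while the detour A->C->B costs only 2 + 3 = 5.
work_zone_graph = {("A", "B"): 10.0, ("A", "C"): 2.0, ("C", "B"): 3.0}
```

With these costs the planner returns the detour `["A", "C", "B"]` even though the direct edge has fewer hops, which is the sense in which the semantically optimal path need not be the shortest.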
With the growth in the use of autonomous Unmanned Ground Vehicles (UGVs) for automated data collection from construction projects, the problem of inter-disciplinary semantic data sharing and exchange between the construction and robotic domains has attracted construction stakeholders' attention. Cross-domain data translation requires detailed specifications, especially when it comes to semantic data translation. Building Information Modeling (BIM) and Geographic Information System (GIS) are the two technologies used to capture and store construction data for indoor structures and outdoor environments, respectively. In the absence of a standard format for data exchange between the construction and robotic domains, the tools of both industries are yet to be integrated in a coherent deployment infrastructure. Hence, the semantics of BIM-GIS cannot be automatically integrated by any robotic platform. To enable semantic data transfer across domains, semantic web technology has been widely used in multidisciplinary areas for interoperability. We exploit it to pave the way to smarter, quicker, and more precise robot navigation on job sites. This paper develops a semantic web ontology integrating robot navigation and data collection to convey the meanings from BIM-GIS to the robot. The proposed Building Information Robotic System (BIRS) provides construction data that are semantically transferred to the robotic platform and can be used by the robot navigation software stack on construction sites. To reach this objective, we first bridge the knowledge representation between the construction and robotic domains. Then, we develop a semantic database integrated with the Robot Operating System (ROS), which can communicate with the robot and the navigation system to provide the robot with semantic building data at each step of data collection. Finally, the proposed system is validated through a case study.
A polynomial solution to the inverse kinematic problem of the Kinova Gen3 Lite robot is proposed in this paper. This serial robot is based on a 6R kinematic chain and is not wrist-partitioned. Starting from the forward kinematics equations providing the position and orientation of the end-effector, a univariate polynomial equation is derived as a function of the first joint variable $\theta_{1}$. The remaining joint variables are computed by back substitution; thus, a unique set of joint positions is obtained for each root of the univariate equation. Numerical examples, simulated in ROS (Robot Operating System), are given to validate the results, which are compared to the coordinates obtained with MoveIt! and with the actual robot. A procedure to choose an optimal posture of the robot is also proposed.
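The structure of such a solution can be illustrated with a toy stand-in: a quadratic in the tangent half-angle $t = \tan(\theta_{1}/2)$ rather than the actual higher-degree polynomial, whose coefficients depend on the Gen3 Lite's geometry. Real roots yield candidate values of $\theta_{1}$, and one posture can then be selected; the "closest to current configuration" criterion below is our assumption, not the paper's procedure.

```python
import math

def real_roots_quadratic(a, b, c):
    """Real roots of a*t^2 + b*t + c = 0 -- a toy stand-in for the
    higher-degree univariate polynomial derived from the forward
    kinematics equations."""
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return []
    s = math.sqrt(disc)
    return [(-b + s) / (2.0 * a), (-b - s) / (2.0 * a)]

def candidate_theta1(a, b, c):
    """Undo the tangent half-angle substitution t = tan(theta1/2)
    to recover candidate joint angles."""
    return [2.0 * math.atan(t) for t in real_roots_quadratic(a, b, c)]

def pick_optimal(candidates, current):
    """Hypothetical 'optimal posture' heuristic: pick the candidate
    with the smallest displacement from the current theta1."""
    return min(candidates, key=lambda th: abs(th - current))
```

For each selected root, the remaining five joint variables would then be recovered by back substitution into the kinematic equations, one unique joint set per root.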
The coordination of robot swarms - large decentralized teams of robots - generally relies on robust and efficient inter-robot communication. Maintaining communication between robots is particularly challenging in field deployments. Unstructured environments, limited computational resources, low bandwidth, and robot failures all contribute to the complexity of connectivity maintenance. In this paper, we propose a novel lightweight algorithm to navigate a group of robots in complex environments while maintaining connectivity by building a chain of robots. The algorithm is robust to single robot failures and can heal broken communication links. The algorithm works in 3D environments: when a region is unreachable by wheeled robots, the chain is extended with flying robots. We test the performance of the algorithm using up to 100 robots in a physics-based simulator with three mazes and different robot failure scenarios. We then validate the algorithm with physical platforms: 7 wheeled robots and 6 flying ones, in homogeneous and heterogeneous scenarios.
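The chain-building behavior can be illustrated with a deliberately simplified 2D model; the communication range, speed limit, activation threshold, and healing rule below are our assumptions, not the paper's actual controller. Each robot follows its predecessor whenever the link approaches the communication range, and when a robot fails, its successor re-links directly to the failed robot's predecessor.

```python
import math

COMM_RANGE = 5.0  # assumed radio range (illustrative)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def move_toward(p, q, step=1.0):
    """Move p toward q by at most `step` (a simple speed limit)."""
    d = dist(p, q)
    if d <= step:
        return q
    return (p[0] + step * (q[0] - p[0]) / d,
            p[1] + step * (q[1] - p[1]) / d)

def step_chain(positions, leader_target, comm_range=COMM_RANGE, step=1.0):
    """One control tick: robot 0 heads to the target; each follower
    moves toward its predecessor when the link nears the comm range."""
    new = list(positions)
    new[0] = move_toward(positions[0], leader_target, step)
    for i in range(1, len(positions)):
        if dist(positions[i], new[i - 1]) > 0.8 * comm_range:
            new[i] = move_toward(positions[i], new[i - 1], step)
    return new

def heal(positions, failed_index):
    """Single-robot failure: the successor re-links to the failed
    robot's predecessor by dropping the failed node from the chain."""
    return positions[:failed_index] + positions[failed_index + 1:]
```

In the same spirit, the 3D extension described above would append flying robots to the tail of the chain when a region is unreachable by the wheeled ones.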
Redundancy and parallelism make decentralized multi-robot systems appealing solutions for the exploration of extreme environments. However, effective cooperation often requires team-wide connectivity and a carefully designed communication strategy. Several recently proposed decentralized connectivity maintenance approaches exploit elegant algebraic results drawn from spectral graph theory. Yet, these proposals are rarely taken beyond simulations or laboratory implementations. In this work, we present two major contributions: (i) we describe the full-stack implementation---from hardware to software---of a decentralized control law for robust connectivity maintenance; and (ii) we assess, in the field, our setup's ability to correctly exchange all the necessary information required to maintain connectivity in a team of quadcopters.
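The spectral-graph-theoretic quantity underlying such connectivity-maintenance laws is the algebraic connectivity (Fiedler value): the second-smallest eigenvalue of the graph Laplacian, which is strictly positive if and only if the communication graph is connected. A minimal sketch, assuming NumPy and a symmetric 0/1 adjacency matrix:

```python
import numpy as np

def laplacian(adjacency):
    """Graph Laplacian L = D - A, where D is the degree matrix."""
    A = np.asarray(adjacency, dtype=float)
    return np.diag(A.sum(axis=1)) - A

def algebraic_connectivity(adjacency):
    """Fiedler value: second-smallest eigenvalue of L. Strictly
    positive iff the communication graph is connected, so a
    decentralized control law can keep the team connected by
    driving the robots to keep this value above zero."""
    eigenvalues = np.linalg.eigvalsh(laplacian(adjacency))
    return eigenvalues[1]
```

For example, a three-robot chain (path graph) has a Fiedler value of 1, while the same team with one isolated robot has a Fiedler value of 0.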