In robotics, motion capture systems have been widely used to measure the accuracy of localization algorithms. Moreover, this infrastructure can also be used for other computer vision tasks, such as the evaluation of Visual (-Inertial) SLAM dynamic initialization, multi-object tracking, or automatic annotation. Yet, to work optimally, these functionalities require accurate and reliable spatial-temporal calibration parameters between the camera and the global pose sensor. In this study, we provide two novel solutions to estimate these calibration parameters. First, we design an offline target-based method with high accuracy and consistency, in which the spatial-temporal parameters, the camera intrinsics, and the trajectory are optimized simultaneously. Then, we propose an online target-less method that eliminates the need for a calibration target and enables the estimation of time-varying spatial-temporal parameters. Additionally, we perform a detailed observability analysis for the target-less method. Our theoretical findings regarding observability are validated by simulation experiments and provide explainable guidelines for calibration. Finally, the accuracy and consistency of the two proposed methods are evaluated on hand-held real-world datasets where traditional hand-eye calibration methods do not work.
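For intuition, the joint estimation problem can be sketched as follows (the notation here is ours and purely illustrative): the motion capture system reports the pose ${}^{W}\mathbf{T}_{M}(t)$ of a marker body $M$ in the world frame $W$, and the camera pose is related to it through an unknown constant extrinsic ${}^{M}\mathbf{T}_{C}$ and an unknown time offset $t_d$,
\begin{equation}
  {}^{W}\mathbf{T}_{C}(t) = {}^{W}\mathbf{T}_{M}(t + t_d)\, {}^{M}\mathbf{T}_{C}.
\end{equation}
The offline target-based method can then be read as jointly minimizing the calibration-target reprojection error over ${}^{M}\mathbf{T}_{C}$, $t_d$, the camera intrinsics, and the trajectory.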
Vision-based grasping of unknown objects in unstructured environments is a key challenge for autonomous robotic manipulation. A practical grasp synthesis system is required to generate a diverse set of 6-DoF grasps from which a task-relevant grasp can be executed. Although generative models are suitable for learning such complex data distributions, existing models have limitations in grasp quality, long training times, and a lack of flexibility for task-specific generation. In this work, we present GraspLDM, a modular generative framework for 6-DoF grasp synthesis that uses diffusion models as priors in the latent space of a VAE. GraspLDM learns a generative model of object-centric $SE(3)$ grasp poses conditioned on point clouds. GraspLDM's architecture enables us to train task-specific models efficiently by re-training only a small denoising network in the low-dimensional latent space, as opposed to existing models that need expensive re-training. Our framework provides robust and scalable models on both full and single-view point clouds. GraspLDM models trained with simulation data transfer well to the real world and provide an 80\% success rate over 80 grasp attempts on diverse test objects, improving over existing generative models. We make our implementation available at https://github.com/kuldeepbrd1/graspldm.
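As a rough illustration of the latent-diffusion idea (a minimal PyTorch sketch with our own hypothetical module names, not the actual GraspLDM code), a small denoising network acts as a learned prior over the VAE latents, conditioned on a point-cloud embedding; DDPM-style ancestral sampling then produces latents that the VAE decoder maps to $SE(3)$ grasp poses:
\begin{verbatim}
# Hypothetical sketch, not the actual GraspLDM implementation.
import torch
import torch.nn as nn

class LatentDenoiser(nn.Module):
    """Small MLP predicting the noise added to a grasp latent at step t."""
    def __init__(self, z_dim=16, cond_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + cond_dim + 1, 256), nn.SiLU(),
            nn.Linear(256, 256), nn.SiLU(),
            nn.Linear(256, z_dim))

    def forward(self, z_t, t, cond):
        t = t.float().unsqueeze(-1) / 1000.0           # normalized timestep
        return self.net(torch.cat([z_t, t, cond], dim=-1))

@torch.no_grad()
def sample_grasps(denoiser, decoder, pc_embedding, n=32, steps=1000):
    """DDPM-style ancestral sampling in the VAE latent space."""
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas, abar = 1.0 - betas, torch.cumprod(1.0 - betas, dim=0)
    z, cond = torch.randn(n, 16), pc_embedding.expand(n, -1)
    for t in reversed(range(steps)):
        eps = denoiser(z, torch.full((n,), t), cond)
        z = (z - betas[t] / (1 - abar[t]).sqrt() * eps) / alphas[t].sqrt()
        if t > 0:
            z = z + betas[t].sqrt() * torch.randn_like(z)
    return decoder(z, cond)   # VAE decoder: latents -> SE(3) grasp poses
\end{verbatim}
Under this structure, task-specific generation only requires re-training the small denoiser, leaving the VAE encoder and decoder untouched.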
Nowadays, realistic simulation environments are essential to validate and build reliable robotic solutions. This is particularly true when using Reinforcement Learning (RL) based control policies. To this end, both robotics and RL developers need tools and workflows to create physically accurate simulations and synthetic datasets. Gazebo, MuJoCo, Webots, PyBullet, and Isaac Sim are some of the many tools available to simulate robotic systems. Developing learning-based methods for space navigation is, due to the highly complex nature of the problem, an intensive data-driven process that requires highly parallelized simulations. When it comes to the control of spacecraft, however, there is no easy-to-use simulation library designed for RL. We address this gap by harnessing the capabilities of NVIDIA Isaac Gym, where both the physics simulation and the policy training reside on the GPU. Building on this tool, we provide an open-source library enabling users to simulate thousands of spacecraft in parallel that learn a set of maneuvering tasks, such as position, attitude, and velocity control. These tasks enable the validation of complex space scenarios, such as trajectory optimization for landing, docking, rendezvous, and more.
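To make the GPU parallelism concrete, here is a minimal, self-contained sketch (our own toy dynamics, not the library's actual API): every environment is one row of a batched tensor, so a single vectorized update steps thousands of craft at once, e.g. for a position-control task:
\begin{verbatim}
# Toy batched double-integrator "spacecraft"; illustrative only.
import torch

num_envs, dt = 4096, 0.02
device = "cuda" if torch.cuda.is_available() else "cpu"
pos = torch.zeros(num_envs, 2, device=device)       # planar position [m]
vel = torch.zeros(num_envs, 2, device=device)       # velocity [m/s]
goal = torch.rand(num_envs, 2, device=device) * 10  # per-env targets

def step(thrust):
    """Batched dynamics update for a position-control task."""
    global pos, vel
    vel = vel + thrust * dt                          # thrust ~ acceleration
    pos = pos + vel * dt
    reward = -torch.linalg.norm(goal - pos, dim=-1)  # distance-based reward
    return torch.cat([pos, vel, goal - pos], dim=-1), reward

obs, r = step(torch.randn(num_envs, 2, device=device))  # one vectorized step
\end{verbatim}
The same batching pattern extends to attitude and velocity control by swapping the state and reward definitions.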
This investigation introduces a novel deep reinforcement learning-based suite to control floating platforms in both simulated and real-world environments. Floating platforms serve as versatile testbeds to emulate microgravity environments on Earth. Our approach addresses the system and environmental uncertainties in controlling such platforms by training policies capable of precise maneuvers amid dynamic and unpredictable conditions. Leveraging state-of-the-art deep reinforcement learning techniques, our suite achieves robustness, adaptability, and good transferability from simulation to reality. Our Deep Reinforcement Learning (DRL) framework provides advantages such as fast training times, large-scale testing capabilities, rich visualization options, and ROS bindings for integration with real-world robotic systems. Beyond policy development, our suite provides a comprehensive platform for researchers, offering open access at https://github.com/elharirymatteo/RANS/tree/ICRA24.
Accurate global localization is crucial for autonomous navigation and planning. To this end, GPS-aided Visual-Inertial Odometry (GPS-VIO) fusion algorithms have been proposed in the literature. This paper presents a novel GPS-VIO system that benefits significantly from online adaptive calibration of the rotational extrinsic parameter between the GPS reference frame and the VIO reference frame. The underlying reason is that this parameter is observable, for which this paper provides a novel proof through nonlinear observability analysis. We also evaluate the proposed algorithm extensively on diverse platforms, including a flying UAV and a driving vehicle. The experimental results support the observability analysis and show increased localization accuracy in comparison to state-of-the-art (SOTA) tightly-coupled algorithms.
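Illustratively (in our notation, not necessarily the paper's), each GPS position fix $\mathbf{z}_{\mathrm{GPS}}$, expressed in the global frame $G$, constrains the VIO position estimate ${}^{V}\mathbf{p}$ through the unknown rotational extrinsic ${}^{G}\mathbf{R}_{V}$ between the two reference frames,
\begin{equation}
  \mathbf{z}_{\mathrm{GPS}} = {}^{G}\mathbf{R}_{V}\, {}^{V}\mathbf{p} + {}^{G}\mathbf{p}_{V} + \mathbf{n},
\end{equation}
with frame translation ${}^{G}\mathbf{p}_{V}$ and measurement noise $\mathbf{n}$; it is the observability of ${}^{G}\mathbf{R}_{V}$ under general motion that the paper's analysis establishes.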
Developing algorithms for extra-terrestrial robotic exploration has always been challenging. Along with the complexity associated with these environments, one of the main issues remains the evaluation of said algorithms. With the regained interest in lunar exploration, there is also a demand for quality simulators that will enable the development of lunar robots. In this paper, we propose Omniverse Lunar Robotic-Sim (OmniLRS), a photorealistic lunar simulator based on Isaac Sim, Nvidia's robotic simulator. This simulation provides fast procedural environment generation and multi-robot capabilities, along with a synthetic data pipeline for machine-learning applications. It comes with ROS1 and ROS2 bindings to control not only the robots, but also the environments. This work also performs sim-to-real rock instance segmentation to show the effectiveness of our simulator for image-based perception. Trained on our synthetic data, a YOLOv8 model achieves performance close to a model trained on real-world data, with a 5% performance gap. When fine-tuned with real data, the model achieves 14% higher average precision than the model trained on real-world data, demonstrating our simulator's photorealism. The code is fully open-source, accessible here: https://github.com/AntoineRichard/LunarSim, and comes with demonstrations.
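A hedged sketch of this sim-to-real protocol using the public Ultralytics API (the dataset YAML names are placeholders and the hyperparameters are illustrative): pretrain an instance-segmentation model on the synthetic lunar renders, then fine-tune on real rock images:
\begin{verbatim}
# Placeholders: lunar_synthetic.yaml / lunar_real.yaml describe the datasets.
from ultralytics import YOLO

model = YOLO("yolov8m-seg.pt")                        # COCO-pretrained weights
model.train(data="lunar_synthetic.yaml", epochs=100)  # train on simulator data
model.train(data="lunar_real.yaml", epochs=30)        # fine-tune on real images
metrics = model.val(data="lunar_real.yaml")           # AP on a real test split
\end{verbatim}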
This paper introduces a novel GPS-aided visual-wheel odometry (GPS-VWO) system for ground robots. The state estimation algorithm tightly fuses visual, wheel encoder, and GPS measurements within a Multi-State Constraint Kalman Filter (MSCKF) framework. To avoid accumulating calibration errors over time, the proposed algorithm estimates the extrinsic rotation parameter between the GPS global coordinate frame and the VWO reference frame online as part of the estimation process. The convergence of this extrinsic parameter is guaranteed by the observability analysis and verified using real-world visual and wheel encoder measurements as well as simulated GPS measurements. Moreover, a novel theoretical finding is presented: the variance of an unobservable state can converge to zero for a specific class of Kalman filter systems. We evaluate the proposed system extensively in large-scale urban driving scenarios. The results demonstrate that the fusion of GPS and VWO achieves better accuracy than GPS alone. The comparison between extrinsic parameter calibration and non-calibration shows a significant improvement in localization accuracy thanks to the online calibration.
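Schematically (our notation, not the paper's exact state definition), the online calibration amounts to appending the extrinsic rotation ${}^{G}\mathbf{q}_{V}$ between the GPS global frame $G$ and the VWO reference frame $V$ to the MSCKF state, alongside the current robot pose and the sliding window of cloned camera poses,
\begin{equation}
  \mathbf{x} = \big(\, {}^{V}\mathbf{q}_{R},\; {}^{V}\mathbf{p}_{R},\; {}^{G}\mathbf{q}_{V},\; {}^{V}\mathbf{q}_{C_1}, {}^{V}\mathbf{p}_{C_1}, \dots, {}^{V}\mathbf{q}_{C_m}, {}^{V}\mathbf{p}_{C_m} \,\big),
\end{equation}
so that every GPS update also refines ${}^{G}\mathbf{q}_{V}$ and its covariance.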
In robotic manipulation, end-effector compliance is an essential precondition for performing contact-rich tasks, such as machining, assembly, and human-robot interaction. Most robotic arms are position-controlled stiff systems at the hardware level, so adding compliance becomes essential. Compliance in such systems has recently been achieved using forward dynamics compliance control (FDCC), which, owing to its virtual forward dynamics model, can be implemented on both position- and velocity-controlled robots. This paper evaluates the choice of control interface (and hence the control domain), which, although often considered trivial, is essential due to differences in their characteristics. In some cases, the choice is restricted by the available hardware interface. However, given the option to choose, the velocity-based control interface makes the better candidate for compliance control because of smoother compliant behaviour, reduced interaction forces, and reduced work done. To substantiate these points, FDCC is evaluated in this paper on the UR10e six-DOF manipulator in both velocity and position control modes. The evaluation is based on force-control benchmarking metrics using 3D-printed artefacts. Real experiments favour the choice of velocity control over position control.
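The core idea can be sketched as follows (a simplified, hedged reconstruction, not the exact FDCC controller): the wrench error drives a virtual forward-dynamics model of the arm, and the integrated virtual joint velocities become the command, which is exactly where the velocity interface fits naturally:
\begin{verbatim}
# Simplified sketch of one FDCC control cycle (velocity-interface variant).
import numpy as np

def fdcc_step(f_target, f_measured, J, M_virtual, q_dot, dt, damping=5.0):
    """f_*: 6D wrenches at the end effector; J: 6xN Jacobian;
    M_virtual: NxN virtual inertia; returns the joint-velocity command."""
    f_err = f_target - f_measured             # wrench the controller must null
    tau = J.T @ f_err - damping * q_dot       # virtual joint torques + damping
    q_ddot = np.linalg.solve(M_virtual, tau)  # virtual forward dynamics
    return q_dot + q_ddot * dt                # integrate -> velocity command
\end{verbatim}
A position interface would require one further integration of these velocities to joint positions, which is consistent with the smoother behaviour reported for the velocity interface.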
The paper presents a novel Hardware-In-the-Loop (HIL) emulation framework for on-orbit interactions using on-ground robotic manipulators. It combines a Virtual Forward Dynamics Model (VFDM) for Cartesian motion control of robotic manipulators with an Orbital Dynamics Simulator (ODS) based on the Clohessy-Wiltshire (CW) model. The VFDM-based Inverse Kinematics (IK) solver is known to have better motion tracking, path accuracy, and solver convergence than traditional IK solvers; it therefore provides stable Cartesian motion for manipulator-based HIL on-orbit emulations. The framework is tested on a ROS-based robotics testbed to emulate two scenarios: free-floating satellite motion and free-floating interaction (collision). Mock-ups of two satellites are mounted at the robots' end-effectors. Forces acting on the mock-ups are measured through a built-in F/T sensor on each robotic arm. During the tests, the relative motion of the mock-ups is expressed with respect to a moving observer rotating at a fixed angular velocity in a circular orbit, rather than their motion in the inertial frame. The ODS incorporates the force and torque values on the fly and delivers the corresponding satellite motions to the VFDM as online trajectories. Results are comparable to those of other free-floating HIL emulators. Fidelity between the simulated motion and the robot-mounted mock-up motion is confirmed.
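For reference, the CW model underlying the ODS propagates the relative motion in the rotating frame of a circular reference orbit with mean motion $n$; with the measured forces $\mathbf{F}$ (per mock-up mass $m$) injected as inputs, the standard equations read
\begin{align}
  \ddot{x} - 2n\dot{y} - 3n^{2}x &= F_x/m, \\
  \ddot{y} + 2n\dot{x} &= F_y/m, \\
  \ddot{z} + n^{2}z &= F_z/m,
\end{align}
with $x$ radial, $y$ along-track, and $z$ cross-track.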
Extraterrestrial rovers with a general-purpose robotic arm have many potential applications in lunar and planetary exploration. Introducing autonomy into such systems is desirable for increasing the time that rovers can spend gathering scientific data and collecting samples. This work investigates the applicability of deep reinforcement learning for vision-based robotic grasping of objects on the Moon. A novel simulation environment with procedurally generated datasets is created to train agents under challenging conditions in unstructured scenes with uneven terrain and harsh illumination. A model-free off-policy actor-critic algorithm is then employed for end-to-end learning of a policy that directly maps compact octree observations to continuous actions in Cartesian space. Experimental evaluation indicates that 3D data representations enable more effective learning of manipulation skills when compared to traditionally used image-based observations. Domain randomization improves the generalization of learned policies to novel scenes with previously unseen objects and different illumination conditions. Finally, we demonstrate zero-shot sim-to-real transfer by evaluating trained agents on a real robot in a Moon-analogue facility.
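As an illustrative stand-in for the described pipeline (our own minimal network, not the paper's architecture; a dense voxel grid replaces the octree here for brevity), a 3D encoder compresses the volumetric observation into features that actor and critic heads map to continuous Cartesian actions and a value estimate:
\begin{verbatim}
# Minimal stand-in network; illustrative only, not the paper's model.
import torch
import torch.nn as nn

class GraspActorCritic(nn.Module):
    def __init__(self, feat_dim=128, action_dim=4):   # (dx, dy, dz, gripper)
        super().__init__()
        self.encoder = nn.Sequential(                 # placeholder for an
            nn.Conv3d(1, 16, 3, stride=2), nn.ReLU(), # octree-based encoder
            nn.Conv3d(16, 32, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU())
        self.actor = nn.Linear(feat_dim, action_dim)  # continuous actions
        self.critic = nn.Linear(feat_dim, 1)          # state-value estimate

    def forward(self, voxels):
        h = self.encoder(voxels)
        return torch.tanh(self.actor(h)), self.critic(h)

net = GraspActorCritic()
action, value = net(torch.randn(1, 1, 32, 32, 32))    # dense stand-in grid
\end{verbatim}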