In developing mobile robots for exploration of planetary surfaces, it is crucial to evaluate the robot's performance under conditions that reproduce the harsh environment in which the robot will actually be deployed. Repeatable experiments in a controlled testing environment that can reproduce various terrain and gravitational conditions are essential. This paper presents the development of a minimal, space-saving indoor testbed that can simulate steep slopes, uneven terrain, and reduced gravity, employing a three-dimensional target-tracking mechanism (active xy and passive z) with a counterweight.
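The passive-z counterweight idea can be illustrated with a first-order sizing calculation. This is a minimal sketch assuming an ideal frictionless pulley and massless cable (an Atwood-machine reduction); the function name and parameter values are illustrative, not the testbed's actual design numbers.

```python
# Sketch: sizing the counterweight that offloads part of a robot's weight
# so its net downward acceleration matches a target gravity. Illustrative
# model: ideal frictionless pulley, massless cable (Atwood machine).

G_EARTH = 9.81  # m/s^2

def counterweight_mass(robot_mass: float, g_target: float) -> float:
    """Counterweight mass m_c such that the robot's net vertical
    acceleration equals g_target.

    Force balance on the coupled system (robot descending, counterweight
    rising over an ideal pulley):
        (m_r - m_c) * g = (m_r + m_c) * a,  with a = g_target
    Solving for m_c:
        m_c = m_r * (g - g_target) / (g + g_target)
    """
    return robot_mass * (G_EARTH - g_target) / (G_EARTH + g_target)

# Example: emulate lunar gravity (1.625 m/s^2) for a 20 kg rover.
m_c = counterweight_mass(20.0, 1.625)  # about 14.3 kg
```

With `g_target = G_EARTH` the required counterweight is zero, as expected; lowering the target gravity toward zero drives `m_c` toward the robot's own mass.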
* 2 pages, 3 figures, paper submitted to the SII 2024 (IEEE/SICE
International Symposium on System Integration) (Updated references formatting)
Developing algorithms for extraterrestrial robotic exploration has always been challenging. Along with the complexity of these environments, one of the main issues remains the evaluation of said algorithms. With the renewed interest in lunar exploration, there is also a demand for quality simulators that will enable the development of lunar robots. In this paper, we propose Omniverse Lunar Robotic-Sim (OmniLRS), a photorealistic lunar simulator based on Isaac Sim, Nvidia's robotic simulator. The simulator provides fast procedural environment generation and multi-robot capabilities, along with a synthetic-data pipeline for machine-learning applications. It comes with ROS1 and ROS2 bindings to control not only the robots but also the environments. We also perform sim-to-real rock instance segmentation to show the effectiveness of our simulator for image-based perception. Trained on our synthetic data, a YOLOv8 model achieves performance close to a model trained on real-world data, with a 5% performance gap. When fine-tuned with real data, the model achieves 14% higher average precision than the model trained on real-world data alone, demonstrating our simulator's photorealism. The code is fully open-source and comes with demonstrations: https://github.com/AntoineRichard/LunarSim.
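Detection quality in such sim-to-real comparisons is typically scored by matching predicted boxes to ground truth at an IoU threshold. Below is a minimal pure-Python sketch of that greedy matching step; the box format, threshold, and function names are assumptions for illustration, not the paper's evaluation code.

```python
# Sketch: greedy IoU matching used to score rock detections against
# ground truth at an IoU threshold (illustrative helper, not the
# paper's exact evaluation pipeline).

def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def precision_at_iou(preds, truths, thr=0.5):
    """Fraction of predictions matching a not-yet-used ground-truth box."""
    unused = list(truths)
    tp = 0
    for p in preds:
        best = max(unused, key=lambda t: iou(p, t), default=None)
        if best is not None and iou(p, best) >= thr:
            tp += 1
            unused.remove(best)
    return tp / len(preds) if preds else 0.0

# Two predictions, one overlapping the single ground-truth rock.
preds = [(0, 0, 10, 10), (50, 50, 60, 60)]
truths = [(1, 1, 11, 11)]
# precision_at_iou(preds, truths) -> 0.5
```

Averaging such precision values over confidence thresholds and IoU levels is what yields the average-precision figures quoted in the abstract.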
The integration of vision-based frameworks for lunar robot applications faces numerous challenges, such as terrain configuration and extreme lighting conditions. This paper presents a generic task pipeline using object detection, instance segmentation, and grasp detection that can serve various applications by using the outputs of these vision-based systems in different ways. We achieve a rock-stacking task on a non-flat surface in difficult lighting conditions with a success rate of 92%. Finally, we present an experiment assembling 3D-printed robot components to initiate more complex tasks in the future.
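The pipeline idea of chaining vision stages into a grasp target can be sketched with plain functions. This is a hypothetical reduction: the centroid heuristic and the function names are assumptions for illustration, not the paper's grasp-detection method.

```python
# Sketch: chaining a segmentation result into a grasp point, with the
# grasp point taken as the mask centroid (illustrative heuristic only;
# the paper uses a dedicated grasp-detection stage).

def mask_centroid(mask):
    """Centroid (row, col) of a binary mask given as a 2D list of 0/1."""
    pts = [(r, c) for r, row in enumerate(mask)
                  for c, v in enumerate(row) if v]
    n = len(pts)
    return (sum(r for r, _ in pts) / n, sum(c for _, c in pts) / n)

def grasp_from_mask(mask):
    """One pipeline stage: segmentation mask -> grasp point."""
    return mask_centroid(mask)

# A 3x3 blob of "rock" pixels inside a 5x5 image.
mask = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]
# grasp_from_mask(mask) -> (2.0, 2.0), the blob center
```

Swapping the final stage (stacking vs. assembly) while keeping the upstream detection and segmentation stages is what makes the pipeline generic.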
The exploration of the lunar poles and the collection of samples from the Martian surface are characterized by shorter time windows, demanding increased autonomy and speed. Autonomous mobile robots must intrinsically cope with a wider range of disturbances. Faster off-road navigation has been explored for terrestrial applications, but the combined effects of increased speed and reduced gravity are yet to be fully studied. In this paper, we design and demonstrate a novel, fully passive suspension for wheeled planetary robots that couples a high-range passive rocker with elastic in-wheel coil-over shock absorbers. The design was initially conceived and verified in a reduced-gravity (1.625 m/s$^2$) simulated environment, where three different passive suspension configurations were evaluated against a set of challenges--climbing steep slopes and surmounting unexpected obstacles like rocks and outcrops--and later prototyped and validated in a series of field tests. The proposed mechanically-hybrid suspension more effectively mitigates the negative effects (high-frequency/high-amplitude vibrations and impact loads) of faster locomotion (>1 m/s) over unstructured terrain under varied gravity fields. This lowers the demands on navigation and control systems, improving the efficiency of exploration missions in the years to come.
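A coil-over shock absorber is commonly reasoned about as a mass-spring-damper. The sketch below computes the standard natural frequency and damping ratio for a quarter-rover model; all parameter values are illustrative assumptions, not the prototype's specifications.

```python
# Sketch: first-order sizing of an in-wheel coil-over shock as a
# mass-spring-damper (quarter-rover model). Values are illustrative.
import math

def suspension_metrics(m, k, c):
    """Natural frequency (Hz) and damping ratio for a quarter-rover
    mass m (kg) on a spring k (N/m) with damper c (N*s/m)."""
    wn = math.sqrt(k / m)                 # natural frequency, rad/s
    zeta = c / (2.0 * math.sqrt(k * m))   # damping ratio, dimensionless
    return wn / (2.0 * math.pi), zeta

# 10 kg per wheel, soft spring for compliant terrain contact.
f_n, zeta = suspension_metrics(m=10.0, k=4000.0, c=250.0)
# zeta < 1: underdamped, so impact loads are absorbed elastically
# rather than transmitted rigidly through the rocker.
```

Under reduced gravity the static deflection shrinks (it scales with g), while the frequency and damping ratio above are gravity-independent, which is one reason the same hardware can be tuned for both simulated lunar gravity and terrestrial field tests.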
Multi-limbed climbing robots are a promising means of achieving our exploration goals on asteroids. We propose a mobility strategy to improve the locomotion safety of climbing robots in such harsh environments, which feature extremely low gravity and highly uneven terrain. Our method plans the gait by decoupling the base and limb movements and adjusting the main body pose to avoid ground collisions. The proposed approach includes a motion planner that reduces the reactions generated by the robot's movement by optimizing the swing trajectory and distributing the momentum. Lower motion reactions decrease the pulling forces on the grippers, avoiding slippage and flotation of the robot. Dynamic simulations and experiments demonstrate that the proposed method improves the robot's mobility on the surface of asteroids.
* Paper accepted for presentation at the CLAWAR 2023 (26th
International Conference on Climbing and Walking Robots and the Support
Technologies for Mobile Machines) (Updated references formatting)
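The body-pose adjustment above can be reduced to its simplest form: a height check that keeps a clearance margin over the terrain beneath the body. This is an illustrative 1-D reduction (the planner adjusts the full body pose, not only height), and the clearance value is an assumption.

```python
# Sketch: the ground-collision-avoidance idea from the gait planner,
# reduced to a height adjustment over sampled terrain heights
# (illustrative 1-D reduction of the full pose adjustment).

def adjusted_base_height(terrain_heights, clearance=0.05):
    """Minimum base height (m) keeping `clearance` m above the highest
    terrain point under the body."""
    return max(terrain_heights) + clearance

# Uneven terrain samples (m) under the robot's body.
h = adjusted_base_height([0.02, 0.11, 0.07], clearance=0.05)  # 0.16 m
```

Decoupling this base adjustment from the limb trajectories lets each be planned and re-planned independently as new terrain samples arrive.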
An emerging paradigm is being embraced in the conceptualization of future planetary exploration missions. Ambitious objectives and increasingly demanding mission constraints stress the importance of faster surface mobility. Driving speeds approaching or surpassing 1 m/s have rarely been used, and their effect on performance is today unclear. This study presents experimental evidence and preliminary observations on the impact that increasing velocity has on the tractive performance of planetary rovers. Single-wheel driving tests were conducted using two different metallic, grousered wheels--one rigid and one flexible--over two different soils, olivine sand and CaCO3-based silty soil. Experiments were conducted at speeds between 0.01 and 1 m/s across a wide range of slip ratios (5-90%). Three performance metrics were evaluated: drawbar pull coefficient, wheel sinkage, and tractive efficiency. Results showed similar data trends among all the cases investigated. Drawbar pull and tractive efficiency decreased considerably for speeds beyond 0.2 m/s. Wheel sinkage, unlike what published evidence suggested, increased with increasing velocity. The flexible wheel performed best at 1 m/s, exhibiting 2 times higher drawbar pull and efficiency with 18% lower sinkage under low-slip conditions. Although similar data trends were obtained, a different wheel-soil interaction behavior was observed when driving over the different soils. Overall, despite the performance reduction experienced at higher velocities, a speed in the range of 0.2-0.3 m/s would enable 5-10 times faster traverses, compared to current rovers' driving capability, while only diminishing drawbar pull and efficiency by 7%. The measurements collected and the analysis presented here lay the groundwork for initial stages in the development of new locomotion subsystems for planetary surface exploration. At the same time...
* 15th International Society for Terrain Vehicle Systems (ISTVS)
Conference, Prague, Czech Republic, 2019
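The two quantities swept in the single-wheel tests have standard terramechanics definitions, written out below. The definitions are conventional; the numeric values in the example are made-up illustrative numbers, not measurements from the rig.

```python
# Sketch: standard definitions of slip ratio and drawbar pull
# coefficient for a driven wheel (example values are illustrative).

def slip_ratio(v, omega, r):
    """Slip ratio s = 1 - v / (omega * r) for a driven wheel
    (v: forward speed m/s, omega: angular speed rad/s, r: radius m)."""
    return 1.0 - v / (omega * r)

def drawbar_pull_coefficient(f_dp, w):
    """Drawbar pull force F_DP (N) normalized by vertical load W (N)."""
    return f_dp / w

# Wheel of radius 0.1 m spinning at 4 rad/s while advancing at 0.2 m/s:
s = slip_ratio(v=0.2, omega=4.0, r=0.1)        # 50% slip
mu = drawbar_pull_coefficient(f_dp=30.0, w=100.0)
```

Tractive efficiency then relates drawbar power to input power, which is why it falls together with drawbar pull as slip-related losses grow at higher speeds.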
Robotic mobility in microgravity is necessary to expand human utilization and exploration of outer space. Bio-inspired multi-legged robots are a possible solution for safe and precise locomotion. However, dynamic motion of a robot in microgravity can lead to failures due to gripper detachment caused by excessive motion reactions. We propose a novel Reaction-Aware Motion Planning (RAMP) method to improve locomotion safety in microgravity, decreasing the risk of losing contact with the terrain surface by reducing the robot's momentum change. RAMP minimizes the swing momentum with a Low-Reaction Swing Trajectory (LRST) while distributing this momentum to the whole body, ensuring zero velocity for the supporting grippers and minimizing motion reactions. We verify the proposed approach with dynamic simulations, which indicate RAMP's capability to generate safe motion without detachment of the supporting grippers, resulting in the robot reaching its specified location. We further validate RAMP in experiments with an air-floating system, demonstrating a significant reduction in reaction forces and improved mobility in microgravity.
* Submitted version of paper accepted for presentation at the 2023 IEEE
International Conference on Robotics and Automation (ICRA)
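The momentum-distribution idea behind RAMP can be shown in one dimension: move the base so that the body's momentum cancels the swing limb's, leaving zero net momentum change and hence no pull on the supporting grippers. This is an illustrative 1-D reduction under point-mass assumptions, not the full planner.

```python
# Sketch: 1-D momentum distribution. The base counter-moves so total
# momentum stays zero while a limb swings (point-mass reduction of the
# whole-body momentum distribution in RAMP).

def base_counter_velocity(m_limb, v_limb, m_base):
    """Base velocity zeroing total momentum: m_b*v_b + m_l*v_l = 0."""
    return -m_limb * v_limb / m_base

def total_momentum(m_limb, v_limb, m_base, v_base):
    """Net linear momentum of limb plus base (kg*m/s)."""
    return m_limb * v_limb + m_base * v_base

# A 1 kg limb swinging at 0.3 m/s on an 8 kg base:
v_b = base_counter_velocity(1.0, 0.3, 8.0)  # small backward drift
```

Because the net momentum change is zero, no reaction impulse has to be transmitted through the supporting grippers, which is the mechanism by which detachment risk is reduced.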
The visuomotor system of any animal is critical for its survival, and the development of a complex one within humans is a large factor in our success as a species on Earth. This system is an essential part of our ability to adapt to our environment. We use this system continuously throughout the day, when picking something up or walking around while avoiding bumping into objects. Equipping robots with such capabilities will help produce more intelligent locomotion, with the ability to more easily understand their surroundings and move safely. In particular, such capabilities are desirable for traversing the lunar surface, which is full of hazardous obstacles, such as rocks, that need to be identified and avoided in real time. This paper demonstrates the development of a visuomotor system within a robot for navigation and obstacle avoidance, with complex, rock-shaped objects representing hazards. Our approach uses deep reinforcement learning with only image data. We compare the results from several neural network architectures and a preprocessing methodology that includes producing a segmented image and downsampling.
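The preprocessing described above can be sketched in pure Python: reduce the image to a binary "rock / not rock" mask, then downsample by striding. The learned segmentation network is replaced here by a simple intensity threshold, which is purely an illustrative assumption.

```python
# Sketch: segmentation + downsampling preprocessing. A fixed intensity
# threshold stands in for the learned segmentation (assumption for
# illustration only).

def segment(image, threshold=128):
    """Binary mask: 1 where pixel intensity exceeds the threshold."""
    return [[1 if px > threshold else 0 for px in row] for row in image]

def downsample(image, stride=2):
    """Keep every `stride`-th pixel in both dimensions."""
    return [row[::stride] for row in image[::stride]]

# 4x4 grayscale patch with a bright "rock" in the lower-right corner.
img = [[10, 10, 10, 10],
       [10, 10, 10, 10],
       [10, 10, 200, 200],
       [10, 10, 200, 200]]
small = downsample(segment(img))  # -> [[0, 0], [0, 1]]
```

Feeding the policy a small binary mask instead of raw pixels shrinks the observation space, which is the usual motivation for this kind of preprocessing in image-based reinforcement learning.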
Reinforcement learning (RL) is a promising field for enhancing robotic autonomy and decision-making capabilities in space robotics, which is challenging with traditional techniques due to stochasticity and uncertainty within the environment. RL can be used to enable lunar cave exploration with infrequent human feedback, faster and safer lunar surface locomotion, or the coordination and collaboration of multi-robot systems. However, many hurdles make RL and machine-learning research challenging for space robotic applications, particularly due to insufficient resources in traditional robotics simulators like CoppeliaSim. Our solution is an open-source modular platform called Reinforcement Learning for Simulation-based Training of Robots, or RL STaR, that helps to simplify and accelerate the application of RL to space robotics research. This paper introduces the RL STaR platform and shows, through a demonstration, how researchers can use it.