Abstract: Reliable obstacle avoidance in industrial settings demands 3D scene understanding, but widely used 2D LiDAR sensors perceive only a single horizontal slice of the environment, missing critical obstacles above or below the scan plane. We present a teacher-student framework for vision-based mobile robot navigation that eliminates the need for LiDAR sensors. A teacher policy, trained via Proximal Policy Optimization (PPO) in NVIDIA Isaac Lab, leverages privileged 2D LiDAR observations that account for the full robot footprint to learn robust navigation. The learned behavior is distilled into a student policy that relies solely on monocular depth maps predicted from four RGB cameras by a fine-tuned Depth Anything V2 model. The complete inference pipeline, comprising monocular depth estimation (MDE), policy execution, and motor control, runs entirely onboard an NVIDIA Jetson Orin AGX mounted on a DJI RoboMaster platform, requiring no external computation. In simulation, the student achieves success rates of 82-96.5%, consistently outperforming the standard 2D LiDAR teacher (50-89%). In real-world experiments, the MDE-based student outperforms the 2D LiDAR teacher when navigating around obstacles with complex 3D geometries, such as overhanging structures and low-profile objects, that fall outside the single scan plane of a 2D LiDAR.
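The core of the distillation step described above is behaviour cloning: the student is trained to reproduce the teacher's actions on the same states while observing only its own (non-privileged) inputs. The following is a minimal numerical sketch of that idea, not the paper's actual implementation: the linear policies, synthetic features, dimensions, and gains are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in: the "teacher" maps privileged (LiDAR-like)
# features to 2-D velocity commands via a fixed linear policy.
W_teacher = np.array([[0.5, -0.2, 0.1],
                      [0.0,  0.3, -0.4]])

# Paired observations of the same states: the student's (depth-derived)
# features are a noisy view of the teacher's privileged features.
lidar_feat = rng.normal(size=(256, 3))
depth_feat = lidar_feat + 0.05 * rng.normal(size=(256, 3))
targets = lidar_feat @ W_teacher.T          # teacher actions, shape (256, 2)

# Distillation by behaviour cloning: gradient descent on the MSE
# between student and teacher actions (linear student for simplicity).
W_student = np.zeros((2, 3))
lr = 0.1
for _ in range(500):
    pred = depth_feat @ W_student.T         # student actions, (256, 2)
    grad = 2.0 * (pred - targets).T @ depth_feat / len(depth_feat)
    W_student -= lr * grad

mse = np.mean((depth_feat @ W_student.T - targets) ** 2)
```

After training, `W_student` closely matches the teacher's mapping despite never seeing the privileged features; the residual `mse` is set by the observation noise. The real system replaces the linear maps with a PPO-trained teacher network and a depth-conditioned student network, but the cloning objective has the same shape.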
Abstract: In this paper, the collision avoidance problem for non-holonomic robots moving at constant linear speeds in the 2-D plane is considered. The maneuvers to avoid collisions are designed using dynamic vortex potential fields (PFs) and their negative gradients; this formulation leads to reciprocal behaviour between the robots, denoted as cooperative. The repulsive field is selected as a function of a robot's velocity and position relative to another, and introducing vorticity into its definition guarantees the absence of local minima. Such a repulsive field is activated by a robot only when it is on a collision path with other mobile robots or stationary obstacles. By analysing the kinematics-based engagement dynamics in polar coordinates, it is shown that a cooperative robot is able to avoid collisions with non-cooperating robots, such as stationary and constant-velocity robots, as well as those actively seeking to collide with it. Conditions on the PF parameters that ensure collision avoidance in all cases are identified. Experimental results acquired using a mobile robot platform support the theoretical contributions.
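The mechanism described above can be sketched numerically: a radial term from the negative gradient of a distance-based repulsive potential, plus a tangential (vortex) term that rotates the field around the obstacle so a purely head-on encounter has no local minimum. This is only an illustrative sketch; the potential shape (a Khatib-style inverse-distance repulsion), the gains, the activation radius, and the function name are assumptions, not the paper's actual field definition.

```python
import numpy as np

def vortex_repulsive_velocity(p_rel, v_rel, k_rep=1.0, k_vort=1.0, d_act=2.0):
    """Illustrative vortex-PF avoidance command (all parameters assumed).

    p_rel : 2-D position of the obstacle relative to the robot
    v_rel : 2-D velocity of the obstacle relative to the robot
    Returns a 2-D avoidance velocity: negative gradient of a repulsive
    potential plus a 90-degree-rotated (vortex) component.
    """
    d = np.linalg.norm(p_rel)
    if d == 0.0 or d >= d_act:
        return np.zeros(2)  # field active only inside the activation radius
    # Radial term: -grad of U = 0.5*k_rep*(1/d - 1/d_act)^2, pushing the
    # robot away from the obstacle (direction -p_rel/d).
    radial = k_rep * (1.0 / d - 1.0 / d_act) * (1.0 / d**2) * (-p_rel / d)
    # Vortex term: rotate the radial term by 90 degrees; the rotation
    # sense is taken from the sign of the cross product p_rel x v_rel,
    # so the robot circulates consistently with how the encounter evolves.
    side = np.sign(p_rel[0] * v_rel[1] - p_rel[1] * v_rel[0]) or 1.0
    tangential = side * k_vort * np.array([-radial[1], radial[0]])
    return radial + tangential
```

Because the tangential term is everywhere perpendicular to the radial one, the combined field never vanishes inside the activation radius except at the obstacle itself, which is the intuition behind the no-local-minima claim; the paper establishes this formally for its own field definition.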