Learned Visual Navigation for Under-Canopy Agricultural Robots

Jul 06, 2021
Arun Narenthiran Sivakumar, Sahil Modi, Mateus Valverde Gasparino, Che Ellis, Andres Eduardo Baquero Velasquez, Girish Chowdhary, Saurabh Gupta

We describe a system for visually guided autonomous navigation of under-canopy farm robots. Low-cost under-canopy robots can drive between crop rows under the plant canopy and accomplish tasks that are infeasible for over-the-canopy drones or larger agricultural equipment. However, autonomously navigating them under the canopy presents a number of challenges: unreliable GPS and LiDAR, high cost of sensing, challenging farm terrain, clutter due to leaves and weeds, and large variability in appearance over the season and across crop types. We address these challenges by building a modular system that leverages machine learning for robust and generalizable perception from monocular RGB images from low-cost cameras, and model predictive control for accurate control in challenging terrain. Our system, CropFollow, is able to autonomously drive 485 meters per intervention on average, outperforming a state-of-the-art LiDAR based system (286 meters per intervention) in extensive field testing spanning over 25 km.
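The abstract outlines a modular pipeline: a learned perception module estimates the robot's pose relative to the crop row from a single RGB image, and a model predictive controller converts those estimates into steering commands. The snippet below is a minimal sketch of that idea, not the authors' implementation: it assumes the perception network outputs a heading error and lateral offset (here stubbed with dummy values), and uses a simple sampling-based MPC over a unicycle model; all function names and parameters are hypothetical.

```python
import numpy as np

def perceive(rgb_frame):
    """Placeholder for the learned perception network.

    In CropFollow this role is played by a model trained on monocular RGB
    images; here we just return dummy estimates of heading error (radians)
    and lateral offset from the row center (meters).
    """
    return 0.1, -0.05

def rollout_cost(heading, offset, omega, v=0.5, dt=0.1, horizon=10):
    """Predicted tracking cost of holding angular velocity `omega`
    over a short horizon, using a unicycle motion model."""
    cost = 0.0
    for _ in range(horizon):
        heading += omega * dt
        offset += v * np.sin(heading) * dt
        cost += heading ** 2 + 4.0 * offset ** 2  # penalize misalignment
    return cost

def mpc_steer(heading, offset, candidates=np.linspace(-0.5, 0.5, 21)):
    """Pick the candidate angular velocity with the lowest predicted cost."""
    costs = [rollout_cost(heading, offset, w) for w in candidates]
    return candidates[int(np.argmin(costs))]

if __name__ == "__main__":
    frame = None  # stands in for a camera image
    heading_err, lateral_off = perceive(frame)
    omega = mpc_steer(heading_err, lateral_off)
    print(f"commanded angular velocity: {omega:.3f} rad/s")
```

In this sketch the modularity is the point: the controller only consumes the heading and offset estimates, so the perception model can be retrained for new crops or seasons without touching the control stack.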

* RSS 2021. Project website with data and videos: https://ansivakumar.github.io/learned-visual-navigation/ 