Chris Lehnert

Reactive Base Control for On-The-Move Mobile Manipulation in Dynamic Environments

Sep 17, 2023
Ben Burgess-Limerick, Jesse Haviland, Chris Lehnert, Peter Corke

We present a reactive base control method that enables high-performance mobile manipulation on-the-move in environments with static and dynamic obstacles. Performing manipulation tasks while the mobile base remains in motion can significantly decrease the time required to perform multi-step tasks, as well as improve the gracefulness of the robot's motion. Existing approaches to manipulation on-the-move either ignore the obstacle avoidance problem or rely on the execution of planned trajectories, which is not suitable in environments with dynamic objects and obstacles. The presented controller addresses both of these deficiencies and demonstrates robust performance of pick-and-place tasks in dynamic environments. The performance is evaluated on several simulated and real-world tasks. On a real-world task with static obstacles, we outperform an existing method by 48% in terms of total task time. Further, we present real-world examples of our robot performing manipulation tasks on-the-move while avoiding a second autonomous robot in the workspace. See https://benburgesslimerick.github.io/MotM-BaseControl for supplementary materials.

An Architecture for Reactive Mobile Manipulation On-The-Move

Dec 14, 2022
Ben Burgess-Limerick, Chris Lehnert, Jurgen Leitner, Peter Corke

We present a generalised architecture for reactive mobile manipulation while a robot's base is in motion toward the next objective in a high-level task. By performing tasks on-the-move, overall cycle time is reduced compared to methods where the base pauses during manipulation. Reactive control of the manipulator enables grasping objects with unpredictable motion while improving robustness against perception errors, environmental disturbances, and inaccurate robot control compared to open-loop, trajectory-based planning approaches. We present an example implementation of the architecture, investigate its performance on a series of pick-and-place tasks with both static and dynamic objects, and compare the performance to baseline methods. Our method demonstrated a real-world success rate of over 99%, failing in only one of 120 attempts with a physical robot system. The architecture is further demonstrated on other mobile manipulator platforms in simulation. Our approach reduces task time by up to 48%, while also improving reliability, gracefulness, and predictability compared to existing architectures for mobile manipulation. See https://benburgesslimerick.github.io/ManipulationOnTheMove for supplementary materials.

Developing cooperative policies for multi-stage reinforcement learning tasks

May 11, 2022
Jordan Erskine, Chris Lehnert

Many hierarchical reinforcement learning (HRL) algorithms utilise a series of independent skills as a basis to solve tasks at a higher level of reasoning. These algorithms do not consider the value of using cooperative rather than independent skills. This paper proposes the Cooperative Consecutive Policies (CCP) method, which enables consecutive agents to cooperatively solve long-horizon, multi-stage tasks. This is achieved by modifying the policy of each agent to maximise both the current and the next agent's critic. Cooperatively maximising critics allows each agent to take actions that are beneficial for its own task as well as subsequent tasks. Using this method in a multi-room maze domain and a peg-in-hole manipulation domain, the cooperative policies outperformed a set of naive policies, a single agent trained across the entire domain, and another sequential HRL algorithm.
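
A minimal sketch of the cooperative update described above, assuming actor-critic agents built with PyTorch modules: agent i's actor is trained to maximise a weighted sum of its own critic and the next agent's critic. The interfaces, the blend weight beta, and all names are illustrative assumptions, not the authors' code.

def cooperative_actor_loss(actor_i, critic_i, critic_next, states, beta=1.0):
    """Actor loss for agent i that also values the next agent's critic.
    actor_i, critic_i, critic_next: callable PyTorch modules (assumed interfaces)."""
    actions = actor_i(states)               # actions proposed by agent i's policy
    q_own = critic_i(states, actions)       # value under agent i's own critic
    q_next = critic_next(states, actions)   # value under the next agent's critic
    # Maximise both critics by minimising the negated, weighted sum.
    return -(q_own + beta * q_next).mean()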

* This paper supersedes the rejected paper "Developing cooperative policies for multi-stage tasks". arXiv admin note: substantial text overlap with arXiv:2007.00203 

Eyes on the Prize: Improved Perception for Robust Dynamic Grasping

Apr 29, 2022
Ben Burgess-Limerick, Chris Lehnert, Jurgen Leitner, Peter Corke

This paper is concerned with perception challenges for robust grasping in the presence of clutter and unpredictable relative motion between robot and object. Traditional perception systems developed for static grasping are unable to provide feedback during the final phase of a grasp due to sensor minimum range, occlusion, and a limited field of view. A multi-camera eye-in-hand perception system is presented that has advantages over commonly used camera configurations. We quantitatively evaluate the performance on a real robot with an image-based visual servoing grasp controller and show a significantly improved success rate on a dynamic grasping task. A fully reproducible open-source testing system is described to encourage benchmarking of dynamic grasping system performance.

* Dynamic grasping benchmark available: https://github.com/BenBurgessLimerick/dynamic_grasping_benchmark 

Combining Local and Global Viewpoint Planning for Fruit Coverage

Aug 18, 2021
Tobias Zaenker, Chris Lehnert, Chris McCool, Maren Bennewitz

Obtaining 3D sensor data of complete plants or plant parts (e.g., the crop or fruit) is difficult due to their complex structure and a high degree of occlusion. However, especially for the estimation of the position and size of fruits, it is necessary to avoid occlusions as much as possible and acquire sensor information of the relevant parts. Global viewpoint planners exist that suggest a series of viewpoints to cover the regions of interest up to a certain degree, but they usually prioritize global coverage and do not emphasize the avoidance of local occlusions. On the other hand, there are approaches that aim at avoiding local occlusions, but they cannot be used in larger environments since they only reach a local maximum of coverage. In this paper, we therefore propose to combine a local, gradient-based method with global viewpoint planning to enable local occlusion avoidance while still being able to cover large areas. Our simulated experiments with a robotic arm equipped with a camera array as well as an RGB-D camera show that this combination leads to a significantly increased coverage of the regions of interest compared to just applying global coverage planning.
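
The combination described above can be sketched as a simple loop, under the assumption that the local planner supplies a gradient step in camera-pose space and the global planner supplies the next viewpoint once local coverage gains stall. All callables and thresholds below are hypothetical placeholders, not the authors' implementation.

def plan_viewpoints(local_gradient_step, global_next_viewpoint, coverage, move_to,
                    max_steps=100, min_gain=1e-3):
    """Alternate local, gradient-based occlusion avoidance with global coverage planning."""
    prev = coverage()
    for _ in range(max_steps):
        move_to(local_gradient_step())        # local, gradient-based occlusion avoidance
        gain = coverage() - prev
        prev = coverage()
        if gain < min_gain:                   # local maximum of coverage reached
            target = global_next_viewpoint()  # fall back to the global viewpoint planner
            if target is None:                # no promising viewpoints left to visit
                break
            move_to(target)
            prev = coverage()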

* 7 pages, 7 figures, accepted at ECMR 2021. arXiv admin note: text overlap with arXiv:2011.00275 

Developing cooperative policies for multi-stage tasks

Jul 01, 2020
Jordan Erskine, Chris Lehnert

This paper proposes the Cooperative Soft Actor Critic (CSAC) method of enabling consecutive reinforcement learning agents to cooperatively solve a long-horizon, multi-stage task. This is achieved by modifying the policy of each agent to maximise both the current and the next agent's critic. Cooperatively maximising each agent's critic allows each agent to take actions that are beneficial for its own task as well as subsequent tasks. Using this method in a multi-room maze domain, the cooperative policies were able to outperform both uncooperative policies and a single agent trained across the entire domain. CSAC achieved a success rate at least 20% higher than the uncooperative policies, and converged on a solution at least 4 times faster than the single agent.
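
In soft actor-critic terms, the policy modification described above can be written compactly as the actor objective below; the equal weighting of the two critics and the entropy coefficient \alpha are assumed for illustration rather than taken from the paper.

J(\pi_i) = \mathbb{E}_{s \sim \mathcal{D},\; a \sim \pi_i(\cdot \mid s)}\left[ Q_i(s, a) + Q_{i+1}(s, a) - \alpha \log \pi_i(a \mid s) \right]

where Q_i is agent i's critic and Q_{i+1} is the critic of the next agent in the task sequence.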

* This paper was submitted to RA-L on June 20th 2020 

Towards Active Robotic Vision in Agriculture: A Deep Learning Approach to Visual Servoing in Occluded and Unstructured Protected Cropping Environments

Aug 05, 2019
Paul Zapotezny-Anderson, Chris Lehnert

3D Move To See (3DMTS) is a multi-perspective visual servoing method for unstructured and occluded environments, such as those encountered in robotic crop harvesting. This paper presents a deep learning method, Deep-3DMTS, which creates a single-perspective approach to 3DMTS through the use of a Convolutional Neural Network (CNN). The novel method is developed and validated in simulation against the standard 3DMTS approach. The Deep-3DMTS approach is shown to have performance equivalent to the standard 3DMTS baseline in guiding the end effector of a robotic arm to improve the view of occluded fruit (sweet peppers): the final end effector position is within 11.4 mm of the baseline, and the fruit size in the image increases by a factor of 17.8 compared to 16.8 for the baseline (average).
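
A hedged sketch of the single-perspective idea described above: a small CNN (here in PyTorch) that maps one RGB image to a unit 3D direction for the end effector, which could be trained to imitate the gradient produced by the multi-camera 3DMTS system. The network architecture and output convention are illustrative assumptions only.

import torch.nn as nn

class DirectionCNN(nn.Module):
    """Regresses a move-to-see direction from a single RGB image (illustrative only)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 3)                 # predicted 3D direction

    def forward(self, rgb):                          # rgb: (B, 3, H, W)
        x = self.features(rgb).flatten(1)
        d = self.head(x)
        return d / d.norm(dim=1, keepdim=True)       # unit direction vector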

* 6 pages, 6 figures, 3 tables 

A Sweet Pepper Harvesting Robot for Protected Cropping Environments

Oct 29, 2018
Chris Lehnert, Chris McCool, Inkyu Sa, Tristan Perez

Using robots to harvest sweet peppers in protected cropping environments has remained unsolved despite considerable effort by the research community over several decades. In this paper, we present the robotic harvester, Harvey, designed for sweet peppers in protected cropping environments, which achieved a 76.5% success rate (within a modified scenario), improving upon our prior work (58%) and related sweet pepper harvesting work (33%). This improvement was primarily achieved through the introduction of a novel peduncle segmentation system using an efficient deep convolutional neural network, in conjunction with 3D post-filtering to detect the critical cutting location. We benchmark the peduncle segmentation against prior art, demonstrating a considerable improvement in performance with an F1 score of 0.564 compared to 0.302. The robotic harvester uses a perception pipeline to detect a target sweet pepper and an appropriate grasp and cutting pose, which determine the trajectory of a multi-modal harvesting tool that grasps the sweet pepper and cuts it from the plant. A novel decoupling mechanism enables the gripping and cutting operations to be performed independently. We perform an in-depth analysis of the full robotic harvesting system to highlight bottlenecks and failure points that future work could address.
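
The pipeline described above can be summarised in the following high-level sketch; every method is a hypothetical placeholder standing in for one subsystem of the harvester, not the authors' code.

def harvest_one_pepper(rgbd, perception, tool):
    """High-level harvesting sequence (placeholder interfaces, illustrative only)."""
    pepper = perception.detect_sweet_pepper(rgbd)               # find the target fruit
    peduncle_mask = perception.segment_peduncle(rgbd)           # deep CNN peduncle segmentation
    cut_point = perception.post_filter_3d(peduncle_mask, rgbd)  # 3D filtering for the cutting location
    grasp_pose, cut_pose = perception.plan_grasp_and_cut(pepper, cut_point)
    tool.move_to(grasp_pose)
    tool.grip()             # gripping and cutting are decoupled,
    tool.move_to(cut_pose)
    tool.cut()              # so each operation can be performed independently
    return tool.place_in_crate()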

3D Move to See: Multi-perspective visual servoing for improving object views with semantic segmentation

Sep 21, 2018
Chris Lehnert, Dorian Tsai, Anders Eriksson, Chris McCool

In this paper, we present a new approach to visual servoing for robotics, referred to as 3D Move to See (3DMTS), based on the principle of finding the next best view using a 3D camera array and a robotic manipulator to obtain multiple samples of the scene from different perspectives. The method uses semantic vision and an objective function applied to each perspective to sample a gradient representing the direction of the next best view. The method is demonstrated in simulation and on a real robotic platform containing a custom 3D camera array for the challenging scenario of robotic harvesting in a highly occluded and unstructured environment. On the real robotic platform, moving the end effector along the gradient of the objective function was shown to lead to a locally optimal view of the object of interest, even amongst occlusions. The 3DMTS method achieved a mean increase in target size of 29.3%, compared to 9.17% for a baseline method using a single RGB-D camera. The results demonstrate qualitatively and quantitatively that the 3DMTS method performed better in most scenarios, yielding three times the target size compared to the baseline method. The increased target size in the final view will improve the detection of key features of the object of interest for further manipulation, such as grasping and harvesting.
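
A minimal sketch of the gradient sampling step described above: evaluate an objective (e.g. visible target size from semantic segmentation) in each camera of the array, then fit a linear model of objective versus camera offset to estimate the direction of the next best view. The least-squares fit and the array geometry here are assumptions for illustration.

import numpy as np

def estimate_view_gradient(camera_offsets, objective_values):
    """camera_offsets: (N, 3) camera positions relative to the array centre.
    objective_values: (N,) objective score computed from each camera's image."""
    # Fit objective ~= offsets @ g + c by least squares; g approximates the gradient.
    A = np.hstack([camera_offsets, np.ones((len(camera_offsets), 1))])
    coeffs, *_ = np.linalg.lstsq(A, objective_values, rcond=None)
    g = coeffs[:3]
    return g / np.linalg.norm(g)   # unit direction of steepest objective increase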

In-Field Peduncle Detection of Sweet Peppers for Robotic Harvesting: a comparative study

Sep 29, 2017
Chris Lehnert, Chris McCool, Tristan Perez

Robotic harvesting of crops has the potential to disrupt current agricultural practices. A key element to enabling robotic harvesting is to safely remove the crop from the plant, which often involves locating and cutting the peduncle, the part of the crop that attaches it to the main stem of the plant. In this paper we present a comparative study of two methods for performing peduncle detection. The first method is based on classic colour and geometric features obtained from the scene with a support vector machine classifier, referred to as PFH-SVM. The second method is an efficient deep neural network approach, MiniInception, that is able to be deployed on a robotic platform. In both cases we employ a secondary filtering process that enforces reasonable assumptions about the crop structure, such as the proximity of the peduncle to the crop. Our tests are conducted on Harvey, a sweet pepper harvesting robot, in a greenhouse using two varieties of sweet pepper, Ducati and Mercuno. We demonstrate that the MiniInception method considerably outperforms the PFH-SVM approach, achieving an F1 score of 0.564 compared to 0.302.
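
A hedged sketch of the secondary filtering step mentioned above: keep only peduncle points that lie within a short distance of, and above, a detected sweet pepper in 3D. The distance threshold and the upward-axis convention are illustrative assumptions, not values from the paper.

import numpy as np

def filter_peduncle_points(peduncle_pts, pepper_centroid, max_dist=0.10):
    """peduncle_pts: (N, 3) candidate peduncle points; pepper_centroid: (3,), both in metres."""
    dist = np.linalg.norm(peduncle_pts - pepper_centroid, axis=1)
    above = peduncle_pts[:, 2] > pepper_centroid[2]   # assume z is up: the peduncle sits above the fruit
    return peduncle_pts[(dist < max_dist) & above]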

* Submitted to International Conference on Robotics and Automation 2018, under review 