Many real-world sequential manipulation tasks involve a combination of discrete symbolic search and continuous motion planning, a problem known as combined task and motion planning (TAMP). However, prevailing methods often struggle with the computational burden and intricate combinatorial challenges stemming from the large number of possible action skeletons. To address this, we propose Dynamic Logic-Geometric Program (D-LGP), a novel approach integrating Dynamic Tree Search and global optimization for efficient hybrid planning. Through empirical evaluation on three benchmarks, we demonstrate the efficacy of our approach, showcasing superior performance in comparison to state-of-the-art techniques. We validate our approach through simulation and demonstrate its capability for online replanning under uncertainty and external disturbances in the real world.
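The interplay the abstract describes, discrete search over action skeletons filtered by continuous geometric feasibility, can be illustrated with a minimal sketch. This is not the D-LGP algorithm; the toy domain, the `tamp_search` function, and the `feasible` callback standing in for a motion-level check are all illustrative assumptions.

```python
# Toy sketch of hybrid TAMP: best-first search over discrete action
# skeletons, where a candidate skeleton reaching the symbolic goal is
# accepted only if a continuous feasibility check (mocked here as a
# callback) succeeds. The domain and all names are hypothetical
# placeholders, not the D-LGP formulation.
import heapq

def tamp_search(start, goal, actions, feasible, max_depth=6):
    """Best-first search over action skeletons with a geometric filter.

    actions maps name -> (precondition state, successor state, cost).
    Simplification: symbolic states are visited at most once, so an
    infeasible skeleton blocks alternative routes through its states.
    """
    frontier = [(0, start, [])]  # (cost, symbolic state, skeleton so far)
    seen = set()
    while frontier:
        cost, state, skeleton = heapq.heappop(frontier)
        if state == goal and feasible(skeleton):
            return skeleton  # symbolically goal-reaching and feasible
        if state in seen or len(skeleton) >= max_depth:
            continue
        seen.add(state)
        for name, (pre, post, c) in actions.items():
            if state == pre:
                heapq.heappush(frontier, (cost + c, post, skeleton + [name]))
    return None
```

For example, with `actions = {"pick": ("free", "holding", 1), "place": ("holding", "done", 1)}` and a feasibility check that always succeeds, `tamp_search("free", "done", actions, lambda sk: True)` returns the skeleton `["pick", "place"]`.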
In this work, we propose to learn robot geometry as distance fields (RDF), which extend the signed distance field (SDF) of the robot with joint configurations. Unlike existing methods that learn an implicit representation encoding joint space and Euclidean space together, the proposed RDF approach leverages the kinematic chain of the robot, which reduces the dimensionality and complexity of the problem, resulting in more accurate and reliable SDFs. We present a simple and flexible approach that exploits basis functions to represent the SDF of each individual robot link, providing a smoother representation and improved efficiency compared to neural networks. RDF is naturally continuous and differentiable, enabling its direct integration as cost functions in robot tasks. It also allows us to obtain high-precision robot surface points at any desired spatial resolution, with the capability of whole-body manipulation. We verify the effectiveness of our RDF representation by conducting various experiments both in simulation and on the 7-axis Franka Emika robot. We compare our approach against baseline methods and demonstrate its efficiency in dual-arm settings for tasks involving collision avoidance and whole-body manipulation. Project page: https://sites.google.com/view/lrdf/home
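The two ingredients of the abstract, per-link SDFs built from basis functions and composition along the kinematic chain, can be sketched as follows. This is a minimal 2D illustration under assumed choices (Gaussian radial basis functions, a least-squares fit, mocked link poses), not the authors' implementation; all function names are hypothetical.

```python
# Sketch of the basis-function SDF idea: each link's SDF is a weighted
# sum of Gaussian radial basis functions fitted to sampled distances,
# and the whole-robot SDF transforms the query point into each link's
# frame (poses from forward kinematics, mocked here) and takes the
# minimum over links. 2D toy setup; names are illustrative assumptions.
import numpy as np

def rbf(r, eps=3.0):
    """Gaussian radial basis function exp(-(eps * r)^2)."""
    return np.exp(-(eps * r) ** 2)

def fit_link_sdf(centers, samples, dists, eps=3.0):
    """Solve for weights w with sum_j w_j * rbf(|x_i - c_j|) ~= d_i."""
    # Interpolation matrix Phi[i, j] = rbf(|samples_i - centers_j|)
    Phi = rbf(np.linalg.norm(samples[:, None] - centers[None], axis=-1), eps)
    w, *_ = np.linalg.lstsq(Phi, dists, rcond=None)
    return w

def eval_link_sdf(x, centers, w, eps=3.0):
    """Evaluate the fitted SDF at one query point x in the link frame."""
    return rbf(np.linalg.norm(x - centers, axis=-1), eps) @ w

def robot_sdf(x_world, link_poses, link_models):
    """Whole-robot SDF: min over per-link SDFs in each link's frame."""
    vals = []
    for (R, t), (centers, w) in zip(link_poses, link_models):
        x_local = R.T @ (x_world - t)  # transform query into link frame
        vals.append(eval_link_sdf(x_local, centers, w))
    return min(vals)
```

Because the fitted field is a smooth function of the query point, its gradient is available analytically, which is what makes such a representation directly usable inside optimization-based collision-avoidance costs.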