The remarkable athletic intelligence that humans display in complex dynamic movements such as dancing and gymnastics suggests that the balance mechanism of biological beings is decoupled from specific movement patterns. This decoupling allows both learned and unlearned movements to be executed under certain constraints while balance is maintained through subtle whole-body coordination. To replicate this balance ability and body agility, this paper proposes a versatile controller for bipedal robots. Built on a model-based inverse-kinematics (IK) solver and trained with reinforcement learning, the controller achieves ankle and body trajectory tracking across a wide range of gaits with a single small-scale neural network. We treat a single step as the smallest control unit and design a universally applicable control-input form that accommodates any single-step variation. Highly flexible gait control is then achieved by composing these minimal control units with a high-level policy through our extensible control interface. To strengthen the trajectory-tracking capability of the controller, we employ a three-stage training curriculum. After training, the robot can move freely between target footholds at varying distances and heights, and it can maintain static balance without repeated stepping to adjust its posture. Finally, we evaluate the tracking accuracy of our controller on various bipedal tasks and verify the effectiveness of the control framework in simulation.
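To make the single-step control idea concrete, here is a minimal Python sketch of how a high-level policy might compose per-step commands through such an interface. The `StepCommand` fields, step length, and timing are our own illustrative assumptions, not details taken from the paper; the learned low-level controller would be responsible for tracking each command.

```python
import numpy as np
from dataclasses import dataclass

# Hypothetical per-step command; field names are illustrative, not from the paper.
@dataclass
class StepCommand:
    foothold_xy: tuple  # target foothold displacement in the body frame [m]
    foothold_z: float   # target foothold height [m]
    duration: float     # allotted step time [s]
    swing_leg: int      # 0 = left, 1 = right

def high_level_policy(goal_xy, step_len=0.3):
    """Decompose a goal displacement into a sequence of single-step commands,
    each of which the learned low-level controller would track."""
    n = max(1, int(np.ceil(np.hypot(*goal_xy) / step_len)))
    dx, dy = goal_xy[0] / n, goal_xy[1] / n
    return [StepCommand((dx, dy), 0.0, 0.4, k % 2) for k in range(n)]

for cmd in high_level_policy((1.0, 0.2)):
    print(cmd)
```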
Combining the mobility of legged robots with the manipulation skills of arms has the potential to significantly expand the operational range and capabilities of robotic systems in mobile manipulation tasks. Existing approaches are confined to imprecise six-degrees-of-freedom (DoF) manipulation and a limited arm workspace. In this paper, we propose a novel framework, RoboDuet, which employs two collaborative policies to realize locomotion and manipulation simultaneously, achieving whole-body control through mutual interaction between the policies. Surprisingly, beyond large-range pose tracking, we find that the two-policy framework may enable cross-embodiment deployment, such as swapping in different quadrupedal robots or other arms. Our experiments demonstrate that policies trained with RoboDuet achieve stable gaits, agile 6D end-effector pose tracking, and zero-shot exchange of legged robots, and can be deployed in the real world to perform various mobile manipulation tasks. Our project page with demo videos is at https://locomanip-duet.github.io .
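The following sketch illustrates one plausible reading of the two-policy coupling: each policy is conditioned on the other's state, so locomotion and manipulation can compensate for one another. The observation/action dimensions are invented for illustration, and random linear maps stand in for RoboDuet's trained networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in policies (random linear maps); in RoboDuet these are trained networks,
# and all observation/action dimensions here are illustrative assumptions.
W_loco = 0.1 * rng.standard_normal((12, 18))  # -> 12 leg joint targets
W_arm  = 0.1 * rng.standard_normal((6, 12))   # -> 6 arm joint targets

def loco_policy(base_obs, arm_state, ee_target):
    """Locomotion policy conditioned on the arm state and the 6D end-effector goal."""
    return W_loco @ np.concatenate([base_obs, arm_state, ee_target])

def arm_policy(arm_obs, base_state):
    """Arm policy conditioned on the base state so it can compensate body motion."""
    return W_arm @ np.concatenate([arm_obs, base_state])

base_obs  = rng.standard_normal(6)   # e.g., base velocity + gravity direction
arm_state = rng.standard_normal(6)   # e.g., arm joint angles
ee_target = rng.standard_normal(6)   # 6-DoF end-effector pose command
leg_action = loco_policy(base_obs, arm_state, ee_target)
arm_action = arm_policy(arm_state, base_obs)  # each policy observes the other's state
print(leg_action.shape, arm_action.shape)
```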
Accurate detection and grasping of transparent objects are challenging but important for robots. Here, a visual-tactile fusion framework for transparent object grasping under complex backgrounds and varying illumination is proposed, comprising grasping position detection, tactile calibration, and visual-tactile fusion based classification. First, a multi-scene synthetic grasping dataset generation method with Gaussian-distribution-based data annotation is proposed. In addition, a novel grasping network named TGCNN is proposed for grasping position detection, showing good results in both synthetic and real scenes. For tactile calibration, inspired by human grasping, a fully convolutional network based tactile feature extraction method and a central-location-based adaptive grasping strategy are designed, improving the success rate by 36.7% compared with direct grasping. Furthermore, a visual-tactile fusion method is proposed for transparent object classification, which improves classification accuracy by 34%. The proposed framework synergizes the advantages of vision and touch and greatly improves the grasping efficiency for transparent objects.
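One common realization of Gaussian-distribution-based grasp annotation, sketched below under our own assumptions (the paper's exact scheme may differ), is to label each image with a soft quality map that peaks at the annotated grasp point rather than a single hard pixel, which tolerates small annotation errors and eases training.

```python
import numpy as np

def gaussian_grasp_label(h, w, center, sigma=8.0):
    """Soft grasp-quality map: a 2D Gaussian peaked at the annotated grasp point,
    so pixels near the labeled position also receive partial credit."""
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = center
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))

label = gaussian_grasp_label(96, 96, center=(48, 40))
print(label[48, 40], label[48, 48])  # 1.0 at the grasp center, decaying with distance
```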
Humans balance remarkably well during walking, even when perturbed, yet robust walking remains difficult to achieve for bipedal robots. Here we describe the simplest balance controller that leads to robust walking for a linear inverted pendulum (LIP) model. The main idea is to use a linear function of the body velocity to determine the next foot placement, which we call linear foot placement control (LFPC). Using the Poincaré map, a balance criterion is derived, which shows that LFPC is stable when the velocity-feedback coefficient lies within a certain range. That range widens considerably as stepping becomes faster, indicating that faster stepping makes balancing easier. We show that various gaits can be generated by adjusting the controller parameters in LFPC. In particular, a dead-beat controller is discovered that leads to steady-state walking in just one step. The effectiveness of LFPC is verified through MATLAB simulation as well as V-REP simulation for both 2D and 3D walking. The main feature of LFPC is its simplicity and inherent robustness, which may help us understand the essence of maintaining balance in dynamic walking.
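A minimal planar sketch of this mechanism, with our own illustrative parameters rather than the paper's: the LIP flows analytically during a step, the next foot lands a distance proportional to the body velocity ahead of the CoM, and the resulting step-to-step map is linear. Choosing the dead-beat gain nulls the map in one step; here the steady state is rest, and tracking a nonzero desired speed would add a constant offset to the placement law.

```python
import numpy as np

g, h, T = 9.81, 0.9, 0.4            # gravity, CoM height, step duration (illustrative)
w = np.sqrt(g / h)                  # LIP natural frequency

def lip_step(x, v, t):
    """Analytic LIP flow over one step; x is the CoM offset from the stance foot."""
    c, s = np.cosh(w * t), np.sinh(w * t)
    return x * c + (v / w) * s, x * w * s + v * c

# LFPC: place the next foot k*v ahead of the CoM at touchdown, so the step-to-step
# (Poincare) map becomes v_{n+1} = (cosh(wT) - k*w*sinh(wT)) * v_n.  It is stable for
# k in ((cosh(wT)-1)/(w*sinh(wT)), (cosh(wT)+1)/(w*sinh(wT))) -- a range of width
# 2/(w*sinh(wT)) that widens as T shrinks, i.e. faster stepping is easier to balance.
k = np.cosh(w * T) / (w * np.sinh(w * T))   # dead-beat choice: map gain is zero
x, v = 0.0, 0.5                             # start with a 0.5 m/s push
for n in range(4):
    x, v = lip_step(x, v, T)                # swing phase
    x = -k * v                              # touchdown: CoM ends k*v behind new foot
    print(f"after step {n}: v = {v:+.5f} m/s")
```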
Continuum robots are typically slender and flexible, with in theory infinitely many degrees of freedom, which poses a challenge for their control and application. Shape sensing of continuum robots is vital to achieving accurate control. This letter proposes a novel, general real-time shape-sensing framework for continuum robots based on the piecewise polynomial curvature (PPC) kinematics model. We illustrate the coupling between orientation and position at any given location along the robot, and show that this coupling can be bridged by the PPC kinematics. Therefore, we propose estimating the shape of continuum robots through orientation estimation, using off-the-shelf orientation sensors, e.g., IMUs, mounted at certain locations. The approach provides a valuable shape-sensing framework for continuum robots that combines universality, accuracy, and convenience. The accuracy of the general approach is verified in experiments on several types of physical prototypes.
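The orientation-position coupling can be seen in a planar toy example, sketched below under our own assumptions (sensor placement and angles are invented): position is the arc-length integral of the tangent direction, so sparse orientation measurements plus a smooth curvature model suffice to reconstruct the shape.

```python
import numpy as np

# Planar illustration: IMUs at a few arc lengths report the local tangent angle
# theta(s); position then follows from orientation alone via
# p(s) = integral of (cos theta, sin theta) d sigma.
L = 0.3                                    # backbone length [m] (illustrative)
s_meas  = np.array([0.0, 0.1, 0.2, 0.3])   # IMU mounting arc lengths
th_meas = np.array([0.0, 0.4, 0.9, 1.2])   # measured tangent angles [rad]

# PPC assumption: curvature is piecewise polynomial, so theta(s) is a polynomial
# of one degree higher; fit it to the sparse orientation samples.
coef  = np.polyfit(s_meas, th_meas, 3)
s     = np.linspace(0.0, L, 200)
theta = np.polyval(coef, s)

# Reconstruct the backbone shape by integrating the tangent numerically.
x = np.concatenate(([0.0], np.cumsum(np.cos(theta[:-1]) * np.diff(s))))
y = np.concatenate(([0.0], np.cumsum(np.sin(theta[:-1]) * np.diff(s))))
print(f"estimated tip position: ({x[-1]:.4f}, {y[-1]:.4f}) m")
```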
Continuum robots have attracted increasing attention for their flexibility. Kinematics models of continuum robots are the basis for further perception, planning, and control. The design and study of continuum robots are usually based on the assumption of piecewise constant curvature (PCC). However, owing to friction and other effects, the actual motion of a continuum robot exhibits only approximately piecewise constant curvature (APCC). To address this, we present a kinematically equivalent model for continuum robots, APCC 2L-5R. By replacing the original curved segments with classical rigid linkages in the kinematics, the APCC 2L-5R model effectively reduces complexity and improves numerical stability. Furthermore, based on this model, configuration self-estimation of the continuum robot is realized with monocular cameras installed at the end of each approximately constant-curvature segment. The potential of APCC 2L-5R in the perception, planning, and control of continuum robots remains to be explored.
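For reference, the underlying per-segment PCC kinematics that any equivalent rigid linkage must reproduce is simple in the planar case; the sketch below shows it with illustrative curvature values (the 2L-5R joint layout itself is not reproduced here).

```python
import numpy as np

def pcc_segment(kappa, length):
    """Planar forward kinematics of one constant-curvature segment as a
    homogeneous transform -- the arc an equivalent rigid linkage must reproduce."""
    t = kappa * length                     # total bending angle of the segment
    if abs(t) < 1e-9:                      # straight-segment limit
        dx, dy = length, 0.0
    else:
        dx, dy = np.sin(t) / kappa, (1.0 - np.cos(t)) / kappa
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, dx],
                     [s,  c, dy],
                     [0., 0., 1.]])

# Under APCC each segment's curvature is only approximately constant, so the
# per-segment cameras can re-estimate kappa online and the chain is recomposed:
tip = pcc_segment(5.0, 0.1) @ pcc_segment(-3.0, 0.1)
print("tip pose:\n", tip)
```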