Linqi Ye

From Knowing to Doing: Learning Diverse Motor Skills through Instruction Learning

Sep 17, 2023
Linqi Ye, Jiayi Li, Yi Cheng, Xianhao Wang, Bin Liang, Yan Peng

Recent years have witnessed many successful trials in robot learning. For contact-rich robotic tasks, however, learning coordinated motor skills by reinforcement learning alone remains challenging. Imitation learning addresses this by using a mimic reward that encourages the robot to track a given reference trajectory, but it is inefficient and may constrain the learned motion. In this paper, we propose instruction learning, which is inspired by the human learning process and is highly efficient, flexible, and versatile for robot motion learning. Instead of embedding the reference signal in the reward, instruction learning applies it directly as a feedforward action, which is combined with a feedback action learned by reinforcement learning to control the robot. In addition, we propose an action bounding technique and remove the mimic reward, which proves crucial for efficient and flexible learning. A comparison with imitation learning shows that instruction learning greatly speeds up training and reliably learns the desired motion. Its effectiveness is validated on a wide range of motion learning examples for a biped robot and a quadruped robot, where skills are typically learned within several million steps. We also conduct sim-to-real transfer and online learning experiments on a real quadruped robot. Instruction learning shows great merits and potential, making it a promising alternative to imitation learning.
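
To make the core idea concrete, here is a minimal Python sketch of the action composition described above: the reference signal enters as a feedforward term, and the RL policy contributes only a bounded feedback correction. The function names, the bound value, and the clipping form are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def compose_action(reference_action, policy_action, fb_bound=0.3):
    """Instruction-learning-style action composition (sketch):
    the reference trajectory acts as feedforward, the RL policy
    supplies a feedback correction, and clipping the feedback
    (the 'action bounding' idea) keeps the learned correction
    from straying far from the instruction."""
    bounded_feedback = np.clip(policy_action, -fb_bound, fb_bound)
    return reference_action + bounded_feedback

# Hypothetical single control step: a_ref from a reference motion,
# a_fb from a policy trained with task rewards only (no mimic reward).
a_ref = np.array([0.10, -0.25, 0.40])   # feedforward joint targets (rad)
a_fb = np.array([0.50, 0.05, -0.80])    # raw policy output
action = compose_action(a_ref, a_fb)    # -> [0.40, -0.20, 0.10]
```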

Visuotactile Sensor Enabled Pneumatic Device Towards Compliant Oropharyngeal Swab Sampling

May 11, 2023
Shoujie Li, Mingshan He, Wenbo Ding, Linqi Ye, Xueqian Wang, Junbo Tan, Jinqiu Yuan, Xiao-Ping Zhang

Manual oropharyngeal (OP) swab sampling is an intensive and risky task. In this article, a low-cost, highly compliant OP swab sampling device is designed by combining a visuotactile sensor with a pneumatic-actuator-based gripper. A concave visuotactile sensor called CoTac is first proposed to address the high cost and poor reliability of traditional multi-axis force sensors. In addition, imitating the doctor's fingers, a soft pneumatic actuator with a rigid skeleton structure is designed and demonstrated to be reliable and safe via finite element modeling and experiments. Furthermore, we propose a sampling method that adopts a compliant control algorithm based on an adaptive virtual force to enhance the safety and compliance of the swab sampling process. The effectiveness of the device has been verified through sampling experiments as well as in vivo tests, indicating great application potential. The device costs around 30 US dollars, and the functional part weighs less than 0.1 kg, allowing it to be rapidly deployed on various robotic arms. Videos, hardware, and source code are available at: https://sites.google.com/view/swab-sampling/.
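
The abstract only names the compliant control algorithm; the sketch below shows one generic way such an adaptive-virtual-force scheme could be realized as an admittance law, where the contact force reported by the visuotactile sensor drives a virtual mass-spring-damper. All parameters, the adaptation rule, and the variable names are assumptions for illustration, not the paper's method.

```python
import numpy as np

def admittance_step(x, v, f_meas, f_virtual, m=0.5, b=8.0, k=50.0, dt=0.01):
    """One step of a generic admittance controller: the swab tip behaves
    like a virtual mass-spring-damper driven by the difference between
    the measured contact force and a virtual force reference."""
    a = (f_meas - f_virtual - b * v - k * x) / m
    v = v + a * dt
    x = x + v * dt
    return x, v

# Hypothetical loop: soften the virtual force toward the sensed force
# so the swab settles into a gentle, steady contact.
x, v, f_virtual = 0.0, 0.0, 1.0
for _ in range(200):
    f_meas = 0.8                                  # stand-in CoTac reading (N)
    f_virtual = 0.95 * f_virtual + 0.05 * f_meas  # simple adaptive update
    x, v = admittance_step(x, v, f_meas, f_virtual)
```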

Comments: 8 pages

Visual-tactile Fusion for Transparent Object Grasping in Complex Backgrounds

Nov 30, 2022
Shoujie Li, Haixin Yu, Wenbo Ding, Houde Liu, Linqi Ye, Chongkun Xia, Xueqian Wang, Xiao-Ping Zhang

The accurate detection and grasping of transparent objects are challenging but important for robots. Here, a visual-tactile fusion framework for transparent object grasping under complex backgrounds and variant light conditions is proposed, comprising grasping position detection, tactile calibration, and visual-tactile fusion-based classification. First, a multi-scene synthetic grasping dataset generation method with Gaussian-distribution-based data annotation is proposed. In addition, a novel grasping network named TGCNN is proposed for grasping position detection, showing good results in both synthetic and real scenes. For tactile calibration, inspired by human grasping, a fully convolutional network based tactile feature extraction method and a central-location-based adaptive grasping strategy are designed, improving the success rate by 36.7% compared with direct grasping. Furthermore, a visual-tactile fusion method is proposed for transparent object classification, which improves the classification accuracy by 34%. The proposed framework synergizes the advantages of vision and touch and greatly improves the grasping efficiency for transparent objects.
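
As a rough illustration of the fusion-based classification stage, here is a minimal late-fusion sketch in Python (PyTorch): visual and tactile feature vectors are embedded separately, concatenated, and classified. The abstract does not specify the actual architecture, so the layer sizes and feature dimensions below are assumptions.

```python
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    """Late fusion of visual and tactile features (illustrative sketch)."""
    def __init__(self, vis_dim=512, tac_dim=128, n_classes=10):
        super().__init__()
        self.vis_head = nn.Sequential(nn.Linear(vis_dim, 128), nn.ReLU())
        self.tac_head = nn.Sequential(nn.Linear(tac_dim, 128), nn.ReLU())
        self.classifier = nn.Linear(256, n_classes)

    def forward(self, vis_feat, tac_feat):
        # Embed each modality, concatenate, then classify.
        fused = torch.cat([self.vis_head(vis_feat), self.tac_head(tac_feat)], dim=-1)
        return self.classifier(fused)

# Hypothetical usage on a batch of 4 samples:
model = FusionClassifier()
logits = model(torch.randn(4, 512), torch.randn(4, 128))
```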

The Simplest Balance Controller for Dynamic Walking

Nov 11, 2022
Linqi Ye, Xueqian Wang, Houde Liu, Bin Liang

Humans balance remarkably well during walking, even when perturbed, yet robust walking remains difficult to achieve for bipedal robots. Here we describe the simplest balance controller that leads to robust walking for a linear inverted pendulum (LIP) model. The main idea is to use a linear function of the body velocity to determine the next foot placement, which we call linear foot placement control (LFPC). Using the Poincaré map, a balance criterion is derived, which shows that LFPC is stable when the velocity-feedback coefficient lies in a certain range. That range grows considerably when stepping faster, indicating "faster stepping, easier to balance". We show that various gaits can be generated by adjusting the controller parameters in LFPC. In particular, a dead-beat controller is discovered that reaches steady-state walking in just one step. The effectiveness of LFPC is verified through MATLAB simulation as well as V-REP simulation for both 2D and 3D walking. The main features of LFPC are its simplicity and inherent robustness, which may help us understand the essence of maintaining balance in dynamic walking.
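
To illustrate the step-to-step picture, here is a minimal Python sketch of a point-mass LIP whose next foot placement is a linear function of the body velocity. The step duration, the feedback coefficient, and the exact form of the placement law are illustrative assumptions; the paper derives the actual stability range via the Poincaré map.

```python
import numpy as np

g, h = 9.81, 1.0        # gravity (m/s^2), constant CoM height (m)
w = np.sqrt(g / h)      # LIP natural frequency
T = 0.4                 # fixed step duration (s), assumed
k = 0.3                 # velocity-feedback coefficient, assumed stable

def lip_step(x, v, p):
    """Analytic LIP evolution over one step with the stance foot at p,
    following x'' = w^2 * (x - p)."""
    c, s = np.cosh(w * T), np.sinh(w * T)
    return p + (x - p) * c + (v / w) * s, (x - p) * w * s + v * c

# LFPC-style law (sketch): place the next foot ahead of the CoM in
# proportion to the body velocity. This version damps the velocity
# toward zero, i.e. balancing in place; adding a constant offset to p
# yields steady walking gaits.
x, v = 0.0, 0.5
for step in range(8):
    p = x + k * v
    x, v = lip_step(x, v, p)
    print(f"step {step}: v = {v:.3f} m/s")
```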
