
Xiaoman Wang


Object grasping planning for the situation when soft and rigid objects are mixed together

Sep 20, 2019
Xiaoman Wang, Xin Jiang, Jie Zhao, Shengfan Wang, Yunhui Liu


In this paper, we propose an object detection method based on rotated bounding boxes to address the grasping challenge in scenes where rigid and soft objects are mixed together. Compared with traditional detection methods, this method outputs the orientation of rotated objects and can thus guarantee that each rotated bounding box contains a single instance. This is especially useful for piles of objects with different orientations. In our method, when uncategorized objects with specific geometric shapes (rectangles or cylinders) are detected, the system concludes that some rigid objects are covered by towels. If no covered objects are detected, grasp planning is based on the 3D point cloud obtained by mapping the 2D detection result to its corresponding 3D points. Based on the information provided by the 3D bounding box enclosing the object, a grasping strategy for multiple cluttered rigid objects and a collision avoidance strategy are proposed. The proposed method is verified by experiments in which rigid objects and towels are mixed together.
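As a rough illustration of the 2D-to-3D mapping step described above (not the authors' implementation), the sketch below selects the depth pixels inside a detected rotated bounding box, back-projects them through assumed pinhole intrinsics fx, fy, cx, cy, and returns the 3D bounding box that grasp planning could use; all names and parameters are illustrative.

```python
import numpy as np

def rotated_rect_mask(shape, center, size, angle_deg):
    """Boolean mask of pixels inside a rotated rectangle given as
    center (cx, cy), size (w, h), and rotation angle in degrees."""
    h_img, w_img = shape
    ys, xs = np.mgrid[0:h_img, 0:w_img]
    c, s = np.cos(np.radians(angle_deg)), np.sin(np.radians(angle_deg))
    dx, dy = xs - center[0], ys - center[1]
    # Express each pixel in the rectangle's rotated frame.
    u = c * dx + s * dy
    v = -s * dx + c * dy
    return (np.abs(u) <= size[0] / 2) & (np.abs(v) <= size[1] / 2)

def object_box_from_depth(depth, mask, fx, fy, cx, cy):
    """Back-project the masked depth pixels to 3D camera coordinates and
    return the min/max corners of the resulting 3D bounding box."""
    ys, xs = np.nonzero(mask & (depth > 0))
    z = depth[ys, xs]
    x = (xs - cx) * z / fx
    y = (ys - cy) * z / fy
    pts = np.stack([x, y, z], axis=1)
    return pts.min(axis=0), pts.max(axis=0)
```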

* Submitted to ICRA 2020

Assembly of randomly placed parts realized by using only one robot arm with a general parallel-jaw gripper

Sep 19, 2019
Jie Zhao, Xin Jiang, Xiaoman Wang, Shengfan Wang, Yunhui Liu


In industrial assembly lines, parts feeding machines are widely employed as the prologue of the whole procedure. They sort parts randomly placed in bins into a specified pose, so that the subsequent assembly processes performed by a robot arm can always start from the same condition. It is therefore desirable to integrate the function of the parts feeding machine and the robotic assembly into one robot arm. This scheme provides great flexibility and also helps reduce cost. The difficulty of this scheme lies in the fact that, in the part feeding phase, the pose of the part after grasping may not be suitable for the subsequent assembly; sometimes a stable grasp cannot even be guaranteed. In this paper, we propose a method to integrate parts feeding and assembly within one robot arm. The proposal utilizes a specially designed gripper tip mounted on the jaws of a two-fingered gripper. With the modified gripper, in-hand manipulation of the grasped object is realized, which ensures control of the orientation and offset position of the grasped object. The proposal is verified by a simulated assembly in which a robot arm completes the process, including picking parts from a bin and a subsequent peg-in-hole assembly.
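The gripper-tip design itself is mechanical, but the control idea above (bringing the grasped part to a specified orientation and offset before insertion) can be sketched abstractly. The snippet below is a hypothetical illustration only: InHandPose, the step limits, and the incremental planner are assumptions, not the authors' method.

```python
from dataclasses import dataclass

@dataclass
class InHandPose:
    yaw: float      # orientation of the part between the jaws, rad
    offset: float   # position of the part along the jaw, m

def clamp(x, limit):
    return max(-limit, min(limit, x))

def plan_in_hand_adjustment(current: InHandPose, target: InHandPose,
                            max_yaw_step=0.2, max_offset_step=0.005):
    """Break the required in-hand reorientation and offset correction into
    small incremental steps that an in-hand manipulation could execute
    before the peg-in-hole insertion."""
    steps = []
    yaw, offset = current.yaw, current.offset
    while abs(target.yaw - yaw) > 1e-3 or abs(target.offset - offset) > 1e-4:
        d_yaw = clamp(target.yaw - yaw, max_yaw_step)
        d_off = clamp(target.offset - offset, max_offset_step)
        yaw += d_yaw
        offset += d_off
        steps.append((d_yaw, d_off))
    return steps

# Example: reorient by ~0.5 rad and shift the part 12 mm before insertion.
print(len(plan_in_hand_adjustment(InHandPose(0.0, 0.0), InHandPose(0.5, 0.012))))
```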

* Submitted to ICRA 2020

Vision Based Picking System for Automatic Express Package Dispatching

Apr 09, 2019
Shengfan Wang, Xin Jiang, Jie Zhao, Xiaoman Wang, Weiguo Zhou, Yunhui Liu


This paper presents a vision-based robotic system to handle the picking problem involved in automatic express package dispatching. By utilizing two RealSense RGB-D cameras and one UR10 industrial robot, the package dispatching task, which is usually done by humans, can be completed automatically. To determine grasp points for overlapped deformable objects, we improve the sampling algorithm proposed by the group at Berkeley to generate grasp candidates directly from depth images. For package recognition, the deep network framework YOLO is integrated. We also design a multi-modal robot hand composed of a two-fingered gripper and a vacuum suction cup to deal with different kinds of packages. All the technologies have been integrated into a work cell that simulates the practical conditions of an express package dispatching scenario. The proposed system is verified by experiments conducted on two typical express items.
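The improved Berkeley-style sampling is not spelled out in the abstract, so the following is only a generic sketch of antipodal grasp-candidate sampling directly from a depth image; the edge threshold and the gripper_width_px parameter are assumptions, not the algorithm actually used in the paper.

```python
import numpy as np

def sample_grasp_candidates(depth, num_samples=100, gripper_width_px=40, rng=None):
    """Sample antipodal grasp candidates from a depth image: pick strong
    depth-edge pixels, step across the object along the depth gradient,
    and keep pairs whose gradients roughly oppose each other."""
    rng = rng or np.random.default_rng()
    gy, gx = np.gradient(depth)
    mag = np.hypot(gx, gy)
    edge_ys, edge_xs = np.nonzero(mag > np.percentile(mag, 90))
    if len(edge_xs) == 0:
        return []
    candidates = []
    for _ in range(num_samples):
        i = rng.integers(len(edge_xs))
        x1, y1 = edge_xs[i], edge_ys[i]
        # Normalized gradient direction at the first contact point.
        nx, ny = gx[y1, x1] / mag[y1, x1], gy[y1, x1] / mag[y1, x1]
        x2 = int(x1 + nx * gripper_width_px)
        y2 = int(y1 + ny * gripper_width_px)
        if not (0 <= x2 < depth.shape[1] and 0 <= y2 < depth.shape[0]):
            continue
        if mag[y2, x2] > 0:
            # Antipodal check: the two surface gradients should oppose.
            dot = nx * gx[y2, x2] + ny * gy[y2, x2]
            if dot / mag[y2, x2] < -0.5:
                candidates.append(((x1, y1), (x2, y2)))
    return candidates
```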

* The 2019 IEEE International Conference on Real-time Computing and Robotics 

Efficient Fully Convolution Neural Network for Generating Pixel Wise Robotic Grasps With High Resolution Images

Feb 24, 2019
Shengfan Wang, Xin Jiang, Jie Zhao, Xiaoman Wang, Weiguo Zhou, Yunhui Liu


This paper presents an efficient neural network model to generate robotic grasps from high-resolution images. The proposed model uses a fully convolutional neural network to generate a robotic grasp for each pixel of 400 $\times$ 400 high-resolution RGB-D images. It first down-samples the images to extract features and then up-samples those features to the original input size, combining local and global features from different feature maps. Compared to other regression or classification methods for detecting robotic grasps, our method is closer to segmentation methods, which solve the problem in a pixel-wise manner. We use the Cornell Grasp Dataset to train and evaluate the model, achieving high accuracy of about 94.42% image-wise and 91.02% object-wise, with a fast prediction time of about 8 ms. We also demonstrate that, without training on a multi-object dataset, our model can directly output grasp candidates for different objects thanks to the pixel-wise implementation.
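A minimal PyTorch sketch of this down-sample/up-sample structure with a skip connection is shown below; the layer widths and the single grasp-quality output channel are placeholders for illustration, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class PixelWiseGraspNet(nn.Module):
    """Minimal encoder-decoder FCN: down-samples an RGB-D image to extract
    features, up-samples back to the input resolution, and fuses an encoder
    feature map (skip connection) so local and global context are combined.
    Outputs one grasp-quality score per pixel."""
    def __init__(self, in_channels=4):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU())
        self.dec2 = nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1)  # 32 skip + 32 decoded

    def forward(self, x):                      # x: (B, 4, 400, 400) RGB-D
        f1 = self.enc1(x)                      # (B, 32, 200, 200)
        f2 = self.enc2(f1)                     # (B, 64, 100, 100)
        d1 = self.dec1(f2)                     # (B, 32, 200, 200)
        d1 = torch.cat([d1, f1], dim=1)        # skip connection fuses local + global features
        return torch.sigmoid(self.dec2(d1))    # (B, 1, 400, 400) per-pixel grasp quality

# Example: one forward pass on a dummy 400x400 RGB-D image.
out = PixelWiseGraspNet()(torch.zeros(1, 4, 400, 400))
print(out.shape)  # torch.Size([1, 1, 400, 400])
```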

* Submitted to The 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019) 