Dandan Zhang

Engineering Mathematics, University of Bristol, affiliated with the Bristol Robotics Lab, United Kingdom

TacFR-Gripper: A Reconfigurable Fin Ray-Based Compliant Robotic Gripper with Tactile Skin for In-Hand Manipulation

Nov 17, 2023
Qingzheng Cong, Wen Fan, Dandan Zhang

This paper introduces the TacFR-Gripper, a reconfigurable Fin Ray-based soft and compliant robotic gripper equipped with tactile skin, which can be used for dexterous in-hand manipulation tasks. The gripper can adaptively grasp objects of diverse shapes and stiffness levels. An array of Force Sensitive Resistor (FSR) sensors is embedded within the robotic finger to serve as the tactile skin, enabling the robot to perceive contact information during manipulation. We provide a theoretical analysis of the gripper design, including kinematic analysis, workspace analysis, and finite element analysis to identify the relationship between the gripper's load and its deformation. Moreover, we implemented a Graph Neural Network (GNN)-based tactile perception approach to enable reliable grasping without accidental slip or excessive force. Three physical experiments were conducted to quantify the performance of the TacFR-Gripper. These experiments aimed to i) assess the grasp success rate across various everyday objects under different configurations, ii) verify the effectiveness of the tactile skin with the GNN algorithm in grasping, and iii) evaluate the gripper's in-hand manipulation capabilities for object pose control. The experimental results indicate that the TacFR-Gripper can grasp a wide range of complex-shaped objects with a high success rate and perform dexterous in-hand manipulation. Additionally, the integration of the tactile skin with the GNN algorithm enhances grasp stability by incorporating tactile feedback during manipulation. For more details of this project, please view our website: https://sites.google.com/view/tacfr-gripper/homepage.
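The GNN-based tactile perception can be pictured with a short sketch. The following is a minimal illustration, not the authors' implementation: it assumes a 4x4 FSR taxel layout with 4-connected adjacency and three hypothetical grasp-state classes, and applies two plain graph-convolution layers in PyTorch.

```python
# Minimal sketch (not the authors' implementation): a graph convolution over
# an assumed 4x4 FSR taxel array, classifying grasp state from tactile readings.
# Taxel layout, adjacency, and the three classes are assumptions.
import torch
import torch.nn as nn

def grid_adjacency(rows: int, cols: int) -> torch.Tensor:
    """Normalized adjacency (with self-loops) for a 4-connected taxel grid."""
    n = rows * cols
    a = torch.eye(n)
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            for dr, dc in ((0, 1), (1, 0)):
                rr, cc = r + dr, c + dc
                if rr < rows and cc < cols:
                    j = rr * cols + cc
                    a[i, j] = a[j, i] = 1.0
    d_inv_sqrt = torch.diag(a.sum(dim=1).pow(-0.5))
    return d_inv_sqrt @ a @ d_inv_sqrt

class TactileGNN(nn.Module):
    """Two-layer graph convolution + mean pooling over taxel nodes."""
    def __init__(self, adj: torch.Tensor, in_dim=1, hidden=32, classes=3):
        super().__init__()
        self.register_buffer("adj", adj)
        self.w1 = nn.Linear(in_dim, hidden)
        self.w2 = nn.Linear(hidden, hidden)
        self.head = nn.Linear(hidden, classes)  # assumed classes: stable / slip / over-force

    def forward(self, x):                    # x: (batch, n_taxels, in_dim)
        h = torch.relu(self.adj @ self.w1(x))
        h = torch.relu(self.adj @ self.w2(h))
        return self.head(h.mean(dim=1))      # one grasp-state prediction per frame

adj = grid_adjacency(4, 4)
model = TactileGNN(adj)
logits = model(torch.rand(8, 16, 1))         # batch of 8 simulated FSR frames
```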

Attention for Robot Touch: Tactile Saliency Prediction for Robust Sim-to-Real Tactile Control

Aug 02, 2023
Yijiong Lin, Mauro Comi, Alex Church, Dandan Zhang, Nathan F. Lepora

High-resolution tactile sensing can provide accurate information about local contact in contact-rich robotic tasks. However, the deployment of such tasks in unstructured environments remains under-investigated. To improve the robustness of tactile robot control in unstructured environments, we propose and study a new concept: tactile saliency for robot touch, inspired by the human touch attention mechanism from neuroscience and the visual saliency prediction problem from computer vision. In analogy to visual saliency, this concept involves identifying key information in tactile images captured by a tactile sensor. While visual saliency datasets are commonly annotated by humans, manually labelling tactile images is challenging due to their counterintuitive patterns. To address this challenge, we propose a novel approach comprising three interrelated networks: 1) a Contact Depth Network (ConDepNet), which generates a contact depth map to localize deformation in a real tactile image that contains target and noise features; 2) a Tactile Saliency Network (TacSalNet), which predicts a tactile saliency map to describe the target areas for an input contact depth map; and 3) a Tactile Noise Generator (TacNGen), which generates noise features to train the TacSalNet. Experimental results in contact pose estimation and edge-following in the presence of distractors showcase the accurate prediction of target features from real tactile images. Overall, our tactile saliency prediction approach enables robust sim-to-real tactile control in environments with unknown distractors. Project page: https://sites.google.com/view/tactile-saliency/.

* Accepted by IROS 2023 
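As a rough picture of how the three networks could fit together, the sketch below uses tiny stand-in convolutional models and an ad-hoc noise generator; the architectures, loss, and data shapes are assumptions rather than the paper's settings. It shows one TacSalNet training step in which generated noise is added to a clean contact depth map and the network learns to recover the target-only map, followed by the deployment-time chain ConDepNet -> TacSalNet.

```python
# Illustrative sketch of the three-network wiring (assumed architectures, not the paper's).
import torch
import torch.nn as nn

def tiny_conv_net(in_ch: int, out_ch: int) -> nn.Module:
    """Tiny fully-convolutional stand-in for the paper's image-to-image networks."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, out_ch, 1), nn.Sigmoid(),
    )

con_dep_net = tiny_conv_net(1, 1)   # tactile image -> contact depth map
tac_sal_net = tiny_conv_net(1, 1)   # contact depth map -> saliency map

def tac_n_gen(shape):
    """Placeholder noise generator: sparse random blobs standing in for distractors."""
    return (torch.rand(shape) > 0.98).float()

# One TacSalNet training step: recover the clean target map from a noisy depth map.
opt = torch.optim.Adam(tac_sal_net.parameters(), lr=1e-3)
target_depth = torch.rand(4, 1, 64, 64)                       # assumed clean target features
noisy_depth = (target_depth + tac_n_gen(target_depth.shape)).clamp(0, 1)
loss = nn.functional.binary_cross_entropy(tac_sal_net(noisy_depth), target_depth)
opt.zero_grad()
loss.backward()
opt.step()

# Deployment-time chain: real tactile image -> depth map -> saliency map.
depth_map = con_dep_net(torch.rand(1, 1, 64, 64))
saliency = tac_sal_net(depth_map)
```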

Hierarchical Semi-Supervised Learning Framework for Surgical Gesture Segmentation and Recognition Based on Multi-Modality Data

Jul 31, 2023
Zhili Yuan, Jialin Lin, Dandan Zhang

Segmenting and recognizing surgical operation trajectories into distinct, meaningful gestures is a critical preliminary step in surgical workflow analysis for robot-assisted surgery. This step is necessary for facilitating learning from demonstrations for autonomous robotic surgery, evaluating surgical skills, and other downstream applications. In this work, we develop a hierarchical semi-supervised learning framework for surgical gesture segmentation using multi-modality data (i.e., kinematics and vision data). More specifically, surgical tasks are initially segmented based on distance-characteristic and variance-characteristic profiles constructed from the kinematics data. Subsequently, a Transformer-based network with a pre-trained ResNet-18 backbone is used to extract visual features from the surgical operation videos. By combining the candidate segmentation points obtained from both modalities, we determine the final segmentation points. Furthermore, gesture recognition can be implemented based on supervised learning. The proposed approach has been evaluated using data from the publicly available JIGSAWS database, including Suturing, Needle Passing, and Knot Tying tasks. The results reveal an average F1 score of 0.623 for segmentation and an accuracy of 0.856 for recognition.

* 8 pages, 7 figures. Accepted by the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023). For more details about this paper, please visit our website: https://sites.google.com/view/surseg/home 
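A hedged sketch of the kinematics-side segmentation idea follows: candidate boundaries are taken where a distance (speed) profile and a sliding-window variance profile both dip, and are kept only if a vision-derived candidate lies nearby. The window size, dip criterion, and merging tolerance are illustrative assumptions, not the paper's parameters.

```python
# Minimal sketch: kinematics-based candidate gesture boundaries, merged with
# vision-based candidates. Profiles and thresholds are assumptions.
import numpy as np
from scipy.signal import find_peaks

def kinematic_candidates(tooltip_xyz: np.ndarray, window: int = 30) -> np.ndarray:
    """tooltip_xyz: (T, 3) tool-tip trajectory sampled at a fixed rate."""
    speed = np.linalg.norm(np.diff(tooltip_xyz, axis=0), axis=1)   # distance-based profile
    var = np.array([speed[max(0, t - window):t + 1].var()
                    for t in range(len(speed))])                    # variance-based profile
    # Gesture transitions are assumed to coincide with low-speed, low-variance dips.
    dips, _ = find_peaks(-(speed + var), distance=window)
    return dips

def merge_candidates(kin: np.ndarray, vis: np.ndarray, tol: int = 15) -> list[int]:
    """Keep kinematic candidates confirmed by a vision candidate within tol frames."""
    return [int(k) for k in kin if np.any(np.abs(vis - k) <= tol)]

traj = np.cumsum(np.random.randn(1000, 3) * 0.01, axis=0)   # stand-in kinematics stream
vision_candidates = np.array([180, 420, 750])                # stand-in visual boundaries
boundaries = merge_candidates(kinematic_candidates(traj), vision_candidates)
```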

Sim-to-Real Model-Based and Model-Free Deep Reinforcement Learning for Tactile Pushing

Jul 26, 2023
Max Yang, Yijiong Lin, Alex Church, John Lloyd, Dandan Zhang, David A. W. Barton, Nathan F. Lepora

Object pushing presents a key non-prehensile manipulation problem that is illustrative of more complex robotic manipulation tasks. While deep reinforcement learning (RL) methods have demonstrated impressive learning capabilities using visual input, a lack of tactile sensing limits their capability for fine and reliable control during manipulation. Here we propose a deep RL approach to object pushing using tactile sensing without visual input, namely tactile pushing. We present a goal-conditioned formulation that allows both model-free and model-based RL to obtain accurate policies for pushing an object to a goal. To achieve real-world performance, we adopt a sim-to-real approach. Our results demonstrate that it is possible to train on a single object and a limited sample of goals to produce precise and reliable policies that can generalize to a variety of unseen objects and pushing scenarios without domain randomization. We experiment with the trained agents in harsh pushing conditions and show that, with significantly more training samples, a model-free policy can outperform a model-based planner, generating shorter and more reliable pushing trajectories despite large disturbances. The simplicity of our training environment and the effective real-world performance highlight the value of rich tactile information for fine manipulation. Code and videos are available at https://sites.google.com/view/tactile-rl-pushing/.

* Accepted by IEEE Robotics and Automation Letters (RA-L) 
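A minimal sketch of what a goal-conditioned tactile-pushing observation and a dense reward could look like is given below; the feature dimensions, frame convention, and reward weights are assumptions for illustration, not the paper's definitions.

```python
# Illustrative goal-conditioned formulation (assumed shapes and reward weights).
import numpy as np

def goal_conditioned_obs(tactile_feat: np.ndarray,
                         object_pose: np.ndarray,
                         goal_pose: np.ndarray) -> np.ndarray:
    """Concatenate tactile features with the goal expressed in the object frame (x, y, yaw)."""
    dx, dy = goal_pose[:2] - object_pose[:2]
    yaw = object_pose[2]
    # Rotate the goal offset into the object frame so the policy is pose-invariant.
    rel = np.array([np.cos(yaw) * dx + np.sin(yaw) * dy,
                    -np.sin(yaw) * dx + np.cos(yaw) * dy,
                    goal_pose[2] - object_pose[2]])
    return np.concatenate([tactile_feat, rel])

def dense_reward(object_pose, goal_pose, contact_depth,
                 w_dist=1.0, w_contact=0.1) -> float:
    """Penalize distance to goal, lightly reward maintained contact (assumed form)."""
    dist = np.linalg.norm(goal_pose[:2] - object_pose[:2])
    return -w_dist * dist + w_contact * float(contact_depth > 0.0)

obs = goal_conditioned_obs(np.zeros(6), np.array([0.1, 0.2, 0.3]),
                           np.array([0.4, 0.1, 0.0]))
```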

Bi-Touch: Bimanual Tactile Manipulation with Sim-to-Real Deep Reinforcement Learning

Jul 12, 2023
Yijiong Lin, Alex Church, Max Yang, Haoran Li, John Lloyd, Dandan Zhang, Nathan F. Lepora

Bimanual manipulation with tactile feedback will be key to human-level robot dexterity. However, this topic is less explored than single-arm settings, partly due to the limited availability of suitable hardware and the complexity of designing effective controllers for tasks with relatively large state-action spaces. Here we introduce a dual-arm tactile robotic system (Bi-Touch) based on the Tactile Gym 2.0 setup that integrates two affordable industrial-level robot arms with low-cost high-resolution tactile sensors (TacTips). We present a suite of bimanual manipulation tasks tailored towards tactile feedback: bi-pushing, bi-reorienting and bi-gathering. To learn effective policies, we introduce appropriate reward functions for these tasks and propose a novel goal-update mechanism with deep reinforcement learning. We also apply these policies to real-world settings with a tactile sim-to-real approach. Our analysis highlights and addresses some challenges met during the sim-to-real application, e.g., the learned policy tended to squeeze an object in the bi-reorienting task due to the sim-to-real gap. Finally, we demonstrate the generalizability and robustness of this system by experimenting with different unseen objects under applied perturbations in the real world. Code and videos are available at https://sites.google.com/view/bi-touch/.

* Accepted by IEEE Robotics and Automation Letters (RA-L) 
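The abstract does not spell out the goal-update mechanism, so the sketch below shows one plausible waypoint-style interpretation: both arms' goals advance together once each arm is within a tolerance of its current goal. Treat the class, thresholds, and waypoints as illustrative assumptions rather than the paper's rule.

```python
# Illustrative waypoint-style goal updater for a bimanual task (assumed interpretation).
import numpy as np

class GoalUpdater:
    def __init__(self, waypoints_left, waypoints_right, tol=0.005):
        self.wl, self.wr = waypoints_left, waypoints_right
        self.idx, self.tol = 0, tol

    def current_goals(self):
        return self.wl[self.idx], self.wr[self.idx]

    def step(self, pose_left, pose_right):
        gl, gr = self.current_goals()
        reached = (np.linalg.norm(pose_left - gl) < self.tol and
                   np.linalg.norm(pose_right - gr) < self.tol)
        if reached and self.idx < len(self.wl) - 1:
            self.idx += 1          # advance both arms' goals together
        return self.current_goals()

updater = GoalUpdater(np.linspace([0.0, 0.0, 0.0], [0.0, 0.1, 0.0], 5),
                      np.linspace([0.0, 0.0, 0.0], [0.0, -0.1, 0.0], 5))
goal_left, goal_right = updater.step(np.zeros(3), np.zeros(3))
```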

TacMMs: Tactile Mobile Manipulators for Warehouse Automation

Jun 29, 2023
Zhuochao He, Xuyang Zhang, Simon Jones, Sabine Hauert, Dandan Zhang, Nathan F. Lepora

Multi-robot platforms are playing an increasingly important role in warehouse automation for efficient goods transport. This paper proposes a novel customization of a multi-robot system, called Tactile Mobile Manipulators (TacMMs). Each TacMM integrates a soft optical tactile sensor and a mobile robot with a load-lifting mechanism, enabling cooperative transportation in tasks requiring coordinated physical interaction. More specifically, we mount the TacTip (a biomimetic optical tactile sensor) on the Distributed Organisation and Transport System (DOTS) mobile robot. The tactile information then helps the mobile robots adjust the relative robot-object pose, thereby increasing the efficiency of load-lifting tasks. This study compares the performance of two TacMMs using tactile perception against a traditional vision-based pose adjustment for load-lifting. The results show that the average success rate of the TacMMs (66%) is improved over a purely vision-based method (34%), with a larger improvement when the mass of the load was non-uniformly distributed. Although this initial study considers two TacMMs, we expect the benefits of tactile perception to extend to multiple mobile robots. Website: https://sites.google.com/view/tacmms

* 8 pages, accepted in IEEE Robotics and Automation Letters, 19 June 2023 
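A hedged sketch of how tactile feedback could drive the pre-lift pose adjustment: a proportional correction on tactile offset and yaw estimates, plus a lift-readiness check. The gains, tolerances, and signal names are assumptions, not the authors' controller.

```python
# Illustrative tactile pose-adjustment loop for a single TacMM (assumed control law).
import numpy as np

def alignment_correction(contact_offset_y: float, contact_yaw: float,
                         k_y: float = 0.8, k_yaw: float = 0.5):
    """Map the tactile pose error to base velocity commands (v_y, w_z)."""
    return -k_y * contact_offset_y, -k_yaw * contact_yaw

def ready_to_lift(contact_offset_y: float, contact_yaw: float, contact_depth: float,
                  tol_y: float = 0.002, tol_yaw: float = 0.03, min_depth: float = 0.5) -> bool:
    """Lift only once the contact is centred, aligned, and sufficiently deep."""
    return (abs(contact_offset_y) < tol_y and abs(contact_yaw) < tol_yaw
            and contact_depth > min_depth)

v_y, w_z = alignment_correction(contact_offset_y=0.01, contact_yaw=-0.05)
lift_now = ready_to_lift(contact_offset_y=0.001, contact_yaw=0.01, contact_depth=0.7)
```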

TIMS: A Tactile Internet-Based Micromanipulation System with Haptic Guidance for Surgical Training

Mar 07, 2023
Jialin Lin, Xiaoqing Guo, Wen Fan, Wei Li, Yuanyi Wang, Jiaming Liang, Weiru Liu, Lei Wei, Dandan Zhang

Microsurgery involves the dexterous manipulation of delicate tissue or fragile structures, such as small blood vessels and nerves, under a microscope. To address the limitation of imprecise manipulation by human hands, robotic systems have been developed to assist surgeons in performing complex microsurgical tasks with greater precision and safety. However, the steep learning curve of robot-assisted microsurgery (RAMS) and the shortage of well-trained surgeons pose significant challenges to the widespread adoption of RAMS. Therefore, the development of a versatile training system for RAMS is necessary, which can bring tangible benefits to both surgeons and patients. In this paper, we present a Tactile Internet-Based Micromanipulation System (TIMS) built on a ROS-Django web architecture for microsurgical training. This system can provide tactile feedback to operators via a wearable tactile display (WTD), while real-time data is transmitted through the internet via the ROS-Django framework. In addition, TIMS integrates haptic guidance to 'guide' the trainees to follow a desired trajectory provided by expert surgeons. Learning from demonstration based on Gaussian Process Regression (GPR) was used to generate the desired trajectory. User studies were also conducted to verify the effectiveness of our proposed TIMS, comparing users' performance with and without tactile feedback and/or haptic guidance.

* 8 pages, 7 figures. For more details of this project, please view our website: https://sites.google.com/view/viewtims/home 
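The GPR-based learning from demonstration can be sketched with standard scikit-learn tooling, as below; the kernel, demonstration data, and query grid are placeholders rather than the paper's configuration.

```python
# Minimal sketch: learn a reference trajectory from demonstrations with GPR
# (standard scikit-learn; hyperparameters and data layout are assumptions).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Stack three stand-in expert demonstrations as (normalized time -> xy position) pairs.
t = np.tile(np.linspace(0, 1, 100), 3)[:, None]
xy = np.column_stack([np.sin(2 * np.pi * t[:, 0]),
                      np.cos(2 * np.pi * t[:, 0])]) + 0.01 * np.random.randn(300, 2)

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=0.1) + WhiteKernel(1e-4),
                               normalize_y=True)
gpr.fit(t, xy)

# Query the desired trajectory (and its uncertainty) to drive haptic guidance.
t_query = np.linspace(0, 1, 200)[:, None]
desired_xy, std = gpr.predict(t_query, return_std=True)
```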

IoHRT: An Open-Source Unified Framework Towards the Internet of Humans and Robotic Things with Cloud Computing for Home-Care Applications

Mar 07, 2023
Dandan Zhang, Jin Zheng, Jialin Lin

The accelerating aging of the population has led to an increasing demand for domestic robotics to ease caregivers' burden. The integration of Internet of Things (IoT), robotics, and human-robot interaction (HRI) technologies is essential for home-care applications. Although the concept of the Internet of Robotic Things (IoRT) has been utilized in various fields, most existing IoRT frameworks lack ergonomic HRI interfaces and are limited to specific tasks. This paper presents an open-source unified Internet of Humans and Robotic Things (IoHRT) framework with cloud computing, which combines personalized HRI interfaces with intelligent robotics and IoT techniques. The proposed open-source framework demonstrates high security, compatibility, and modularity, and allows unlimited user access. Two case studies were conducted to evaluate the framework's functionality and its effectiveness in home-care scenarios. User feedback collected via questionnaires indicates the IoHRT framework's high potential for home-care applications.
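The abstract gives no implementation detail of the IoHRT interfaces, so the sketch below is purely illustrative: a minimal cloud-side HTTP endpoint (written here with Flask) that accepts a command from an HRI front end and lets a robot or IoT node poll for it. Endpoint names and payload fields are hypothetical.

```python
# Purely illustrative cloud-side relay between an HRI front end and robot/IoT nodes.
import queue
from flask import Flask, jsonify, request

app = Flask(__name__)
command_queue: "queue.Queue[dict]" = queue.Queue()

@app.route("/command", methods=["POST"])
def post_command():
    """HRI interface posts {'device': ..., 'action': ...}; reject malformed input."""
    cmd = request.get_json(silent=True)
    if not cmd or "device" not in cmd or "action" not in cmd:
        return jsonify(error="device and action are required"), 400
    command_queue.put(cmd)
    return jsonify(status="queued"), 202

@app.route("/command/next", methods=["GET"])
def next_command():
    """A robot or IoT node polls for its next command; 204 if none is pending."""
    try:
        return jsonify(command_queue.get_nowait()), 200
    except queue.Empty:
        return "", 204
```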

Tac-VGNN: A Voronoi Graph Neural Network for Pose-Based Tactile Servoing

Mar 05, 2023
Wen Fan, Max Yang, Yifan Xing, Nathan F. Lepora, Dandan Zhang

Tactile pose estimation and tactile servoing are fundamental capabilities of robot touch. Reliable and precise pose estimation can be provided by applying deep learning models to high-resolution optical tactile sensors. Given the recent successes of Graph Neural Networks (GNNs) and the effectiveness of Voronoi features, we developed a Tactile Voronoi Graph Neural Network (Tac-VGNN) to achieve reliable pose-based tactile servoing relying on a biomimetic optical tactile sensor (TacTip). The GNN is well suited to modeling the distribution relationship between shear motions of the tactile markers, while the Voronoi diagram supplements this with area-based tactile features related to contact depth. The experimental results showed that the Tac-VGNN model significantly enhances data interpretability during graph generation and improves model training efficiency compared with CNN-based methods. It also improved pose estimation accuracy along the vertical depth by 28.57% over a vanilla GNN without Voronoi features and achieved better performance on real surface-following tasks, with smoother robot control trajectories. For more project details, please view our website: https://sites.google.com/view/tac-vgnn/home

* 7 pages, 10 figures, accepted by 2023 IEEE International Conference on Robotics and Automation (ICRA) 
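A hedged sketch of the Voronoi side of the feature construction: compute each tactile marker's Voronoi cell area (with SciPy) as an area-based node feature to pair with the marker positions or shear displacements. The marker grid, jitter, and feature pairing are illustrative assumptions, not the authors' exact pipeline.

```python
# Illustrative Voronoi-area node features for tactile markers (assumed construction).
import numpy as np
from scipy.spatial import Voronoi

def voronoi_cell_areas(marker_xy: np.ndarray) -> np.ndarray:
    """Area of each marker's Voronoi cell; NaN for unbounded border cells."""
    vor = Voronoi(marker_xy)
    areas = np.full(len(marker_xy), np.nan)
    for i, region_idx in enumerate(vor.point_region):
        region = vor.regions[region_idx]
        if -1 in region or len(region) == 0:
            continue                        # border cell extends to infinity
        poly = vor.vertices[region]
        x, y = poly[:, 0], poly[:, 1]
        areas[i] = 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
    return areas

# Example: a jittered marker grid; pressing deforms the grid and changes cell areas.
rng = np.random.default_rng(0)
grid = np.stack(np.meshgrid(np.arange(10), np.arange(10)), -1).reshape(-1, 2).astype(float)
grid += 0.05 * rng.standard_normal(grid.shape)
node_features = np.column_stack([grid, voronoi_cell_areas(grid)])  # (x, y, cell area)
```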