Wenzhen Yuan

Kitchen Artist: Precise Control of Liquid Dispensing for Gourmet Plating

Nov 20, 2023
Hung-Jui Huang, Jingyi Xiang, Wenzhen Yuan

Manipulating liquids is required for many tasks, especially in cooking, and a common approach is to extrude viscous liquid from a squeeze bottle. In this work, our goal is to create a sauce-plating robot, which requires precise control of the thickness of the squeezed liquid on a surface. Different liquids demand different manipulation policies. We command the robot to tilt the container and monitor the liquid's response with a force sensor to identify its properties. Based on these properties, we predict the liquid's behavior under fixed squeezing motions in a data-driven way and calculate the drawing speed required for the desired stroke size. This open-loop system works effectively even without sensor feedback. Our experiments demonstrate accurate stroke-size control across different liquids and fill levels, showing that an understanding of liquid properties can facilitate effective liquid manipulation. More importantly, our dish-garnishing robot has a wide range of applications and holds significant commercialization potential.
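
The stroke-size control idea reduces to a simple volume-conservation relation once the liquid's flow rate under a fixed squeeze is known. A minimal sketch of that relation is below; the flow-rate value, stroke height, and function name are hypothetical illustrations, not the paper's model.

```python
# Sketch: choosing drawing speed from a predicted flow rate (assumed model,
# not the paper's implementation). Mass conservation for a steady stroke:
#   flow_rate [mm^3/s] = stroke_width [mm] * stroke_height [mm] * speed [mm/s]

def drawing_speed(flow_rate_mm3_s: float, target_width_mm: float,
                  stroke_height_mm: float) -> float:
    """Return the end-effector speed that yields the target stroke width."""
    return flow_rate_mm3_s / (target_width_mm * stroke_height_mm)

# Hypothetical numbers: a viscous sauce extruded at 500 mm^3/s,
# aiming for a 6 mm wide, 2 mm tall stroke.
speed = drawing_speed(flow_rate_mm3_s=500.0, target_width_mm=6.0,
                      stroke_height_mm=2.0)
print(f"drawing speed ~= {speed:.1f} mm/s")
```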

* Submitted to ICRA 2024 

Robotic Defect Inspection with Visual and Tactile Perception for Large-scale Components

Sep 08, 2023
Arpit Agarwal, Abhiroop Ajith, Chengtao Wen, Veniamin Stryzheus, Brian Miller, Matthew Chen, Micah K. Johnson, Jose Luis Susa Rincon, Justinian Rosca, Wenzhen Yuan

In manufacturing processes, surface inspection is a key requirement for quality assessment and damage localization. As a result, automated surface anomaly detection has become a promising area of research in various industrial inspection systems. A particular challenge in industries with large-scale components, such as aircraft and heavy machinery, is inspecting large parts with very small defect dimensions; moreover, these parts can have curved shapes. To address this challenge, we present a 2-stage multi-modal inspection pipeline with visual and tactile sensing. Our approach combines the best of both modalities by identifying and localizing defects from a global view (vision) and then tactilely scanning the localized areas to identify the remaining defects. To benchmark our approach, we propose a novel real-world dataset with multiple metallic defect types per image, collected in production environments on real aerospace manufacturing parts, along with online robot experiments in two environments. Our approach identifies 85% of defects in Stage I and 100% of defects after Stage II. The dataset is publicly available at https://zenodo.org/record/8327713
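
The two-stage structure can be pictured as a short pipeline in which a global visual pass proposes candidate regions and a tactile pass confirms them. The sketch below assumes hypothetical interfaces (the `Region` type, the dummy detection and confirmation rules) and is not the authors' released code.

```python
# Sketch of the two-stage idea (hypothetical interfaces, not the released code):
# a global visual pass proposes defect regions, then a tactile scan of those
# regions confirms defects the camera alone would miss.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Region:
    center_xy: Tuple[float, float]  # location on the part surface (mm)
    size_mm: float

def visual_stage(image) -> List[Region]:
    """Stage I placeholder: return candidate defect regions from a global view."""
    return [Region((120.0, 45.0), 3.0)]  # dummy detection

def tactile_stage(regions: List[Region]) -> List[Region]:
    """Stage II placeholder: rescan candidates with a tactile sensor."""
    return [r for r in regions if r.size_mm > 0.5]  # dummy confirmation rule

confirmed = tactile_stage(visual_stage(image=None))
print(f"{len(confirmed)} defect(s) confirmed after Stage II")
```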

* This is a pre-print of a publication at the International Conference on Intelligent Robots and Systems (IROS) 2023 

Customizing Textile and Tactile Skins for Interactive Industrial Robots

Aug 06, 2023
Bo Ying Su, Zhongqi Wei, James McCann, Wenzhen Yuan, Changliu Liu

Tactile skins made from textiles enhance robot-human interaction by localizing contact points and measuring contact forces. This paper presents a solution for rapidly fabricating, calibrating, and deploying these skins on industrial robot arms. Our novel automated skin-calibration procedure maps skin locations to the robot's geometry and calibrates the contact forces. Through experiments on a FANUC LR Mate 200id/7L industrial robot, we demonstrate that textile tactile skins can be used effectively for human-robot interaction in industrial environments and can provide unique opportunities in robot control and learning, making them a promising technology for enhancing robot perception and interaction.
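
One part of such a calibration is mapping raw taxel readings to physical contact force. The sketch below shows a generic least-squares fit against known reference loads; the readings, loads, and linear model are assumptions for illustration, not the paper's calibration procedure.

```python
# Sketch of one piece of skin calibration (an assumption, not the paper's
# procedure): fitting a per-taxel mapping from raw sensor readings to contact
# force with known reference loads, using ordinary least squares.

import numpy as np

raw = np.array([102.0, 180.0, 260.0, 335.0, 410.0])   # hypothetical taxel readings
force_n = np.array([0.0, 2.0, 4.0, 6.0, 8.0])          # reference forces (N)

# Fit force ~= a * raw + b
A = np.stack([raw, np.ones_like(raw)], axis=1)
(a, b), *_ = np.linalg.lstsq(A, force_n, rcond=None)

def reading_to_force(reading: float) -> float:
    return a * reading + b

print(f"estimated force at reading 300: {reading_to_force(300.0):.2f} N")
```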

Estimating Properties of Solid Particles Inside Container Using Touch Sensing

Jul 28, 2023
Xiaofeng Guo, Hung-Jui Huang, Wenzhen Yuan

Solid particles, such as rice and coffee beans, are commonly stored in containers and are ubiquitous in our daily lives. Understanding these particles' properties can inform later decisions and manipulation tasks such as pouring. Humans typically interact with a container to get a sense of the particles inside, but doing so remains a challenge for robots. This work uses tactile sensing to estimate multiple properties of solid particles enclosed in a container: content mass, content volume, particle size, and particle shape. We design a sequence of robot actions to interact with the container. Based on physical understanding, we extract static force/torque values from the F/T sensor, as well as vibration-related and topple-related features from the newly designed high-speed GelSight tactile sensor, to estimate these four particle properties. We test our method on 37 very different everyday particles, including powder, rice, beans, and tablets. Experiments show that our approach estimates content mass with an error of 1.8 g, content volume with an error of 6.1 ml, and particle size with an error of 1.1 mm, and achieves an accuracy of 75.6% for particle shape estimation. In addition, our method generalizes to unseen particles with unknown volumes. By estimating these particle properties, our method can help robots better perceive granular media and assist with manipulation tasks in daily life and industry.
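
Conceptually, the estimation step maps hand-crafted features from the F/T and tactile signals to the target properties with a learned regressor. The sketch below is a generic stand-in using synthetic features and a standard random-forest regressor; it is not the paper's feature set or model.

```python
# Sketch of the property-regression idea (hypothetical features and model,
# not the paper's exact pipeline): hand-crafted features from F/T and tactile
# signals are mapped to particle properties with a standard regressor.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical feature vectors per interaction: [static torque, vibration
# energy, topple count, ...] and the content mass (g) to regress.
X = rng.normal(size=(100, 6))
mass_g = 50.0 + 10.0 * X[:, 0] + rng.normal(scale=1.0, size=100)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, mass_g)
print(f"predicted mass: {model.predict(X[:1])[0]:.1f} g")
```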

* 8 pages, 14 figures 

Controllable Visual-Tactile Synthesis

May 04, 2023
Ruihan Gao, Wenzhen Yuan, Jun-Yan Zhu

Deep generative models have various content-creation applications, such as graphic design, e-commerce, and virtual try-on. However, current works mainly focus on synthesizing realistic visual outputs, often ignoring other sensory modalities such as touch, which limits physical interaction with users. In this work, we leverage deep generative models to create a multi-sensory experience in which users can both see and touch the synthesized object when sliding their fingers on a haptic surface. The main challenges lie in the significant scale discrepancy between vision and touch sensing and the lack of an explicit mapping from touch-sensing data to a haptic rendering device. To bridge this gap, we collect high-resolution tactile data with a GelSight sensor and create a new visuotactile clothing dataset. We then develop a conditional generative model that synthesizes both visual and tactile outputs from a single sketch. We evaluate our method in terms of image quality and tactile rendering accuracy. Finally, we introduce a pipeline to render high-quality visual and tactile outputs on an electroadhesion-based haptic device for an immersive experience, allowing for challenging materials and editable sketch inputs.
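
At a high level, the model consumes a sketch and emits two spatially aligned outputs, one visual and one tactile. The snippet below is an illustrative stand-in with a shared encoder and two output heads; the architecture and dimensions are assumptions, not the released model.

```python
# Minimal sketch (an illustrative stand-in, not the released model): a single
# conditional network that maps a sketch to two aligned outputs, an RGB image
# and a tactile map, sharing an encoder.

import torch
import torch.nn as nn

class SketchToVisuoTactile(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.visual_head = nn.Conv2d(64, 3, 3, padding=1)   # RGB output
        self.tactile_head = nn.Conv2d(64, 1, 3, padding=1)  # tactile map output

    def forward(self, sketch):
        feats = self.encoder(sketch)
        return torch.sigmoid(self.visual_head(feats)), self.tactile_head(feats)

model = SketchToVisuoTactile()
rgb, tactile = model(torch.rand(1, 1, 128, 128))
print(rgb.shape, tactile.shape)  # (1, 3, 128, 128), (1, 1, 128, 128)
```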

* Project website: https://visual-tactile-synthesis.github.io/ Code: https://github.com/RuihanGao/visual-tactile-synthesis 

Cable Routing and Assembly using Tactile-driven Motion Primitives

Mar 21, 2023
Achu Wilson, Helen Jiang, Wenzhao Lian, Wenzhen Yuan

Manipulating cables is challenging for robots because of the cables' infinite degrees of freedom and frequent occlusion by the gripper and the environment. These challenges are further complicated by the dexterous operations required for cable routing and assembly, such as weaving and inserting, which hamper common vision-only solutions. In this paper, we propose to integrate tactile-guided low-level motion control with high-level vision-based task parsing for a challenging task: cable routing and assembly on a reconfigurable task board. Specifically, we build a library of tactile-guided motion primitives using a fingertip GelSight sensor, where each primitive reliably accomplishes an operation such as cable following or weaving. The overall task is inferred via visual perception from a goal configuration image and then used to generate the primitive sequence. Experiments demonstrate the effectiveness of the individual tactile-guided primitives and the integrated end-to-end solution, which significantly outperforms a method without tactile sensing. Our reconfigurable task setup and proposed baselines provide a benchmark for future research in cable manipulation. More details and video are available at https://helennn.github.io/cable-manip/
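
The system's high-level structure is a plan of named primitives produced by visual parsing and executed in order, with each primitive internally driven by tactile feedback. The sketch below uses hypothetical primitive names and print statements in place of real robot control; it is not the paper's implementation.

```python
# Sketch of the high-level structure (hypothetical primitive names, not the
# paper's code): vision parses a goal image into a sequence of tactile-guided
# primitives, which are then executed one by one.

from typing import Callable, Dict, List

def follow_cable() -> bool:
    print("following cable with fingertip tactile feedback"); return True

def weave_through_clip() -> bool:
    print("weaving cable through clip"); return True

PRIMITIVES: Dict[str, Callable[[], bool]] = {
    "follow": follow_cable,
    "weave": weave_through_clip,
}

def execute_plan(plan: List[str]) -> bool:
    """Run each primitive in order; all() stops early if one reports failure."""
    return all(PRIMITIVES[name]() for name in plan)

# A plan that a visual task parser might output for a goal-board image.
execute_plan(["follow", "weave", "follow"])
```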

Toward Zero-Shot Sim-to-Real Transfer Learning for Pneumatic Soft Robot 3D Proprioceptive Sensing

Mar 08, 2023
Uksang Yoo, Hanwen Zhao, Alvaro Altamirano, Wenzhen Yuan, Chen Feng

Pneumatic soft robots present many advantages in manipulation tasks. Notably, their inherent compliance makes them safe and reliable in unstructured and fragile environments. However, full-body shape sensing for pneumatic soft robots is challenging because of their high degrees of freedom and complex deformation behaviors. Vision-based proprioceptive sensing methods relying on embedded cameras and deep learning provide a good solution by extracting full-body shape information from high-dimensional sensing data, but the current training-data collection process makes them difficult to apply in many settings. To address this challenge, we propose and demonstrate a robust sim-to-real pipeline that allows the soft robot's shape information to be collected as high-fidelity point clouds. The model trained on simulated data was evaluated with real internal camera images. The results show that the model achieved an average Chamfer distance of 8.85 mm and a tip position error of 10.12 mm, even under external perturbation, for a pneumatic soft robot 100.0 mm in length. We also demonstrate the sim-to-real pipeline's potential for exploring different configurations of visual patterns to improve vision-based reconstruction results. The code and dataset are available at https://github.com/DeepSoRo/DeepSoRoSim2Real.
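
The reported shape error uses the Chamfer distance between predicted and reference point clouds. The sketch below implements the standard symmetric form of that metric from its usual definition (conventions vary on summing versus averaging the two directions); the point clouds are synthetic, not the paper's data.

```python
# Sketch of the evaluation metric: symmetric Chamfer distance between two
# point clouds, written from its common definition rather than the paper's code.

import numpy as np

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Mean nearest-neighbor distance from a to b plus from b to a."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()

pred = np.random.rand(256, 3) * 100.0                       # predicted shape (mm), synthetic
gt = pred + np.random.normal(scale=2.0, size=pred.shape)    # perturbed reference cloud
print(f"Chamfer distance: {chamfer_distance(pred, gt):.2f} mm")
```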

* 2023 International Conference on Robotics and Automation (ICRA) 

RobotSweater: Scalable, Generalizable, and Customizable Machine-Knitted Tactile Skins for Robots

Mar 06, 2023
Zilin Si, Tianhong Catherine Yu, Katrene Morozov, James McCann, Wenzhen Yuan

Tactile sensing is essential for robots to perceive and react to their environment, but making large-scale, flexible tactile skins for robots remains a challenge. Industrial machine knitting provides a way to manufacture customizable fabrics, and together with functional yarns it can produce highly customizable circuits that can be made into tactile skins for robots. In this work, we present RobotSweater, a machine-knitted pressure-sensitive tactile skin that can be easily applied to robots. We design and fabricate a parameterized multi-layer tactile skin using off-the-shelf yarns and characterize our sensor on both a flat testbed and a curved surface to show its robust contact detection, multi-contact localization, and pressure-sensing capabilities. The sensor is fabricated using a well-established textile manufacturing process with a programmable industrial knitting machine, which makes it highly customizable and low-cost. The sensor's textile nature also allows it to easily fit the curved surfaces of different robots and gives it a friendly appearance. Using our tactile skins, we conduct closed-loop control with tactile feedback for two applications: (1) human lead-through control of a robot arm, and (2) human-robot interaction with a mobile robot.
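
With a row-column taxel matrix, multi-contact localization can be as simple as thresholding the pressure grid and reporting the activated cells. The sketch below assumes such a matrix readout for illustration; it is not the paper's driver or calibration code.

```python
# Sketch of multi-contact localization on a row-column pressure matrix
# (an assumed readout format): threshold the taxel grid and report the
# indices of activated cells.

import numpy as np

def localize_contacts(taxels: np.ndarray, threshold: float) -> list:
    """Return (row, col) indices of taxels whose reading exceeds threshold."""
    rows, cols = np.where(taxels > threshold)
    return list(zip(rows.tolist(), cols.tolist()))

grid = np.zeros((8, 8))
grid[2, 3] = 0.9   # hypothetical press at one location on the sleeve
grid[6, 1] = 0.7
print(localize_contacts(grid, threshold=0.5))  # [(2, 3), (6, 1)]
```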

Touch and Go: Learning from Human-Collected Vision and Touch

Nov 29, 2022
Fengyu Yang, Chenyang Ma, Jiacheng Zhang, Jing Zhu, Wenzhen Yuan, Andrew Owens

The ability to associate touch with sight is essential for tasks that require physically interacting with objects in the world. We propose a dataset with paired visual and tactile data called Touch and Go, in which human data collectors probe objects in natural environments using tactile sensors, while simultaneously recording egocentric video. In contrast to previous efforts, which have largely been confined to lab settings or simulated environments, our dataset spans a large number of "in the wild" objects and scenes. To demonstrate our dataset's effectiveness, we successfully apply it to a variety of tasks: 1) self-supervised visuo-tactile feature learning, 2) tactile-driven image stylization, i.e., making the visual appearance of an object more consistent with a given tactile signal, and 3) predicting future frames of a tactile signal from visuo-tactile inputs.
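
Self-supervised visuo-tactile feature learning on paired data is commonly framed as a contrastive objective that pulls matching vision/touch pairs together and pushes mismatched pairs apart. The sketch below is a generic InfoNCE-style loss on paired embeddings; the encoders, dimensions, and temperature are assumptions, not the paper's training setup.

```python
# Sketch of a generic contrastive objective over paired visual/tactile
# embeddings (an illustrative assumption, not the paper's exact loss).

import torch
import torch.nn.functional as F

def visuo_tactile_contrastive_loss(vis_emb, tac_emb, temperature=0.07):
    """Pull matching vision/touch pairs together, push mismatched pairs apart."""
    vis = F.normalize(vis_emb, dim=1)
    tac = F.normalize(tac_emb, dim=1)
    logits = vis @ tac.t() / temperature   # similarity of every vision/touch pair
    targets = torch.arange(vis.size(0))    # the i-th image matches the i-th touch
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

loss = visuo_tactile_contrastive_loss(torch.randn(16, 128), torch.randn(16, 128))
print(loss.item())
```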

* Accepted by NeurIPS 2022 Track of Datasets and Benchmarks 

PoseIt: A Visual-Tactile Dataset of Holding Poses for Grasp Stability Analysis

Sep 12, 2022
Shubham Kanitkar, Helen Jiang, Wenzhen Yuan

When humans grasp objects in the real world, we often move our arms to hold the object in a different pose where we can use it. In contrast, typical lab settings only study the stability of the grasp immediately after lifting, without any subsequent re-positioning of the arm. However, the grasp stability could vary widely based on the object's holding pose, as the gravitational torque and gripper contact forces could change completely. To facilitate the study of how holding poses affect grasp stability, we present PoseIt, a novel multi-modal dataset that contains visual and tactile data collected from a full cycle of grasping an object, re-positioning the arm to one of the sampled poses, and shaking the object. Using data from PoseIt, we can formulate and tackle the task of predicting whether a grasped object is stable in a particular held pose. We train an LSTM classifier that achieves 85% accuracy on the proposed task. Our experimental results show that multi-modal models trained on PoseIt achieve higher accuracy than using solely vision or tactile data and that our classifiers can also generalize to unseen objects and poses.
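
The stability classifier consumes a time series of fused visual-tactile features and outputs a stable/unstable prediction. The sketch below is an illustrative LSTM classifier in PyTorch; the feature dimension and architecture are assumptions, not the paper's exact model.

```python
# Sketch of an LSTM stability classifier over a multi-modal feature sequence
# (dimensions and architecture are illustrative assumptions).

import torch
import torch.nn as nn

class GraspStabilityLSTM(nn.Module):
    def __init__(self, feat_dim=64, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)   # stable vs. unstable

    def forward(self, seq):                # seq: (batch, time, feat_dim)
        _, (h_n, _) = self.lstm(seq)
        return self.head(h_n[-1])          # logits from the last hidden state

model = GraspStabilityLSTM()
logits = model(torch.randn(4, 30, 64))     # 4 sequences of 30 fused feature frames
print(logits.shape)                        # torch.Size([4, 2])
```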

* 8 pages, 7 figures, IEEE/RSJ International Conference on Intelligent Robots and Systems 2022 