Katherine Driggs-Campbell

Learning Task Skills and Goals Simultaneously from Physical Interaction

Sep 08, 2023
Haonan Chen, Ye-Ji Mun, Zhe Huang, Yilong Niu, Yiqing Xie, D. Livingston McPherson, Katherine Driggs-Campbell

In real-world human-robot systems, it is essential for a robot to comprehend human objectives and respond accordingly while performing an extended series of motor actions. Although human objective alignment has recently emerged as a promising paradigm in the realm of physical human-robot interaction, its application is typically confined to generating simple motions due to inherent theoretical limitations. In this work, our goal is to develop a general formulation to learn manipulation functional modules and long-term task goals simultaneously from physical human-robot interaction. We show the feasibility of our framework in enabling robots to align their behaviors with the long-term task objectives inferred from human interactions.

* 2 pages, 1 figure. Accepted by CASE 2023 Special Session on The Next-Generation Resilient Cyber-Physical Manufacturing Networks 

In Situ Soil Property Estimation for Autonomous Earthmoving Using Physics-Infused Neural Networks

Sep 05, 2023
W. Jacob Wagner, Ahmet Soylemezoglu, Dustin Nottage, Katherine Driggs-Campbell

A novel, learning-based method for in situ estimation of soil properties using a physics-infused neural network (PINN) is presented. The network is trained to produce estimates of soil cohesion, angle of internal friction, soil-tool friction, soil failure angle, and residual depth of cut, which are then passed through an earthmoving model based on the fundamental equation of earthmoving (FEE) to produce an estimated force. The network ingests a short history of kinematic observations along with past control commands and predicts interaction forces accurately, with an average error of less than 2 kN (13% of the measured force). To validate the approach, an earthmoving simulation of a bladed vehicle is developed using Vortex Studio, enabling comparison of the estimated parameters to pseudo-ground-truth values, a comparison that is challenging to obtain in real-world experiments. The proposed approach is shown to enable accurate estimation of interaction forces and to produce meaningful parameter estimates even when the model and the environmental physics deviate substantially.
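The physics-infused structure described above can be illustrated with a small sketch: a "network" (here a hypothetical linear map, not the paper's architecture) predicts the five soil parameters, which an analytic force model turns into a predicted interaction force. The force model below is a heavily simplified stand-in for the FEE, not the paper's actual equation; all parameter values are illustrative.

```python
import numpy as np

def fee_force(c, phi, delta, beta, d, gamma=18.0, w=3.0):
    """Very simplified stand-in for an FEE-style earthmoving force model:
    a cohesive term plus a weight/friction term, scaled by blade width `w`.
    Roles (all assumed): c cohesion, phi internal friction angle, delta
    soil-tool friction angle, beta failure angle (rad), d depth of cut (m),
    gamma soil unit weight."""
    cohesive = c * d / max(np.sin(beta), 1e-6)
    frictional = 0.5 * gamma * d**2 * (1.0 + np.tan(phi) * np.tan(delta))
    return w * (cohesive + frictional)

def pinn_forward(kinematics, weights):
    """The 'network' here is a hypothetical linear layer mapping a kinematic
    history to the five soil parameters; the physics model supplies the force.
    Gradients would flow through `fee_force` during training."""
    params = np.maximum(weights @ kinematics, 1e-3)  # keep parameters physical
    c, phi, delta, beta, d = params
    return fee_force(c, phi, delta, beta, d), params
```

Because the force prediction is produced by the analytic model rather than a black-box head, the intermediate outputs remain interpretable as soil properties.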

* 10 pages, 6 figures, to be published in proceedings of 16th European-African Regional Conference of the International Society for Terrain-Vehicle Systems (ISTVS) 

Marginalized Importance Sampling for Off-Environment Policy Evaluation

Sep 04, 2023
Pulkit Katdare, Nan Jiang, Katherine Driggs-Campbell

Reinforcement Learning (RL) methods are typically sample-inefficient, making it challenging to train and deploy RL policies on real-world robots. Even a robust policy trained in simulation requires real-world deployment to assess its performance. This paper proposes a new approach to evaluating the real-world performance of agent policies without deploying them in the real world. The proposed approach combines a simulator with real-world offline data to evaluate the performance of any policy using the framework of Marginalized Importance Sampling (MIS). Existing MIS methods face two challenges: (1) large density ratios that deviate from a reasonable range and (2) indirect supervision, where the ratio must be inferred indirectly, exacerbating estimation error. Our approach addresses these challenges by introducing the target policy's occupancy in the simulator as an intermediate variable and learning the density ratio as the product of two terms that can be learned separately. The first term is learned with direct supervision, and the second term has a small magnitude, making it easier to estimate. We analyze the sample complexity as well as the error propagation of our two-step procedure. Furthermore, we empirically evaluate our approach on Sim2Sim environments such as Cartpole, Reacher, and Half-Cheetah. Our results show that our method generalizes well across a variety of Sim2Sim gaps, target policies, and offline data collection policies. We also demonstrate the performance of our algorithm on a Sim2Real task: validating the performance of a 7-DoF robotic arm using offline data along with a Gazebo-based arm simulator.
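The factored-ratio idea above can be sketched in a few lines. Assuming the two density-ratio terms have already been learned (here they are synthetic stand-ins, not outputs of the paper's estimators), an MIS value estimate is a ratio-weighted average of the offline rewards:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical offline dataset: rewards observed under the behavior policy.
rewards = rng.normal(1.0, 0.5, size=n)

# Factored density-ratio estimates (synthetic stand-ins for learned models):
#   w_sim ~ d^pi_sim(s, a) / d^D(s, a)      -- learnable with direct supervision
#   w_gap ~ d^pi_real(s, a) / d^pi_sim(s, a) -- small-magnitude sim-to-real term
w_sim = rng.lognormal(0.0, 0.2, size=n)
w_gap = rng.lognormal(0.0, 0.05, size=n)

# Marginalized importance sampling estimate of the target policy's value:
w = w_sim * w_gap
w = w / w.mean()            # self-normalize the weights for variance reduction
v_hat = np.mean(w * rewards)
```

Note how the sim-to-real term stays close to 1, which is the point of the decomposition: the hard-to-supervise factor has a small magnitude, keeping the overall product in a reasonable range.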


PeRP: Personalized Residual Policies For Congestion Mitigation Through Co-operative Advisory Systems

Aug 15, 2023
Aamir Hasan, Neeloy Chakraborty, Haonan Chen, Jung-Hoon Cho, Cathy Wu, Katherine Driggs-Campbell

Intelligent driving systems can mitigate congestion through simple actions, improving many socioeconomic factors such as commute time and gas costs. However, these systems assume precise control over autonomous vehicle fleets and are hence limited in practice, as they fail to account for uncertainty in human behavior. Piecewise Constant (PC) policies address these issues by structurally modeling human-like driving to reduce traffic congestion in dense scenarios, providing action advice for human drivers to follow. However, PC policies assume that all drivers behave similarly. To address this, we develop a co-operative advisory system based on PC policies with a novel driver-trait-conditioned Personalized Residual Policy, PeRP. PeRP advises drivers to behave in ways that mitigate traffic congestion. We first infer the driver's intrinsic traits, i.e., how they follow instructions, in an unsupervised manner with a variational autoencoder. A policy conditioned on the inferred trait then adapts the action of the PC policy to provide the driver with a personalized recommendation. Our system is trained in simulation with novel driver modeling of instruction adherence. We show that our approach successfully mitigates congestion while adapting to different driver behaviors, with a 4 to 22% improvement in average speed over baselines.
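The residual structure is the key idea: the final advisory is the base PC action plus a trait-conditioned correction. A minimal sketch, with hypothetical speed values and a scalar stand-in for the VAE-inferred trait embedding (neither is from the paper):

```python
def pc_policy(obs):
    """Piecewise Constant base policy: a fixed speed advisory per traffic
    regime (hypothetical thresholds and speeds, unitless here)."""
    return 25.0 if obs["density"] < 0.5 else 15.0

def residual_policy(trait, obs):
    """Trait-conditioned residual, standing in for the learned PeRP network.
    `trait` plays the role of the driver embedding inferred by the VAE;
    here it is a scalar in [-1, 1]."""
    return 2.0 * trait  # e.g., adherent drivers tolerate a larger adjustment

def perp_advise(trait, obs):
    # Personalized advisory = base PC action + learned residual.
    return pc_policy(obs) + residual_policy(trait, obs)
```

A trait of 0 recovers the unpersonalized PC advisory, so the residual can only refine, not replace, the congestion-mitigating base behavior.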

* Accepted to ITSC 2023. Additional material and code are available at the project webpage: https://sites.google.com/illinois.edu/perp 

Towards Safe Multi-Level Human-Robot Interaction in Industrial Tasks

Aug 06, 2023
Zhe Huang, Ye-Ji Mun, Haonan Chen, Yiqing Xie, Yilong Niu, Xiang Li, Ninghan Zhong, Haoyuan You, D. Livingston McPherson, Katherine Driggs-Campbell

The multiple interaction modes in which collaborative robots perform industrial tasks with human co-workers require multiple levels of safety measures. We develop three independent modules to account for safety in different types of human-robot interaction: vision-based safety monitoring pauses the robot when a human is present in a shared space; contact-based safety monitoring pauses the robot when unexpected contact occurs between the human and the robot; and hierarchical intention tracking keeps the robot at a safe distance from the human when the two work independently, and switches the robot to a compliant mode when the human intends to guide it. We discuss prospects for future research on the development and integration of multi-level safety modules, focusing on how to provide safety guarantees for collaborative robot solutions through human behavior modeling.
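One natural way to integrate the three modules is a most-restrictive-first arbiter. The sketch below is a hypothetical illustration of that combination (the module interfaces and safe-distance threshold are assumptions, not the paper's implementation):

```python
from enum import Enum

class RobotMode(Enum):
    RUN = "run"
    PAUSE = "pause"
    COMPLIANT = "compliant"

def safety_arbiter(human_in_workspace, unexpected_contact,
                   human_distance, human_intends_to_guide,
                   safe_distance=1.0):
    """Combine the three safety modules, most restrictive first."""
    if unexpected_contact:          # contact-based safety monitoring
        return RobotMode.PAUSE
    if human_in_workspace:          # vision-based safety monitoring
        return RobotMode.PAUSE
    if human_intends_to_guide:      # intention tracking: guiding -> compliance
        return RobotMode.COMPLIANT
    if human_distance < safe_distance:  # keep distance during independent work
        return RobotMode.PAUSE
    return RobotMode.RUN
```

Ordering the checks this way guarantees that a contact or workspace violation always overrides the softer intention-tracking behaviors.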

* 2 pages, 1 figure. Accepted by CASE 2023 Special Session on The Next-Generation Resilient Cyber-Physical Manufacturing Networks 

DRAGON: A Dialogue-Based Robot for Assistive Navigation with Visual Language Grounding

Jul 13, 2023
Shuijing Liu, Aamir Hasan, Kaiwen Hong, Runxuan Wang, Peixin Chang, Zachary Mizrachi, Justin Lin, D. Livingston McPherson, Wendy A. Rogers, Katherine Driggs-Campbell

Persons with visual impairments (PwVI) have difficulty understanding and navigating the spaces around them. Current wayfinding technologies either focus solely on navigation or provide limited communication about the environment. Motivated by recent advances in visual-language grounding and semantic navigation, we propose DRAGON, a guiding robot powered by a dialogue system and the ability to associate the environment with natural language. By understanding commands from the user, DRAGON is able to guide the user to desired landmarks on the map, describe the environment, and answer questions from visual observations. Through effective use of dialogue, the robot can ground the user's free-form descriptions to landmarks in the environment and give the user semantic information through spoken language. We conduct a user study with blindfolded participants in an everyday indoor environment. Our results demonstrate that DRAGON communicates with the user smoothly, provides a good guiding experience, and connects users with their surrounding environment in an intuitive manner.
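Grounding a free-form request to a map landmark can be framed as similarity search between the utterance and landmark descriptions. The toy sketch below uses a bag-of-words embedding purely as a stand-in for the visual-language model; the landmark names and descriptions are invented for illustration:

```python
import numpy as np

# Hypothetical landmark descriptions attached to the robot's map.
landmarks = {
    "elevator": "the elevator next to the lobby",
    "restroom": "the restroom down the hallway",
    "exit": "the main exit near the front desk",
}

def embed(text, vocab):
    """Toy bag-of-words embedding (a stand-in for a learned text encoder)."""
    words = text.lower().split()
    return np.array([words.count(w) for w in vocab], dtype=float)

def ground(utterance):
    """Map the user's free-form request to the best-matching landmark
    by cosine similarity in the shared embedding space."""
    vocab = sorted({w for d in landmarks.values() for w in d.lower().split()}
                   | set(utterance.lower().split()))
    u = embed(utterance, vocab)
    scores = {}
    for name, desc in landmarks.items():
        v = embed(desc, vocab)
        scores[name] = u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8)
    return max(scores, key=scores.get)
```

In the real system the embeddings would come from a visual-language model rather than word counts, but the retrieval structure is the same: score each landmark against the utterance and navigate to the best match.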

* Webpage and videos are at https://sites.google.com/view/dragon-wayfinding/home 

User-Friendly Safety Monitoring System for Manufacturing Cobots

Jul 04, 2023
Ye-Ji Mun, Zhe Huang, Haonan Chen, Yilong Niu, Haoyuan You, D. Livingston McPherson, Katherine Driggs-Campbell

Collaborative robots are being increasingly utilized in industrial production lines due to their efficiency and accuracy. However, the close proximity between humans and robots can pose safety risks because of the robot's high-speed movements and powerful forces. To address this, we developed a vision-based safety monitoring system that creates a 3D reconstruction of the collaborative scene. Our system records human-robot interaction data in real time and reproduces virtual replicas in a simulator for offline analysis. The objective is to provide workers with a user-friendly visualization tool for reviewing performance and diagnosing failures, thereby enhancing safety in manufacturing settings.

* 2 pages, 2 figures 

Dynamic-Resolution Model Learning for Object Pile Manipulation

Jun 30, 2023
Yixuan Wang, Yunzhu Li, Katherine Driggs-Campbell, Li Fei-Fei, Jiajun Wu

Dynamics models learned from visual observations have been shown to be effective in various robotic manipulation tasks. One of the key questions in learning such dynamics models is what scene representation to use. Prior works typically assume a representation at a fixed dimension or resolution, which may be inefficient for simple tasks and ineffective for more complicated ones. In this work, we investigate how to learn dynamic, adaptive representations at different levels of abstraction to achieve the optimal trade-off between efficiency and effectiveness. Specifically, we construct dynamic-resolution particle representations of the environment and learn a unified dynamics model using graph neural networks (GNNs) that allows continuous selection of the abstraction level. At test time, the agent can adaptively determine the optimal resolution at each model-predictive control (MPC) step. We evaluate our method on object pile manipulation, a task commonly encountered in cooking, agriculture, manufacturing, and pharmaceutical applications. Through comprehensive evaluations in both simulation and the real world, we show that our method achieves significantly better performance than state-of-the-art fixed-resolution baselines at gathering, sorting, and redistributing granular object piles made of various materials such as coffee beans, almonds, and corn.
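The per-step resolution selection can be sketched as a small search inside the MPC loop: downsample the particle set to each candidate resolution, roll the (learned) dynamics forward, and keep the resolution/action pair with the lowest predicted task cost. Everything below is a hypothetical stand-in — a toy "gathering" dynamics and cost instead of the paper's GNN:

```python
import numpy as np

def rollout_cost(particles, gather_strength, resolution):
    """Stand-in for a learned dynamics rollout evaluated at a given particle
    resolution. `gather_strength` in [0, 1] pulls the pile toward its
    centroid; the task cost rewards a tight pile with little control effort."""
    idx = np.linspace(0, len(particles) - 1, resolution).astype(int)
    coarse = particles[idx]                        # dynamic-resolution subsample
    centroid = coarse.mean(axis=0)
    next_state = centroid + (1.0 - gather_strength) * (coarse - centroid)
    spread = float(np.var(next_state))             # gathering cost: pile spread
    effort = 0.1 * gather_strength                 # hypothetical control penalty
    return spread + effort

def mpc_step(particles, actions=(0.0, 0.5, 0.9), resolutions=(8, 32, 128)):
    """One MPC step: jointly pick the abstraction level and the action
    with the lowest predicted cost."""
    cost, res, act = min((rollout_cost(particles, a, r), r, a)
                         for r in resolutions for a in actions)
    return cost, res, act
```

In the actual method the resolution is selected continuously and the rollout is a unified GNN; the sketch only shows where that choice sits inside the planning loop.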

* Accepted to Robotics: Science and Systems (RSS) 2023. The first two authors contributed equally. Project Page: https://robopil.github.io/dyn-res-pile-manip 