Tianchen Ji

Learning Rewards and Skills to Follow Commands with A Data Efficient Visual-Audio Representation

Jan 23, 2023
Peixin Chang, Shuijing Liu, Tianchen Ji, Neeloy Chakraborty, D. Livingston McPherson, Katherine Driggs-Campbell

Building on recent advances in representation learning, we propose a novel framework for command-following robots with raw sensor inputs. Previous RL-based methods are either difficult to improve continuously after deployment or require a large number of new labels during fine-tuning. Motivated by the (self-)supervised contrastive learning literature, we propose a novel representation, named VAR++, that generates an intrinsic reward function for command-following robot tasks by associating images with sound commands. After the robot is deployed in a new domain, the representation can be updated intuitively and data-efficiently by non-experts, and the robot can fulfill sound commands without any hand-crafted reward functions. We demonstrate our approach on various sound types and robotic tasks, including navigation and manipulation with raw sensor inputs. In simulated experiments, we show that our system can continually self-improve in previously unseen scenarios with less newly labeled data while achieving better performance than previous methods.
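
The key idea lends itself to a compact sketch: paired image and sound-command embeddings are aligned with a contrastive loss, and their similarity then serves as an intrinsic reward. The PyTorch snippet below is a minimal illustration under assumed feature and embedding sizes, not the authors' implementation.

```python
# Minimal sketch of a visual-audio representation used as an intrinsic reward:
# matched image/sound pairs are pulled together with an InfoNCE-style loss,
# and at deployment the cosine similarity between the two embeddings is the
# reward. All module sizes and names are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VisualAudioRep(nn.Module):
    def __init__(self, img_dim=2048, audio_dim=128, embed_dim=64):
        super().__init__()
        self.img_encoder = nn.Sequential(
            nn.Linear(img_dim, 256), nn.ReLU(), nn.Linear(256, embed_dim))
        self.audio_encoder = nn.Sequential(
            nn.Linear(audio_dim, 256), nn.ReLU(), nn.Linear(256, embed_dim))

    def forward(self, img_feat, audio_feat):
        z_img = F.normalize(self.img_encoder(img_feat), dim=-1)
        z_aud = F.normalize(self.audio_encoder(audio_feat), dim=-1)
        return z_img, z_aud

def contrastive_loss(z_img, z_aud, temperature=0.1):
    # Matched image/sound pairs lie on the diagonal of the similarity matrix.
    logits = z_img @ z_aud.t() / temperature
    labels = torch.arange(z_img.size(0))
    return F.cross_entropy(logits, labels)

def intrinsic_reward(model, img_feat, audio_feat):
    # Reward is high when the current view matches the sound command.
    with torch.no_grad():
        z_img, z_aud = model(img_feat, audio_feat)
        return (z_img * z_aud).sum(dim=-1)  # cosine similarity
```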

Structural Attention-Based Recurrent Variational Autoencoder for Highway Vehicle Anomaly Detection

Jan 09, 2023
Neeloy Chakraborty, Aamir Hasan, Shuijing Liu, Tianchen Ji, Weihang Liang, D. Livingston McPherson, Katherine Driggs-Campbell

In autonomous driving, detection of abnormal driving behaviors is essential to ensure the safety of vehicle controllers. Prior works in vehicle anomaly detection have shown that modeling interactions between agents improves detection accuracy, but certain abnormal behaviors in which structured road information is paramount, such as wrong-way and off-road driving, remain poorly identified. We propose a novel unsupervised framework for highway anomaly detection, the Structural Attention-based Recurrent VAE (SABeR-VAE), which explicitly uses the structure of the environment to aid anomaly identification. Specifically, we use a vehicle self-attention module to learn the relations among vehicles on a road and a separate lane-vehicle attention module to model the importance of permissible lanes for trajectory prediction. Conditioned on the attention modules' outputs, a recurrent encoder-decoder architecture with a latent space propagated by a stochastic Koopman operator predicts the next states of vehicles. Our model is trained end-to-end to minimize prediction loss on normal vehicle behaviors and is deployed to detect anomalies in (ab)normal scenarios. By combining heterogeneous vehicle and lane information, SABeR-VAE and its deterministic variant, SABeR-AE, improve abnormal AUPR by 18% and 25%, respectively, on the simulated MAAD highway dataset. Furthermore, we show that the learned Koopman operator in SABeR-VAE enforces interpretable structure in the variational latent space. These results show that modeling environmental factors is essential for detecting a diverse set of anomalies in deployment.
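
A minimal sketch may help make the Koopman-propagated latent space concrete: a learned linear operator advances the latent state, and the prediction error on the decoded next state serves as the anomaly score. The snippet below is an illustrative, deterministic simplification with assumed module sizes, not the SABeR-VAE code.

```python
# Sketch of Koopman-style latent propagation for anomaly detection: the next
# latent state is obtained by applying a learned linear operator K, keeping
# latent dynamics linear and interpretable. Trained only on normal behaviors,
# the model flags trajectories with large prediction error as anomalous.
import torch
import torch.nn as nn

class KoopmanPredictor(nn.Module):
    def __init__(self, state_dim=4, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(state_dim, 32), nn.ReLU(), nn.Linear(32, latent_dim))
        self.K = nn.Linear(latent_dim, latent_dim, bias=False)  # Koopman operator
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, state_dim))

    def forward(self, state_t):
        z_t = self.encoder(state_t)
        z_next = self.K(z_t)          # linear propagation in latent space
        return self.decoder(z_next)   # predicted next state

def anomaly_score(model, state_t, state_next):
    # Large prediction error flags an anomalous trajectory.
    pred = model(state_t)
    return ((pred - state_next) ** 2).mean(dim=-1)
```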

* Published as a full paper in IFAAMAS International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2023 

Examining Audio Communication Mechanisms for Supervising Fleets of Agricultural Robots

Aug 22, 2022
Abhi Kamboj, Tianchen Ji, Katie Driggs-Campbell

Agriculture is facing a labor crisis, leading to increased interest in fleets of small, under-canopy robots (agbots) that can perform precise, targeted actions (e.g., crop scouting, weeding, fertilization) while being supervised remotely by human operators. However, farmers are not necessarily experts in robotics technology and will not adopt technologies that add to their workload or do not provide an immediate payoff. In this work, we explore methods for communication between a remote human operator and multiple agbots, and we examine the impact of audio communication on the operator's preferences and productivity. We develop a simulation platform where agbots are deployed across a field, randomly encounter failures, and call for help from the operator. As the agbots report errors, various audio communication mechanisms are tested to convey which robot failed and what type of failure occurred. The human is tasked with verbally diagnosing the failure while completing a secondary task. A user study was conducted to test three audio communication methods: earcons, single-phrase commands, and full-sentence communication. Each participant completed a survey to determine their preferences and each method's overall effectiveness. Our results suggest that the system using single phrases is perceived most positively by participants and may allow the human to complete the secondary task more efficiently. The code is available at: https://github.com/akamboj2/Agbot-Sim.
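
As a rough illustration of such a simulation loop, the sketch below has robots fail at random and formats an alert under each of the three communication modes; the failure types, probabilities, and message formats are hypothetical, not those of the released platform.

```python
# Hypothetical sketch of a fleet-supervision simulation step: each agbot
# fails with some probability, and the chosen audio mechanism determines
# how much information the alert carries. Values are illustrative only.
import random

FAILURES = ["stuck wheel", "low battery", "camera occluded"]

def make_alert(robot_id, failure, mode):
    if mode == "earcon":
        return f"<tone #{robot_id}>"                      # abstract sound only
    if mode == "single_phrase":
        return f"Robot {robot_id}: {failure}"             # short phrase
    return f"Robot {robot_id} has stopped because of a {failure}."  # sentence

def simulate_step(num_robots, mode, p_fail=0.05):
    alerts = []
    for robot_id in range(num_robots):
        if random.random() < p_fail:
            alerts.append(make_alert(robot_id, random.choice(FAILURES), mode))
    return alerts

print(simulate_step(num_robots=10, mode="single_phrase"))
```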

* Camera ready version for IEEE RO-MAN 2022 

Traversing Supervisor Problem: An Approximately Optimal Approach to Multi-Robot Assistance

May 03, 2022
Tianchen Ji, Roy Dong, Katherine Driggs-Campbell

The number of multi-robot systems deployed in field applications has increased dramatically over the years. Despite recent advances in navigation algorithms, autonomous robots often encounter challenging situations where the control policy fails and human assistance is required to resume robot tasks. Human-robot collaboration can help achieve high levels of autonomy, but monitoring and managing multiple robots at once remains a challenging problem for a single human supervisor. Our goal is to help the supervisor decide which robots to assist in which order so that team performance is maximized. We formulate the one-to-many supervision problem in uncertain environments as a dynamic graph traversal problem. We develop an approximation algorithm based on the profitable tour problem on a static graph to solve the original problem, and we bound and analyze the approximation error. Our case study on a simulated autonomous farm demonstrates superior team performance over baseline methods in task completion time and human working time, and shows that our method can be deployed in real time for robot fleets of moderate size.
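
To make the profit-versus-travel-cost trade-off concrete, the toy sketch below greedily orders robots by their reward-to-distance ratio. This simple heuristic, with made-up coordinates and rewards, only illustrates the trade-off that a profitable-tour-style formulation captures; it is not the paper's bounded approximation algorithm.

```python
# Toy illustration of the supervisor's trade-off: assisting each robot yields
# a reward but costs travel time, so a greedy heuristic can pick the next
# robot by reward per unit of travel distance. Data below are made up.
import math

robots = {  # robot id -> (x, y, reward for assisting)
    "A": (0.0, 5.0, 10.0),
    "B": (3.0, 1.0, 4.0),
    "C": (8.0, 2.0, 12.0),
}

def greedy_assist_order(start=(0.0, 0.0)):
    pos, order, remaining = start, [], dict(robots)
    while remaining:
        # Pick the robot with the best reward-to-distance ratio.
        best = max(remaining, key=lambda r: remaining[r][2] /
                   (math.dist(pos, remaining[r][:2]) + 1e-9))
        order.append(best)
        pos = remaining.pop(best)[:2]
    return order

print(greedy_assist_order())  # -> ['A', 'C', 'B'] for this made-up layout
```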

* RSS 2022 Camera Ready Version 

Proactive Anomaly Detection for Robot Navigation with Multi-Sensor Fusion

Apr 03, 2022
Tianchen Ji, Arun Narenthiran Sivakumar, Girish Chowdhary, Katherine Driggs-Campbell

Despite the rapid advancement of navigation algorithms, mobile robots often produce anomalous behaviors that can lead to navigation failures. The ability to detect such anomalous behaviors is a key component of modern robots seeking high levels of autonomy. Reactive anomaly detection methods identify anomalous task executions based on the current robot state and thus lack the ability to alert the robot before an actual failure occurs. Such an alert delay is undesirable due to the potential damage to both the robot and surrounding objects. We propose a proactive anomaly detection network (PAAD) for robot navigation in unstructured and uncertain environments. PAAD predicts the probability of future failure based on the planned motions from the predictive controller and the current observation from the perception module. Multi-sensor signals are fused effectively to provide robust anomaly detection in the presence of sensor occlusion, as seen in field environments. Our experiments on field robot data demonstrate superior failure identification performance compared with previous methods and show that our model can capture anomalous behaviors in real time while maintaining a low false detection rate in cluttered fields. Code, dataset, and video are available at https://github.com/tianchenji/PAAD.
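
The proactive formulation can be sketched compactly: planned future controls and current perception features are fused and mapped to a per-step failure probability over the planning horizon, so an alert can be raised before the failure occurs. The PyTorch snippet below is an illustrative sketch with assumed dimensions, not the released PAAD architecture.

```python
# Sketch of proactive failure prediction: fuse perception features with the
# controller's planned motions and output a failure probability for each
# future step of the horizon. Dimensions and module names are assumptions.
import torch
import torch.nn as nn

class ProactiveDetector(nn.Module):
    def __init__(self, percept_dim=128, control_dim=2, horizon=10):
        super().__init__()
        self.percept_net = nn.Sequential(nn.Linear(percept_dim, 64), nn.ReLU())
        self.control_net = nn.Sequential(
            nn.Linear(control_dim * horizon, 64), nn.ReLU())
        self.head = nn.Linear(128, horizon)  # one failure logit per future step

    def forward(self, percept_feat, planned_controls):
        # planned_controls: (batch, horizon, control_dim) from the controller
        h = torch.cat([self.percept_net(percept_feat),
                       self.control_net(planned_controls.flatten(1))], dim=-1)
        return torch.sigmoid(self.head(h))  # P(failure) at each future step

model = ProactiveDetector()
p_fail = model(torch.randn(1, 128), torch.randn(1, 10, 2))
print(p_fail.shape)  # torch.Size([1, 10])
```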

* Accepted by RA-L with ICRA 2022 option 

Multi-Modal Anomaly Detection for Unstructured and Uncertain Environments

Dec 15, 2020
Tianchen Ji, Sri Theja Vuppala, Girish Chowdhary, Katherine Driggs-Campbell

To achieve high levels of autonomy, modern robots require the ability to detect and recover from anomalies and failures with minimal human supervision. Multi-modal sensor signals can provide more information for such anomaly detection tasks; however, the fusion of high-dimensional and heterogeneous sensor modalities remains a challenging problem. We propose a deep neural network, the supervised variational autoencoder (SVAE), for failure identification in unstructured and uncertain environments. Our model leverages the representational power of the VAE to extract robust features from high-dimensional inputs for supervised learning tasks. The training objective unifies the generative and discriminative models, making learning a one-stage procedure. Our experiments on real field robot data demonstrate superior failure identification performance compared with baseline methods and show that our model learns interpretable representations. Videos of our results are available on our website: https://sites.google.com/illinois.edu/supervised-vae.
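
The one-stage objective can be sketched as a standard VAE term (reconstruction plus KL) combined with a supervised classification loss on the latent code, trained jointly. The snippet below is a minimal illustration with assumed sizes, not the authors' SVAE implementation.

```python
# Sketch of a supervised VAE objective: the ELBO (reconstruction + KL) and a
# classification loss on the latent code are minimized together, so feature
# learning and failure classification happen in one training stage.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SVAE(nn.Module):
    def __init__(self, input_dim=256, latent_dim=16, num_classes=4):
        super().__init__()
        self.enc = nn.Linear(input_dim, 64)
        self.mu = nn.Linear(64, latent_dim)
        self.logvar = nn.Linear(64, latent_dim)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, input_dim))
        self.clf = nn.Linear(latent_dim, num_classes)

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(z), self.clf(z), mu, logvar

def svae_loss(model, x, labels):
    recon, logits, mu, logvar = model(x)
    kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).mean()
    elbo = F.mse_loss(recon, x) + kl
    return elbo + F.cross_entropy(logits, labels)  # generative + discriminative
```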

* Conference on Robot Learning (CoRL) 2020 