Haruki Nishimura

Residual Q-Learning: Offline and Online Policy Customization without Value

Jun 15, 2023
Chenran Li, Chen Tang, Haruki Nishimura, Jean Mercat, Masayoshi Tomizuka, Wei Zhan

Imitation Learning (IL) is a widely used framework for learning imitative behavior from demonstrations. It is especially appealing for solving complex real-world tasks where handcrafting a reward function is difficult, or when the goal is to mimic human expert behavior. However, the learned imitative policy can only follow the behavior in the demonstration. When applying the imitative policy, we may need to customize its behavior to meet different requirements coming from diverse downstream tasks. Meanwhile, we still want the customized policy to maintain its imitative nature. To this end, we formulate a new problem setting called policy customization. It defines the learning task as training a policy that inherits the characteristics of the prior policy while satisfying some additional requirements imposed by a target downstream task. We propose a novel and principled approach to interpret and determine the trade-off between the two task objectives. Specifically, we formulate the customization problem as a Markov Decision Process (MDP) with a reward function that combines 1) the inherent reward of the demonstration; and 2) the add-on reward specified by the downstream task. We propose a novel framework, Residual Q-learning, which can solve the formulated MDP by leveraging the prior policy without knowing the inherent reward or value function of the prior policy. We derive a family of residual Q-learning algorithms that can realize offline and online policy customization, and show that the proposed algorithms can effectively accomplish policy customization tasks in various environments.

* The first two authors contributed equally 
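
As a rough illustration of the core mechanism, the sketch below shows a single tabular residual soft Q-learning backup, assuming a max-entropy prior policy whose log-probabilities stand in for its unknown soft advantage. The table shapes and hyperparameters are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Minimal tabular sketch of a residual soft Q-learning backup, assuming a
# max-entropy prior policy whose log-probabilities stand in for its unknown
# soft advantage. Table shapes and hyperparameters are illustrative.

def residual_soft_q_update(Q_res, s, a, r_add, s_next, log_pi_prior,
                           alpha=0.1, gamma=0.99, lr=0.5):
    """One backup of the residual Q-table on the add-on reward only.

    Q_res        : (n_states, n_actions) residual Q-table
    log_pi_prior : (n_states, n_actions) log-probabilities of the prior
    """
    # The combined policy's soft value: the prior contributes only through
    # alpha * log_pi_prior, so its inherent reward is never needed.
    combined = Q_res[s_next] + alpha * log_pi_prior[s_next]
    v_next = alpha * np.log(np.sum(np.exp(combined / alpha)))
    target = r_add + gamma * v_next
    Q_res[s, a] += lr * (target - Q_res[s, a])
    return Q_res
```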

In-Distribution Barrier Functions: Self-Supervised Policy Filters that Avoid Out-of-Distribution States

Jan 27, 2023
Fernando Castañeda, Haruki Nishimura, Rowan McAllister, Koushil Sreenath, Adrien Gaidon

Learning-based control approaches have shown great promise in performing complex tasks directly from high-dimensional perception data for real robotic systems. Nonetheless, the learned controllers can behave unexpectedly if the trajectories of the system deviate from the training data distribution, which can compromise safety. In this work, we propose a control filter that wraps any reference policy and effectively encourages the system to stay in-distribution with respect to offline-collected safe demonstrations. Our methodology is inspired by Control Barrier Functions (CBFs), which are model-based tools from the nonlinear control literature that can be used to construct minimally invasive safe policy filters. While existing methods based on CBFs require a known low-dimensional state representation, our proposed approach is directly applicable to systems that rely solely on high-dimensional visual observations by learning in a latent state-space. We demonstrate that our method is effective for two different visuomotor control tasks in simulation environments, including both top-down and egocentric view settings.
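
A minimal sketch of the filtering idea, assuming a learned latent barrier function h (non-negative on in-distribution states) and a latent dynamics model; the finite candidate search and all names here are illustrative simplifications, not the paper's optimization.

```python
import numpy as np

# Sketch of a barrier-style in-distribution filter, assuming a learned
# latent barrier h (h >= 0 on in-distribution states) and a latent
# dynamics model. The finite candidate search is a simplification.

def filter_action(z, a_ref, h, latent_dynamics, candidates, alpha=0.9):
    """Return the candidate action closest to the reference a_ref that
    satisfies a discrete-time CBF-like condition h(z') >= alpha * h(z)."""
    best, best_dist = a_ref, np.inf
    for a in candidates:
        z_next = latent_dynamics(z, a)       # learned latent-space model
        if h(z_next) >= alpha * h(z):        # stay in-distribution
            d = np.linalg.norm(np.asarray(a) - np.asarray(a_ref))
            if d < best_dist:
                best, best_dist = a, d
    return best  # falls back to a_ref if no candidate is certified safe
```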

RAP: Risk-Aware Prediction for Robust Planning

Oct 04, 2022
Haruki Nishimura, Jean Mercat, Blake Wulfe, Rowan McAllister, Adrien Gaidon

Robust planning in interactive scenarios requires predicting the uncertain future to make risk-aware decisions. Unfortunately, due to long-tail safety-critical events, the risk is often underestimated by finite-sampling approximations of probabilistic motion forecasts. This can lead to overconfident and unsafe robot behavior, even with robust planners. Instead of assuming the full prediction coverage that robust planners require, we propose to make prediction itself risk-aware. We introduce a new prediction objective to learn a risk-biased distribution over trajectories, so that risk evaluation simplifies to an expected cost estimation under this biased distribution. This reduces the sample complexity of the risk estimation during online planning, which is needed for safe real-time performance. Evaluation results in a didactic simulation environment and on a real-world dataset demonstrate the effectiveness of our approach.

* 23 pages, 14 figures, 3 tables. First two authors contributed equally. Conference on Robot Learning (CoRL) 2022 (oral) 
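
The following didactic sketch contrasts the two estimation routes the abstract describes: a tail-risk measure (CVaR is used here for concreteness) estimated from many unbiased forecast samples, versus a plain mean over a few samples from a risk-biased forecaster. `predict_unbiased` and `predict_risk_biased` are hypothetical stand-ins for trajectory samplers.

```python
import numpy as np

def cvar_from_samples(costs, alpha=0.95):
    """Monte Carlo CVaR: mean of the worst (1 - alpha) fraction of costs."""
    costs = np.sort(np.asarray(costs))
    tail = costs[int(np.ceil(alpha * len(costs))):]
    return tail.mean() if len(tail) else costs[-1]

def risk_unbiased(predict_unbiased, cost_fn, n=1000, alpha=0.95):
    # Tail-risk estimation from an ordinary forecaster needs many samples.
    return cvar_from_samples([cost_fn(t) for t in predict_unbiased(n)], alpha)

def risk_biased(predict_risk_biased, cost_fn, n=32):
    # Under a risk-biased forecaster, risk evaluation collapses to a plain
    # expected cost, so far fewer samples suffice at planning time.
    return float(np.mean([cost_fn(t) for t in predict_risk_biased(n)]))
```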

RAT iLQR: A Risk Auto-Tuning Controller to Optimally Account for Stochastic Model Mismatch

Oct 16, 2020
Haruki Nishimura, Negar Mehr, Adrien Gaidon, Mac Schwager

Successful robotic operation in stochastic environments relies on accurate characterization of the underlying probability distributions, yet this characterization is often imperfect due to limited knowledge. This work presents a control algorithm that is capable of handling such distributional mismatches. Specifically, we propose a novel nonlinear MPC for distributionally robust control, which plans locally optimal feedback policies against a worst-case distribution within a given KL divergence bound from a Gaussian distribution. Leveraging the mathematical equivalence between distributionally robust control and risk-sensitive optimal control, our framework also provides an algorithm to dynamically adjust the risk-sensitivity level online for risk-sensitive control. The benefits of distributional robustness and automatic risk-sensitivity adjustment are demonstrated in a dynamic collision avoidance scenario where the predictive distribution of human motion is erroneous.

* Submitted to IEEE Robotics and Automation Letters with ICRA 2021 Option 
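
A small sketch of the duality the abstract invokes: the worst-case expected cost over all distributions within a KL radius d of the nominal model equals a one-dimensional minimization whose minimizer acts as the automatically tuned risk-sensitivity level. The sample-based form below is an illustrative simplification of the paper's Gaussian setting.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Dual form of KL-constrained distributionally robust evaluation:
#   sup_{KL(q||p) <= d} E_q[cost] = min_{theta > 0} theta*d
#                                   + theta * log E_p[exp(cost / theta)],
# so the KL bound d implicitly selects an optimal risk-sensitivity theta.

def worst_case_cost_and_risk_param(cost_samples, d):
    """Return the worst-case expected cost and the minimizing theta
    (the auto-tuned risk-sensitivity) from nominal-model cost samples."""
    c = np.asarray(cost_samples, dtype=float)

    def dual(theta):
        z = c / theta
        m = z.max()
        log_mgf = m + np.log(np.mean(np.exp(z - m)))  # stable log E[exp]
        return theta * d + theta * log_mgf

    res = minimize_scalar(dual, bounds=(1e-3, 1e3), method="bounded")
    return res.fun, res.x
```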

Risk-Sensitive Sequential Action Control with Multi-Modal Human Trajectory Forecasting for Safe Crowd-Robot Interaction

Sep 12, 2020
Haruki Nishimura, Boris Ivanovic, Adrien Gaidon, Marco Pavone, Mac Schwager

This paper presents a novel online framework for safe crowd-robot interaction based on risk-sensitive stochastic optimal control, wherein the risk is modeled by the entropic risk measure. The sampling-based model predictive controller relies on mode insertion gradient optimization for this risk measure, as well as on Trajectron++, a state-of-the-art generative model that produces multimodal probabilistic trajectory forecasts for multiple interacting agents. Our modular approach decouples the crowd-robot interaction into learning-based prediction and model-based control, which is advantageous compared to end-to-end policy learning methods in that it allows the robot's desired behavior to be specified at run time. In particular, we show that the robot exhibits diverse interaction behavior by varying the risk sensitivity parameter. A simulation study and a real-world experiment show that the proposed online framework can accomplish safe and efficient navigation while avoiding collisions with more than 50 humans in the scene.

* To appear in 2020 IEEE/RSJ IROS 
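
For concreteness, the sketch below evaluates the entropic risk measure R_sigma(J) = (1/sigma) log E[exp(sigma J)] on sampled costs; varying sigma interpolates from risk-neutral expected cost toward worst-case behavior, which is the knob the abstract refers to. The cost samples are synthetic.

```python
import numpy as np

def entropic_risk(cost_samples, sigma):
    """Entropic risk R_sigma(J) = (1/sigma) * log E[exp(sigma * J)],
    computed with a numerically stable log-sum-exp."""
    c = np.asarray(cost_samples, dtype=float)
    if abs(sigma) < 1e-8:
        return c.mean()                      # risk-neutral limit
    z = sigma * c
    m = z.max()
    return (m + np.log(np.mean(np.exp(z - m)))) / sigma

# Synthetic costs standing in for costs evaluated on forecast samples:
costs = np.random.default_rng(0).normal(1.0, 0.5, size=256)
for sigma in (1e-9, 1.0, 5.0):
    print(f"sigma={sigma:g}  risk={entropic_risk(costs, sigma):.3f}")
```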

SACBP: Belief Space Planning for Continuous-Time Dynamical Systems via Stochastic Sequential Action Control

Feb 26, 2020
Haruki Nishimura, Mac Schwager

We propose a novel belief space planning technique for continuous dynamics by viewing the belief system as a hybrid dynamical system with time-driven switching. Our approach is based on the perturbation theory of differential equations and extends Sequential Action Control to stochastic belief dynamics. The resulting algorithm, which we name SACBP, does not require discretization of spaces or time and synthesizes control signals in near real-time. SACBP is an anytime algorithm that can handle general parametric Bayesian filters under certain assumptions. We demonstrate the effectiveness of our approach in an active sensing scenario and a model-based Bayesian reinforcement learning problem. In these challenging problems, we show that the algorithm significantly outperforms other existing solution techniques, including approximate dynamic programming and local trajectory optimization.

* Submitted to the International Journal of Robotics Research (IJRR)
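
A highly simplified sketch of the control synthesis step: the mode insertion gradient gives the first-order change in cost from briefly swapping the nominal control for a candidate, and Sequential-Action-Control-style methods pick the candidate with the most negative value. `belief_dynamics` and the adjoint `rho` are assumed inputs, and the real algorithm operates in continuous time.

```python
import numpy as np

def mode_insertion_gradient(b, u, u_nom, belief_dynamics, rho):
    """First-order change in cost, dJ/dlambda, from inserting control u in
    place of the nominal u_nom for an infinitesimal duration lambda:
        dJ/dlambda = rho^T (f(b, u) - f(b, u_nom))."""
    return rho @ (belief_dynamics(b, u) - belief_dynamics(b, u_nom))

def sac_control(b, u_nom, candidates, belief_dynamics, rho):
    """Pick the candidate with the most negative mode insertion gradient,
    i.e., the largest predicted first-order cost reduction; keep the
    nominal control if no candidate improves on it."""
    grads = [mode_insertion_gradient(b, u, u_nom, belief_dynamics, rho)
             for u in candidates]
    i = int(np.argmin(grads))
    return candidates[i] if grads[i] < 0 else u_nom
```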