In human-robot interaction (HRI) systems, such as autonomous vehicles, understanding and representing human behavior is important. Human behavior is naturally rich and diverse. Cost/reward learning, as an efficient way to learn and represent human behavior, has been successfully applied in many domains. Most traditional inverse reinforcement learning (IRL) algorithms, however, cannot adequately capture the diversity of human behavior, since they assume that all behavior in a given dataset is generated by a single cost function. In this paper, we propose a probabilistic IRL framework that directly learns a distribution of cost functions in the continuous domain. Evaluations are conducted on both synthetic data and real human driving data. Both the quantitative and subjective results show that our proposed framework can better express diverse human driving behaviors and can extract different driving styles that match what human participants interpret in our user study.
The performance achieved with traditional model-based control system design approaches typically relies heavily upon accurate modeling of the motion dynamics. However, modeling the true dynamics of present-day, increasingly complex systems can be an extremely challenging task, and the practical approximations this usually necessitates often leave the automation system operating in a non-optimal condition. This problem is greatly aggravated in a multi-axis magnetically levitated nanopositioning system, where the fully floating behavior and multi-axis coupling make extremely accurate identification of the motion dynamics largely impossible. On the other hand, many related industrial automation applications, e.g., the scanning process with the maglev system, involve repetitive motions that generate a large amount of motion data under non-optimal conditions. These motion data essentially contain rich information; the possibility therefore exists to develop an intelligent automation system that learns from these motion data and drives the system towards optimal operation in a data-driven manner. Along this line, this paper proposes a data-driven controller optimization approach that learns from past non-optimal motion data to iteratively improve the motion control performance. Specifically, a novel data-driven multi-objective optimization approach is proposed that automatically estimates the gradient and Hessian purely from the measured motion data; the multi-objective cost function is suitably designed to take into account both smooth and accurate trajectory tracking. Experiments are then conducted on the maglev nanopositioning system to demonstrate the effectiveness of the proposed method, and the results show rather clearly the practical appeal of our methodology for related complex robotic systems with no accurate model available.
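The abstract does not specify the estimator, but one generic way to obtain a gradient and Hessian purely from measured cost data (a sketch under that assumption, not the paper's exact method) is to fit a local quadratic surrogate to past runs by least squares; the function name and quadratic-model choice here are illustrative:

```python
import numpy as np

def fit_gradient_hessian(params, costs):
    """Fit J(p) ~ c0 + g.(p - p0) + 0.5 (p - p0)' H (p - p0) by least squares.

    params: (N, d) controller parameters tried in past (non-optimal) runs
    costs:  (N,)   measured cost of each run
    Returns the estimated gradient g (d,) at the mean parameter p0 and the
    symmetric Hessian estimate H (d, d). Illustrative surrogate-model sketch.
    """
    params = np.asarray(params, float)
    costs = np.asarray(costs, float)
    p0 = params.mean(axis=0)
    dp = params - p0
    n, d = dp.shape
    # Design matrix: constant, linear terms dp_i, quadratic terms dp_i*dp_j (i<=j)
    cols = [np.ones(n)] + [dp[:, i] for i in range(d)]
    quad_idx = [(i, j) for i in range(d) for j in range(i, d)]
    cols += [dp[:, i] * dp[:, j] for i, j in quad_idx]
    X = np.stack(cols, axis=1)
    coef, *_ = np.linalg.lstsq(X, costs, rcond=None)
    g = coef[1:1 + d]
    H = np.zeros((d, d))
    for k, (i, j) in enumerate(quad_idx):
        c = coef[1 + d + k]
        if i == j:
            H[i, i] = 2.0 * c           # fitted coef of dp_i^2 is 0.5*H_ii
        else:
            H[i, j] = H[j, i] = c       # fitted coef of dp_i*dp_j is H_ij
    return g, H
```

With such estimates in hand, one data-driven update would be a (damped) Newton step, `p_next = p0 - np.linalg.solve(H, g)`, repeated over successive batches of repetitive-motion data.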
Computer vision has achieved great success using standardized image representations -- pixel arrays, and the corresponding deep learning operators -- convolutions. In this work, we challenge this paradigm: we instead (a) represent images as a set of visual tokens and (b) apply visual transformers to find relationships between visual semantic concepts. Given an input image, we dynamically extract a set of visual tokens from the image to obtain a compact representation for high-level semantics. We then use visual transformers to operate over the visual tokens to densely model relationships between them. We find that this paradigm of token-based image representation and processing drastically outperforms its convolutional counterparts on image classification and semantic segmentation. To demonstrate the power of this approach on ImageNet classification, we use ResNet as a convenient baseline and use visual transformers to replace the last stage of convolutions. This reduces the stage's MACs by up to 6.9x, while attaining up to 4.53 points higher top-1 accuracy. For semantic segmentation, we use a visual-transformer-based FPN (VT-FPN) module to replace a convolution-based FPN, requiring 6.5x fewer MACs while achieving up to 0.35 points higher mIoU on LIP and COCO-stuff.
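The token-extraction step can be sketched in a few lines of NumPy: spatial attention maps pool pixel features into a handful of token vectors, which a transformer then processes. This is a minimal sketch of the idea, with a hypothetical learned projection `W_attn`, not the paper's exact tokenizer:

```python
import numpy as np

def extract_visual_tokens(feature_map, W_attn):
    """Pool a flattened pixel feature map into a small set of visual tokens.

    feature_map: (HW, C) flattened spatial features from a CNN backbone
    W_attn:      (C, L)  projection producing one spatial attention map per
                 token (hypothetical parameter; learned in a real model)
    Returns tokens of shape (L, C): each token is a spatial-attention-weighted
    average of the pixel features, summarizing one semantic concept.
    """
    logits = feature_map @ W_attn                 # (HW, L)
    logits -= logits.max(axis=0, keepdims=True)   # numerical stability
    attn = np.exp(logits)
    attn /= attn.sum(axis=0, keepdims=True)       # softmax over spatial positions
    return attn.T @ feature_map                   # (L, C)
```

Because the transformer then operates over only L tokens with L much smaller than HW, its cost no longer scales with the spatial grid, which is where the MAC savings relative to convolutions come from.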
In this paper, we propose a novel form of loss function to increase the performance of LiDAR-based 3D object detection and to obtain more explainable and convincing uncertainty estimates for the predictions. The loss function is designed using corner transformation and uncertainty modeling. With the new loss function, our method shows up to a 15% increase in Average Precision (AP) on the val split of the KITTI dataset compared with a baseline using a simple L1 loss. In studying the characteristics of the predicted uncertainties, we find that more accurate bounding-box predictions are generally accompanied by lower uncertainty. The distribution of corner uncertainties agrees with the distribution of the point cloud in the bounding box: corners with denser observed points have lower uncertainty. Moreover, our method also learns the constraints imposed by the cuboid geometry of the bounding box in uncertainty prediction. Finally, we propose an efficient Bayesian updating method to recover the uncertainty of the original bounding-box parameters, which can help provide probabilistic results for the planning module.
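The abstract does not give the exact loss, but a common formulation of uncertainty-aware corner regression (an assumption here, not necessarily the paper's) models each corner coordinate with a Laplace likelihood: the network predicts a scale alongside each corner, and the loss is the negative log-likelihood, i.e. an uncertainty-attenuated L1:

```python
import numpy as np

def corner_laplace_nll(pred_corners, log_scale, gt_corners):
    """Uncertainty-weighted corner regression loss (illustrative sketch).

    pred_corners: (N, 8, 3) predicted 3D box corners
    log_scale:    (N, 8, 3) predicted log of the Laplace scale b per coordinate
    gt_corners:   (N, 8, 3) ground-truth corners
    NLL of a Laplace likelihood (constants dropped): |err| / b + log b.
    A large predicted b down-weights the residual but pays a log-b penalty,
    so the network only reports high uncertainty where errors really are large.
    """
    b = np.exp(log_scale)
    return np.mean(np.abs(pred_corners - gt_corners) / b + log_scale)
```

With `log_scale = 0` this reduces exactly to the plain L1 baseline, which makes the comparison in the abstract a clean ablation of the uncertainty term.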
In the past decades, we have witnessed significant progress in the domain of autonomous driving. Advanced techniques based on optimization and reinforcement learning (RL) have become increasingly powerful at solving the forward problem: given designed reward/cost functions, how should we optimize them to obtain driving policies that interact with the environment safely and efficiently? Such progress has raised another, equally important question: \emph{what should we optimize}? Instead of manually specifying the reward functions, it is desirable to extract what human drivers try to optimize from real traffic data and assign it to autonomous vehicles, enabling more naturalistic and transparent interaction between humans and intelligent agents. To address this issue, we present an efficient sampling-based maximum-entropy inverse reinforcement learning (IRL) algorithm in this paper. Different from existing IRL algorithms, by introducing an efficient continuous-domain trajectory sampler, the proposed algorithm can directly learn reward functions in the continuous domain while considering the uncertainties in demonstrated trajectories from human drivers. We evaluate the proposed algorithm on real driving data, including both non-interactive and interactive scenarios. The experimental results show that the proposed algorithm achieves more accurate prediction performance, with faster convergence and better generalization, than other baseline IRL algorithms.
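The core update of sampling-based maximum-entropy IRL can be sketched under the standard assumption (not stated in the abstract) that the cost is linear in trajectory features and that trajectory probabilities P(xi) proportional to exp(-c(xi)) are approximated over samples from a continuous-domain trajectory sampler:

```python
import numpy as np

def maxent_irl_grad(theta, demo_feat, sample_feats):
    """One gradient of the max-entropy IRL negative log-likelihood (sketch).

    Assumes a cost c(xi) = theta . f(xi), linear in trajectory features, with
    P(xi) proportional to exp(-c(xi)) approximated over N sampled trajectories.
    theta:        (F,)   cost weights
    demo_feat:    (F,)   features of the demonstrated trajectory
    sample_feats: (N, F) features of trajectories drawn from the sampler
    Gradient of the NLL: f(demo) - E_p[f]. Descending it raises the relative
    cost of sampled behavior that differs from the demonstration.
    """
    costs = sample_feats @ theta                 # (N,)
    w = np.exp(-(costs - costs.min()))           # stabilized exp(-cost)
    w /= w.sum()                                 # softmax over the samples
    expected_feat = w @ sample_feats             # E_p[f], shape (F,)
    return demo_feat - expected_feat
```

A learning loop would simply iterate `theta -= lr * maxent_irl_grad(theta, f_demo, f_samples)`, resampling trajectories around each demonstration; the quality of the continuous-domain sampler is what determines how well E_p[f] is estimated.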
Rectifier (ReLU) deep neural networks (DNNs) and their connection with piecewise affine (PWA) functions are analyzed. The paper studies the possibility of representing the explicit state feedback policy of model predictive control (MPC) as a ReLU DNN, and vice versa. The complexity and architecture of such DNNs are examined through several theorems and discussions. An approximate method is developed for identifying the input-space partition of a ReLU net, which results in a PWA function over polyhedral regions. Inverse multiparametric linear or quadratic programs (mp-LP or mp-QP), which deal with reconstructing the constraints and cost function from a given PWA function, are also studied.
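The ReLU-PWA correspondence can be made concrete: fixing the activation pattern of a ReLU network yields an affine map that is exact throughout the polyhedral region sharing that pattern. A minimal sketch with hypothetical weights, for a one-hidden-layer net:

```python
import numpy as np

def relu_net(x, W1, b1, W2, b2):
    """One-hidden-layer ReLU network: f(x) = W2 relu(W1 x + b1) + b2."""
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def local_affine(x, W1, b1, W2, b2):
    """Affine map (A, c) with f(y) = A y + c for every y in the polyhedral
    region of input space that shares x's ReLU activation pattern."""
    s = (W1 @ x + b1 > 0).astype(float)   # 0/1 activation pattern at x
    A = (W2 * s) @ W1                     # W2 diag(s) W1
    c = (W2 * s) @ b1 + b2
    return A, c
```

This is exactly the structure exploited when relating a ReLU net to an explicit MPC law, which is itself a PWA function over polyhedral regions of the state space: each activation pattern plays the role of one critical region.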
In this paper, we continue our prior work on using imitation learning (IL) and model-free reinforcement learning (RL) to learn driving policies for autonomous driving in urban scenarios by introducing a model-based RL method to drive the autonomous vehicle in the CARLA urban driving simulator. Although IL and model-free RL methods have proved capable of solving many challenging tasks, including playing video games, controlling robots, and, in our prior work, urban driving, their low sample efficiency greatly limits their application to actual autonomous driving. In this work, we develop a model-based RL algorithm based on guided policy search (GPS) for urban driving tasks. The algorithm iteratively learns a parameterized dynamic model to approximate the complex and interactive driving task, and optimizes the driving policy under this nonlinear approximate dynamic model. As a model-based RL approach applied to urban autonomous driving, GPS offers higher sample efficiency, better interpretability, and greater stability. We provide extensive experiments validating the effectiveness of the proposed method in learning robust driving policies for urban driving in CARLA. We also compare the proposed method with other policy search and model-free RL baselines, showing 100x better sample efficiency for the GPS-based method, which can moreover learn policies for harder tasks that the baselines can hardly solve.
Reinforcement learning methods have achieved great success in training control policies for various automation tasks. However, a main obstacle to the wider application of reinforcement learning in practical automation is that the training process is hard and the pretrained policy networks are hardly reusable in other, similar cases. To address this problem, we propose the cascade attribute network (CAN), which utilizes its hierarchical structure to decompose a complicated control policy in terms of the requirement constraints, which we call attributes, encoded in the control tasks. We validate the effectiveness of the proposed method on two robot control scenarios with various add-on attributes. For control tasks with more than one add-on attribute, the CAN can provide ideal control policies in a zero-shot manner by directly assembling the attribute modules in cascade.