
"autonomous cars": models, code, and papers

Do You Want Your Autonomous Car To Drive Like You?

Feb 05, 2018
Chandrayee Basu, Qian Yang, David Hungerman, Mukesh Singhal, Anca D. Dragan

With progress in enabling autonomous cars to drive safely on the road, it is time to start asking how they should be driving. A common answer is that they should adopt their users' driving style. This assumes that users want their autonomous cars to drive like they drive: aggressive drivers want aggressive cars, defensive drivers want defensive cars. In this paper, we put that assumption to the test. We find that users tend to prefer a significantly more defensive driving style than their own. Interestingly, they prefer the style they think is their own, even though their actual driving style tends to be more aggressive. We also find that preferences do depend on the specific driving scenario, opening the door for new ways of learning driving style preference.

* 8 pages, 7 figures, HRI 2017 
  

A Dispersed Federated Learning Framework for 6G-Enabled Autonomous Driving Cars

May 20, 2021
Latif U. Khan, Yan Kyaw Tun, Madyan Alsenwi, Muhammad Imran, Zhu Han, Choong Seon Hong

Sixth-Generation (6G)-based Internet of Everything applications (e.g., autonomous driving cars) have attracted remarkable interest. Autonomous driving cars using federated learning (FL) can enable a range of smart services. Although FL performs distributed machine learning model training without requiring device data to be moved to a centralized server, it has its own implementation challenges, such as robustness, centralized-server security, communication resource constraints, and privacy leakage due to the ability of a malicious aggregation server to infer sensitive information about end-devices. To address these limitations, a dispersed federated learning (DFL) framework for autonomous driving cars is proposed to offer robust, communication resource-efficient, and privacy-aware learning. A mixed-integer non-linear programming (MINLP) optimization problem is formulated to jointly minimize the loss in federated learning model accuracy due to packet errors and the transmission latency. Due to the NP-hard and non-convex nature of the formulated MINLP problem, we propose a Block Successive Upper-bound Minimization (BSUM)-based solution. Furthermore, the proposed scheme is compared with three baseline schemes. Extensive numerical results are provided to show the validity of the proposed BSUM-based scheme.
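
The BSUM step the authors rely on alternately minimizes a locally tight upper bound of the objective over one block of variables at a time. As a rough illustration only, and not the paper's MINLP formulation, the following toy sketch applies a proximal upper bound to each block of a simple nonconvex function (Himmelblau's function is a stand-in objective):

```python
# Toy sketch of Block Successive Upper-bound Minimization (BSUM), not the
# paper's MINLP: each step minimizes a proximal upper bound of the objective
# over one block while the other block is held fixed.
import numpy as np
from scipy.optimize import minimize_scalar

def f(x, y):
    # Himmelblau's function: a simple nonconvex stand-in objective.
    return (x**2 + y - 11)**2 + (x + y**2 - 7)**2

def bsum(x0=0.0, y0=0.0, rho=1.0, iters=50):
    x, y = x0, y0
    for _ in range(iters):
        # Block 1: minimize f(., y) + (rho/2)(. - x)^2, tight at the current x.
        x = minimize_scalar(lambda u: f(u, y) + 0.5 * rho * (u - x)**2).x
        # Block 2: the same proximal upper bound, now over y.
        y = minimize_scalar(lambda v: f(x, v) + 0.5 * rho * (v - y)**2).x
    return x, y, f(x, y)

print(bsum())  # typically converges to one of Himmelblau's minima (f close to 0)
```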

  

Multi-Agent Car Parking using Reinforcement Learning

Jun 22, 2022
Omar Tanner

As the autonomous driving industry grows, so does the potential interaction of groups of autonomous cars. Combined with advances in Artificial Intelligence and simulation, such groups can be simulated, and safety-critical models for controlling the cars within them can be learned. This study applies reinforcement learning to the problem of multi-agent car parking, where groups of cars aim to park themselves efficiently while remaining safe and rational. Utilising robust tools and machine learning frameworks, we design and implement a flexible car parking environment in the form of a Markov decision process with independent learners, exploiting multi-agent communication. We implement a suite of tools to perform experiments at scale, obtaining models that park up to 7 cars with over a 98.1% success rate, significantly beating existing single-agent models. We also obtain several results relating to competitive and collaborative behaviours exhibited by the cars in our environment, with varying densities and levels of communication. Notably, we discover a form of collaboration that cannot arise without competition, and a 'leaky' form of collaboration whereby agents collaborate without sufficient state. Such work has numerous potential applications in the autonomous driving and fleet management industries, and provides several useful techniques and benchmarks for the application of reinforcement learning to multi-agent car parking.
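
The abstract describes independent learners sharing one parking environment, but the full MDP is not given. The following is a minimal hedged sketch of the independent-learner idea on a hypothetical one-shot spot-selection game, not the paper's environment, observations, or reward design:

```python
# Minimal sketch of independent Q-learners (not the paper's environment):
# each car picks a parking spot; collisions are penalized, unique spots rewarded.
import numpy as np

rng = np.random.default_rng(0)
n_cars, n_spots = 3, 4
Q = np.zeros((n_cars, n_spots))          # one Q-table per (stateless) learner
eps, alpha = 0.1, 0.1

for episode in range(5000):
    # Each agent acts epsilon-greedily on its own Q-values only.
    acts = [rng.integers(n_spots) if rng.random() < eps else int(np.argmax(Q[i]))
            for i in range(n_cars)]
    # Reward: +1 for an uncontested spot, -1 for a collision with another car.
    rewards = [1.0 if acts.count(a) == 1 else -1.0 for a in acts]
    for i, (a, r) in enumerate(zip(acts, rewards)):
        Q[i, a] += alpha * (r - Q[i, a])  # bandit-style update (no next state)

print(np.argmax(Q, axis=1))  # learned spot assignment, typically collision-free
```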

  

Road Context-aware Intrusion Detection System for Autonomous Cars

Aug 02, 2019
Jingxuan Jiang, Chundong Wang, Sudipta Chattopadhyay, Wei Zhang

Security is of primary importance to vehicles. The viability of performing remote intrusions onto the in-vehicle network has been demonstrated. For unmanned autonomous cars, however, limited work has been done to detect intrusions, and existing intrusion detection systems (IDSs) have limitations against strong adversaries. In this paper, we consider the very nature of the autonomous car and leverage the road context to build a novel IDS, named Road context-aware IDS (RAIDS). When a computer-controlled car drives along continuous roads, road contexts and the genuine frames transmitted on the car's in-vehicle network should resemble a regular and intelligible pattern. RAIDS hence employs a lightweight machine learning model to extract road contexts from the sensory information (e.g., camera images and distance sensor values) that is used to generate control signals for maneuvering the car. With this ongoing road context, RAIDS validates the corresponding frames observed on the in-vehicle network. Anomalous frames that substantially deviate from the road context are discerned as intrusions. We have implemented a prototype of RAIDS with neural networks and conducted experiments on a Raspberry Pi with extensive datasets and meaningful intrusion cases. Evaluations show that RAIDS significantly outperforms a state-of-the-art IDS that does not use road context, achieving up to 99.9% accuracy with short response times.
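
RAIDS itself uses neural networks over camera images and distance readings; as a hedged illustration of the underlying consistency check only, the sketch below trains a simple classifier on synthetic (road context, steering frame) pairs and flags frames that deviate from the context. All data, features, and thresholds here are made up for illustration and are not the authors' code:

```python
# Hedged sketch of the road-context idea behind RAIDS (not the authors' code
# or data): genuine control frames should be consistent with the sensed road
# context, so a lightweight model over (context, frame) pairs can flag mismatches.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
curvature = rng.uniform(-1, 1, n)               # stand-in road-context signal
genuine = curvature + rng.normal(0, 0.05, n)    # steering consistent with context
forged = rng.uniform(-1, 1, n)                  # injected frames ignore context

# Feature: how far the observed steering frame deviates from the road context.
# (RAIDS instead extracts context from camera/distance sensors with a neural net.)
dev = np.abs(np.concatenate([genuine, forged]) -
             np.concatenate([curvature, curvature])).reshape(-1, 1)
labels = np.concatenate([np.zeros(n), np.ones(n)])   # 0 = genuine, 1 = intrusion

X_tr, X_te, y_tr, y_te = train_test_split(dev, labels, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))   # high on this synthetic data
```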

* This manuscript presents an intrusion detection system that makes use of road context for autonomous cars 
  

Integrating Imitation Learning with Human Driving Data into Reinforcement Learning to Improve Training Efficiency for Autonomous Driving

Nov 23, 2021
Heidi Lu

Two current methods used to train autonomous cars are reinforcement learning and imitation learning. This research develops a new learning methodology and systematic approach, in both a simulated and a smaller real-world environment, by integrating supervised imitation learning into reinforcement learning to make the RL training data collection process more effective and efficient. By combining the two methods, the proposed research successfully leverages the advantages of both RL and IL. First, a real mini-scale robot car was assembled and trained on a 6-foot-by-9-foot real-world track using imitation learning. During this process, a handle controller was used to drive the mini-scale robot car around the track by imitating a human expert driver, and the actions were manually recorded using Microsoft AirSim's API. This produced 331 accurate, human-like reward training samples. Then, an agent was trained in the Microsoft AirSim simulator using reinforcement learning for 6 hours, seeded with the 331 reward samples from the imitation learning stage. After the 6-hour training period, the mini-scale robot car was able to drive full laps around the 6-foot-by-9-foot track autonomously, whereas with pure RL training it was unable to complete a single lap around the track even after 30 hours of training. With 80% less training time, the new methodology produced significantly more average reward per hour. Thus, the new methodology saves a significant amount of training time and can be used to accelerate the adoption of RL in autonomous driving, which would help produce more efficient and better results in the long run when applied to real-life scenarios. Key Words: Reinforcement Learning (RL), Imitation Learning (IL), Autonomous Driving, Human Driving Data, CNN
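
The abstract does not include the training code. As a hedged sketch of the general "seed RL with imitation data" pattern it describes, the snippet below assumes a generic agent/environment interface (act, update, reset, step) and treats recorded expert transitions as ordinary replay data; the names and signatures are illustrative, not AirSim's API:

```python
# Hedged sketch of the IL-into-RL idea (not the paper's AirSim code): recorded
# expert transitions seed the replay buffer so early RL updates learn from
# human-like, rewarded behaviour instead of random exploration.
import random
from collections import deque

def seed_with_demonstrations(replay, demos):
    """demos: list of (state, action, reward, next_state, done) tuples,
    e.g. expert transitions recorded during the imitation learning stage."""
    replay.extend(demos)

def train(agent, env, demos, episodes=100, batch_size=64):
    replay = deque(maxlen=50_000)
    seed_with_demonstrations(replay, demos)      # IL data goes in first
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            action = agent.act(state)
            next_state, reward, done = env.step(action)
            replay.append((state, action, reward, next_state, done))
            # Minibatches mix demonstration and self-collected transitions.
            agent.update(random.sample(replay, min(batch_size, len(replay))))
            state = next_state
```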

* 17 pages, 5 figures 
  

An NCAP-like Safety Indicator for Self-Driving Cars

Apr 02, 2021
Jimy Cai Huang, Hanna Kurniawati

This paper proposes a mechanism to assess the safety of autonomous cars. It assesses the car's safety in scenarios where the car must avoid collision with an adversary. Core to this mechanism is a safety measure, called the Safe-Kamikaze Distance (SKD), which computes the average similarity between sets of safe adversary trajectories and kamikaze trajectories close to the safe trajectories. The kamikaze trajectories are generated using planning-under-uncertainty techniques, namely Partially Observable Markov Decision Processes, to account for the partially observed car policy from the point of view of the adversary. We found that SKD is inversely proportional to the upper bound on the probability that a small deformation changes a collision-free trajectory of the adversary into a colliding one. We perform systematic tests on a scenario where the adversary is a pedestrian crossing a single-lane road in front of the car being assessed, which is one of the scenarios in Euro NCAP's Vulnerable Road User (VRU) tests on Autonomous Emergency Braking. Simulation results on assessing cars with basic controllers, and a test on a machine-learning controller using a high-fidelity simulator, indicate that SKD is a promising measure of the safety of autonomous cars. Moreover, the time taken for each simulation test is under 11 seconds, enabling sufficient statistics for computing SKD from simulation to be generated on a quad-core desktop in less than 25 minutes.
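
The abstract does not define the similarity measure itself. The sketch below is a hedged stand-in that scores how far each safe adversary trajectory lies from its nearest kamikaze trajectory, which is consistent with the stated intuition that a larger SKD means a small deformation is less likely to turn a collision-free trajectory into a colliding one; the distance function is an assumption, not the paper's:

```python
# Hedged sketch of an SKD-style computation; the pointwise Euclidean distance
# between equal-length trajectories below is a stand-in, not the paper's measure.
import numpy as np

def traj_distance(t1, t2):
    # t1, t2: arrays of shape (T, 2) holding adversary (x, y) positions.
    return float(np.mean(np.linalg.norm(t1 - t2, axis=1)))

def skd(safe_trajs, kamikaze_trajs):
    # For each safe trajectory, take its closest kamikaze trajectory and
    # average those distances over the safe set. A larger value suggests that
    # small perturbations are less likely to turn a safe trajectory into a
    # colliding one, matching the inverse-proportionality stated above.
    return float(np.mean([min(traj_distance(s, k) for k in kamikaze_trajs)
                          for s in safe_trajs]))
```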

  

Learning How to Dynamically Route Autonomous Vehicles on Shared Roads

Sep 09, 2019
Daniel A. Lazar, Erdem Bıyık, Dorsa Sadigh, Ramtin Pedarsani

Road congestion induces significant costs across the world, and road network disturbances, such as traffic accidents, can cause highly congested traffic patterns. If a planner had control over the routing of all vehicles in the network, they could easily reverse this effect. In a more realistic scenario, we consider a planner that controls autonomous cars, which are a fraction of all present cars. We study a dynamic routing game, in which the route choices of autonomous cars can be controlled and the human drivers react selfishly and dynamically to the autonomous cars' actions. As the problem is prohibitively large, we use deep reinforcement learning to learn a policy for controlling the autonomous vehicles. This policy influences human drivers to route themselves in such a way that minimizes congestion on the network. To gauge the effectiveness of our learned policies, we establish theoretical results characterizing equilibria on a network of parallel roads and empirically compare the learned policy results with the best possible equilibria. Moreover, we show that in the absence of these policies, high demands and network perturbations would result in heavy congestion, whereas using the policy greatly decreases travel times by minimizing congestion. To the best of our knowledge, this is the first work that employs deep reinforcement learning to reduce congestion by influencing humans' routing decisions in mixed-autonomy traffic.
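
As a hedged toy of this setting (not the paper's network model, latency function, or learning algorithm), the sketch below sets up two parallel roads with a load-dependent latency, lets human drivers re-route myopically toward the faster road, and exposes the average latency that a routing policy for the autonomous fraction would try to minimize:

```python
# Hedged toy of mixed-autonomy routing on two parallel roads (all modelling
# choices here are assumptions, not the paper's).
import numpy as np

def latency(load, free_flow=1.0, capacity=1.0):
    # BPR-style latency: travel time grows with the road's total load.
    return free_flow * (1.0 + 0.15 * (load / capacity) ** 4)

def step(human_split, auto_split, human_flow=0.6, auto_flow=0.4, lr=0.2):
    # Fractions of each population on road 0; the rest takes road 1.
    loads = np.array([human_split, 1 - human_split]) * human_flow \
          + np.array([auto_split, 1 - auto_split]) * auto_flow
    lats = latency(loads)
    # Humans shift toward the currently faster road (selfish, myopic response).
    human_split += lr * float(lats[1] > lats[0]) - lr * float(lats[0] > lats[1])
    human_split = float(np.clip(human_split, 0.0, 1.0))
    avg_latency = float(np.dot(loads, lats) / loads.sum())
    return human_split, avg_latency   # avg_latency is the cost an RL policy would minimize

# A learned policy would choose auto_split at each step to keep avg_latency low.
```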

  

Two-timescale Mechanism-and-Data-Driven Control for Aggressive Driving of Autonomous Cars

Sep 11, 2021
Yiwen Lu, Bo Yang, Yilin Mo

The control of autonomous cars under aggressive driving is challenging due to the presence of significant tyre slip. Data-driven and mechanism-based methods for the modeling and control of autonomous cars under aggressive driving conditions are limited in data efficiency and adaptability, respectively. This paper is an attempt toward the fusion of the two classes of methods. By means of a modular design consisting of mechanism-based and data-driven components, and by exploiting the two-timescale phenomenon in the car model, our approach effectively improves over previous methods in terms of data efficiency, transferability, and final performance. The hybrid mechanism-and-data-driven approach is verified on TORCS (The Open Racing Car Simulator). Experimental results demonstrate the benefit of our approach over purely mechanism-based and purely data-driven methods.
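
The paper's two-timescale structure is not spelled out in the abstract; the sketch below only illustrates the general mechanism-plus-data pattern it builds on, pairing a toy kinematic bicycle model with a learned residual correction. The model, features, and wheelbase value are assumptions for illustration, not the authors' design:

```python
# Hedged sketch of a mechanism-plus-data hybrid predictor: a physics-based
# nominal model gives a coarse next-state prediction and a learned model
# corrects its residual from logged driving data.
import numpy as np
from sklearn.neural_network import MLPRegressor

def nominal_model(state, control, dt=0.05):
    # Toy kinematic bicycle stand-in; it ignores tyre slip, which dominates in
    # aggressive driving and is what the data-driven part must capture.
    x, y, yaw, v = state
    accel, steer = control
    L = 2.5  # wheelbase in metres (assumed)
    return np.array([x + v * np.cos(yaw) * dt,
                     y + v * np.sin(yaw) * dt,
                     yaw + v / L * np.tan(steer) * dt,
                     v + accel * dt])

def fit_residual(states, controls, next_states):
    # states: (N, 4), controls: (N, 2), next_states: (N, 4) arrays of logged data.
    preds = np.array([nominal_model(s, u) for s, u in zip(states, controls)])
    residual = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
    return residual.fit(np.hstack([states, controls]), next_states - preds)

def hybrid_predict(residual, state, control):
    return nominal_model(state, control) + residual.predict(
        np.hstack([state, control])[None])[0]
```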

  

Autonomous Driving without a Burden: View from Outside with Elevated LiDAR

Oct 31, 2018
Nalin Jayaweera, Nandana Rajatheva, Matti Latva-aho

The current autonomous driving architecture places a heavy signal-processing burden on the graphics processing units (GPUs) in the car. This directly translates into battery drain and lower energy efficiency, crucial factors in electric vehicles. The burden stems from the high bit rate of the captured video and other sensing inputs, mainly from the Light Detection and Ranging (LiDAR) sensor on top of the car, which is an essential feature of autonomous vehicles. LiDAR is needed to obtain a high-precision map for the vehicle AI to make relevant decisions. However, this still gives quite a restricted view from the car, and the same holds for cars without a LiDAR, such as Tesla's. Existing LiDARs and cameras have limited horizontal and vertical fields of vision. In all cases it can be argued that precision is lower, given the smaller map generated. This also results in the accumulation of a large amount of data, on the order of several TBs a day, whose storage becomes challenging. If we are to reduce the effort for the processing units inside the car, we need to uplink the data to the edge or an appropriately placed cloud. However, the required data rates, on the order of several Gbps, are difficult to meet even with the advent of 5G. Therefore, we propose a coordinated set of LiDARs placed outside the vehicles at an elevation, providing an integrated view with a much larger field of vision (FoV) to a centralized decision-making body, which then sends the required control actions to the vehicles with a lower bit rate in the downlink and with the required latency. Our calculations, based on industry-standard equipment from several manufacturers, show that this is not just a concept but a feasible system that can be implemented. The proposed system can play a supportive role alongside the existing autonomous vehicle architecture and is easily applicable in an urban area.
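
As a hedged back-of-envelope check of the "several Gbps, several TB per day" scale claimed above, using assumed sensor figures rather than the paper's equipment numbers:

```python
# Rough sanity check with assumed figures (not the paper's equipment data).
camera_bps = 4 * 1920 * 1080 * 3 * 8 * 30   # four uncompressed 1080p cameras @ 30 fps
lidar_bps = 2_000_000 * 6 * 8               # ~2M points/s at ~6 bytes per point
total_gbps = (camera_bps + lidar_bps) / 1e9
tb_per_day = (camera_bps + lidar_bps) * 86_400 / 8 / 1e12
print(f"~{total_gbps:.1f} Gbps raw, ~{tb_per_day:.0f} TB/day before compression")
```

With these assumed inputs the raw stream is roughly 6 Gbps and about 66 TB per day; compression and fewer or lower-resolution sensors bring the daily volume down toward the "several TBs" range quoted above.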

  

Verisimilar Percept Sequences Tests for Autonomous Driving Intelligent Agent Assessment

May 07, 2018
Thomio Watanabe, Denis Wolf

Autonomous car technology promises to replace human drivers with safer driving systems. But although autonomous cars can become safer than human drivers, this is a long process that will be refined over time. Before these vehicles are deployed on urban roads, a minimum safety level must be assured. Since the technology is still under development, there is no standard methodology to evaluate such systems. It is important to completely understand the technology being developed in order to design efficient means of evaluating it. In this paper we take the reliability of safety-critical systems as a safety measure. We model an autonomous road vehicle as an intelligent agent and approach its evaluation from an artificial intelligence perspective. Our focus is the evaluation of perception and decision-making systems, and we also propose a systematic method to evaluate their integration in the vehicle. We identify critical aspects of the data dependency of state-of-the-art artificial intelligence models and propose procedures to reproduce them.

  