"autonomous cars": models, code, and papers

Prediction Surface Uncertainty Quantification in Object Detection Models for Autonomous Driving

Jul 11, 2021
Ferhat Ozgur Catak, Tao Yue, Shaukat Ali

Object detection in autonomous cars is commonly based on camera images and Lidar inputs, which are often used to train prediction models such as deep artificial neural networks for decision making related to object recognition, adjusting speed, etc. A mistake in such decision making can be damaging; thus, it is vital to measure the reliability of decisions made by such prediction models via uncertainty measurement. Uncertainty in deep learning models is often measured for classification problems. However, deep learning models in autonomous driving are often multi-output regression models. Hence, we propose a novel method called PURE (Prediction sURface uncErtainty) for measuring the prediction uncertainty of such regression models. We formulate the object recognition problem as a regression model with more than one output for finding object locations in a 2-dimensional camera view. For evaluation, we modified three widely applied object recognition models (i.e., YOLO, SSD300, and SSD512) and used the KITTI, Stanford Cars, Berkeley DeepDrive, and NEXET datasets. Results showed a statistically significant negative correlation between prediction surface uncertainty and prediction accuracy, suggesting that uncertainty significantly impacts the decisions made in autonomous driving.

* Accepted in AITest 2021, The Third IEEE International Conference On Artificial Intelligence Testing 
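
The abstract does not spell out how the prediction-surface uncertainty is computed, but a common way to obtain an uncertainty estimate for a multi-output regression head is Monte Carlo dropout: run the detector several times with dropout kept active and measure the spread of the predicted box coordinates. A minimal PyTorch-style sketch under that assumption (the model interface and the number of samples are placeholders, not the authors' implementation):

    import torch

    def mc_dropout_box_uncertainty(model, image, n_samples=30):
        """Estimate the uncertainty of a multi-output box regressor by keeping
        dropout active at inference time (MC dropout) and measuring the spread
        of the sampled predictions."""
        model.train()  # keeps dropout stochastic; assumes batch-norm stats are handled elsewhere
        with torch.no_grad():
            # each forward pass is assumed to return a (4,) tensor: [x, y, w, h]
            samples = torch.stack([model(image) for _ in range(n_samples)])
        mean_box = samples.mean(dim=0)        # averaged box prediction
        per_coord_var = samples.var(dim=0)    # spread of each output coordinate
        return mean_box, per_coord_var.sum()  # scalar summary of the spread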
  

Real-time 3D Traffic Cone Detection for Autonomous Driving

Feb 06, 2019
Ankit Dhall, Dengxin Dai, Luc Van Gool

Considerable progress has been made in the semantic scene understanding of road scenes with monocular cameras. It is, however, mainly related to certain classes such as cars and pedestrians. This work investigates traffic cones, an object class crucial for traffic control in the context of autonomous vehicles. 3D object detection using images from a monocular camera is intrinsically an ill-posed problem. In this work, we leverage the unique structure of traffic cones and propose a pipelined approach to the problem. Specifically, we first detect cones in images with a tailored 2D object detector; then, the spatial arrangement of keypoints on a traffic cone is detected by our deep structural regression network, where the fact that the cross-ratio is projection invariant is leveraged for network regularization; finally, the 3D position of the cones is recovered by the classical Perspective-n-Point algorithm. Extensive experiments show that our approach can accurately detect traffic cones and estimate their position in the 3D world in real time. The proposed method is also deployed on a real-time, critical system. It runs efficiently on the low-power Jetson TX2, providing accurate 3D position estimates and allowing a race car to map and drive autonomously on an unseen track indicated by traffic cones. With the help of robust and accurate perception, our race car won both Formula Student Competitions held in Italy and Germany in 2018, cruising at a top speed of 54 km/h. Visualization of the complete pipeline, mapping and navigation can be found on our project page.

* 8 pages, 11 figures. arXiv admin note: substantial text overlap with arXiv:1809.10548 
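
The final stage of the pipeline, recovering the 3D position of a cone from its detected keypoints, uses the classical Perspective-n-Point algorithm, which is available off the shelf in OpenCV. A minimal sketch under the assumption of a calibrated camera and a known cone keypoint model (the array shapes below are placeholders, not the authors' cone geometry):

    import numpy as np
    import cv2

    def cone_position_from_keypoints(keypoints_2d, cone_model_3d, K, dist_coeffs=None):
        """Recover the cone pose in the camera frame with the classical PnP algorithm.

        keypoints_2d  : (N, 2) pixel coordinates predicted by the keypoint network
        cone_model_3d : (N, 3) corresponding points on the known cone geometry, in metres
        K             : (3, 3) camera intrinsic matrix
        """
        if dist_coeffs is None:
            dist_coeffs = np.zeros(5)  # assume an undistorted image
        ok, rvec, tvec = cv2.solvePnP(
            cone_model_3d.astype(np.float64),
            keypoints_2d.astype(np.float64),
            K, dist_coeffs,
            flags=cv2.SOLVEPNP_ITERATIVE,
        )
        if not ok:
            raise RuntimeError("PnP did not converge")
        return rvec, tvec  # tvec gives the cone position in the camera frame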
  

Application of Neuroevolution in Autonomous Cars

Jun 26, 2020
Sainath G, Vignesh S, Siddarth S, G Suganya

With the onset of electric vehicles and their growing popularity, autonomous cars are the future of the travel/driving experience. The barrier to reaching level 5 autonomy is the difficulty of collecting data that captures both good driving habits and the lack thereof. The problem with current implementations of self-driving cars is the need for massively large datasets and the need to evaluate the driving contained in those datasets. We propose a system that requires no data for its training. An evolutionary model has the capability to optimize itself towards a fitness function. We implemented neuroevolution, a form of genetic algorithm, to train/evolve self-driving cars in a simulated virtual environment built with Unreal Engine 4, which utilizes Nvidia's PhysX physics engine to portray real-world vehicle dynamics accurately. We observed the serendipitous nature of evolution and exploited it to reach our optimal solution. We also demonstrate the ease of generalizing attributes brought about by genetic algorithms, and how they may be used as a boilerplate upon which other machine learning techniques can be applied to improve the overall driving experience.

* 13 pages, 9 figures, 1 table 
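
As a rough illustration of the training-without-data idea described above, the core neuroevolution loop can be written in a few lines; in the paper the fitness signal comes from driving the car in the Unreal Engine 4 simulation, whereas here it is any callable (population size, mutation rate, and the toy fitness below are assumptions, not the authors' settings):

    import numpy as np

    def mutate(weights, rate=0.1, scale=0.2):
        """Gaussian mutation applied to a random subset of the weights."""
        mask = np.random.rand(*weights.shape) < rate
        return weights + mask * np.random.randn(*weights.shape) * scale

    def neuroevolve(fitness_fn, n_weights, population=50, generations=100, elite=5):
        """Simple elitist neuroevolution over a flat network-weight vector."""
        pop = [np.random.randn(n_weights) for _ in range(population)]
        for _ in range(generations):
            pop.sort(key=fitness_fn, reverse=True)       # best drivers first
            parents = pop[:elite]                        # elites survive unchanged
            children = [mutate(parents[np.random.randint(elite)])
                        for _ in range(population - elite)]
            pop = parents + children
        return max(pop, key=fitness_fn)

    # toy usage: evolve a weight vector towards a fixed target
    target = np.linspace(-1.0, 1.0, 10)
    best = neuroevolve(lambda w: -np.sum((w - target) ** 2), n_weights=10)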
  

Super-Human Performance in Gran Turismo Sport Using Deep Reinforcement Learning

Aug 18, 2020
Florian Fuchs, Yunlong Song, Elia Kaufmann, Davide Scaramuzza, Peter Duerr

Autonomous car racing raises fundamental robotics challenges such as planning minimum-time trajectories under uncertain dynamics and controlling the car at its friction limits. In this project, we consider the task of autonomous car racing in the top-selling car racing game Gran Turismo Sport. Gran Turismo Sport is known for its detailed physics simulation of various cars and tracks. Our approach makes use of maximum-entropy deep reinforcement learning and a new reward design to train a sensorimotor policy to complete a given race track as fast as possible. We evaluate our approach in three different time trial settings with different cars and tracks. Our results show that the obtained controllers not only beat the built-in non-player character of Gran Turismo Sport, but also outperform the fastest known times in a dataset of personal best lap times of over 50,000 human drivers.
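
The abstract does not detail the new reward design, but progress-based rewards are the usual starting point for this kind of racing task: reward the centerline distance gained each step and penalise wall contact. A hedged sketch of such a shaping term (the state fields and the penalty weight are assumptions, not the paper's exact reward):

    def progress_reward(prev_progress_m, curr_progress_m, wall_contact, w_wall=1.0):
        """Per-step reward: centerline progress gained minus a wall-contact penalty.

        prev_progress_m, curr_progress_m : distance travelled along the track centerline (m)
        wall_contact                     : True if the car touched a wall during the step
        """
        reward = curr_progress_m - prev_progress_m
        if wall_contact:
            reward -= w_wall
        return reward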

  

Optimizing Passenger Comfort in Cost Functions for Trajectory Planning

Nov 16, 2018
Jean Elsner

Current advances in the development of autonomous cars suggest that driverless cars may see wide-scale deployment in the near future. Research by both industry and academia is driven by the potential benefits of this new technology, including reductions in fatalities, improvements in traffic and fuel efficiency, and greater mobility for people who will not or cannot drive cars themselves. Besides safety, a deciding factor for the adoption of self-driving cars will be the comfort of the passengers. This report looks at cost functions currently used in motion planning methods for autonomous on-road driving, and specifically at how the human perception of trajectory comfort can be formulated within cost functions.

* 7 pages, 9 figures 
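
A common way to encode passenger comfort inside a trajectory-planning cost function is to penalise acceleration and jerk along each candidate trajectory, since those are the quantities passengers feel most directly. A minimal sketch of such a term (the weights are placeholders, not values from the report):

    import numpy as np

    def comfort_cost(positions, dt, w_acc=1.0, w_jerk=1.0):
        """Comfort term for a planned trajectory: integrated squared acceleration
        plus integrated squared jerk.

        positions : (N, 2) array of planned (x, y) points sampled every dt seconds
        """
        vel = np.diff(positions, axis=0) / dt
        acc = np.diff(vel, axis=0) / dt
        jerk = np.diff(acc, axis=0) / dt
        return w_acc * np.sum(acc ** 2) * dt + w_jerk * np.sum(jerk ** 2) * dt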
  

Fast and Real-time End to End Control in Autonomous Racing Cars Through Representation Learning

Nov 30, 2021
Praveen Venkatesh, Rwik Rana, Harish PM

The challenges of autonomous racing are distinct from those of regular autonomous driving: they require faster end-to-end algorithms and a longer planning horizon when determining the optimal current actions, keeping upcoming maneuvers and situations in mind. In this paper, we propose an end-to-end method for autonomous racing that takes video from an onboard camera as input and determines the final steering and throttle control actions. We construct the method in three stages: (1) learning a low-dimensional representation of the scene, (2) pre-generating the optimal trajectory for the given scene, and (3) tracking the predicted trajectory using a classical control method. In learning the low-dimensional representation of the scene, we use intermediate representations together with a novel unsupervised trajectory planner to generate expert trajectories, which are then used to directly predict race lines from a given front-facing input image. The proposed algorithm thus combines the best of two worlds: the robustness of learning-based approaches to perception and the accuracy of optimization-based approaches to trajectory generation, within an end-to-end learning-based framework. We deploy and demonstrate our framework on CARLA, a photorealistic simulator for testing self-driving cars in realistic environments.
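
A rough skeleton of the three-stage split described above; every class and callable here is a hypothetical placeholder, not the authors' code:

    import numpy as np

    class RacingPipeline:
        """(1) compress the camera frame to a latent code, (2) predict a race line
        for that scene, (3) track the race line with a classical controller."""

        def __init__(self, encoder, planner, controller):
            self.encoder = encoder        # e.g. a trained CNN / autoencoder
            self.planner = planner        # latent code -> short race line
            self.controller = controller  # e.g. pure pursuit or PID tracker

        def act(self, frame):
            latent = self.encoder(frame)
            race_line = self.planner(latent)
            return self.controller(race_line)  # (steering, throttle)

    # toy usage with stand-in callables
    pipe = RacingPipeline(
        encoder=lambda frame: frame.mean(),
        planner=lambda z: [(0.0, 0.0), (0.0, 5.0)],
        controller=lambda line: (0.0, 0.5),
    )
    steer, throttle = pipe.act(np.zeros((120, 160, 3)))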

  

Key Ingredients of Self-Driving Cars

Jun 07, 2019
Rui Fan, Jianhao Jiao, Haoyang Ye, Yang Yu, Ioannis Pitas, Ming Liu

Over the past decade, many research articles have been published in the area of autonomous driving. However, most of them focus only on a specific technological area, such as visual environment perception, vehicle control, etc. Furthermore, due to the fast pace of advances in self-driving car technology, such articles quickly become obsolete. In this paper, we give a brief but comprehensive overview of the key ingredients of autonomous cars (ACs), including driving automation levels, AC sensors, AC software, open source datasets, industry leaders, AC applications, and existing challenges.

* 5 pages, 2 figures, EUSIPCO 2019 Satellite Workshop: Signal Processing, Computer Vision and Deep Learning for Autonomous Systems 
  

On a Formal Model of Safe and Scalable Self-driving Cars

Oct 27, 2018
Shai Shalev-Shwartz, Shaked Shammah, Amnon Shashua

In recent years, car makers and tech companies have been racing towards self-driving cars. It seems that the main parameter in this race is who will have the first car on the road. The goal of this paper is to add two additional crucial parameters to the equation. The first is standardization of safety assurance: what are the minimal requirements that every self-driving car must satisfy, and how can we verify these requirements? The second parameter is scalability: engineering solutions that lead to unleashed costs will not scale to millions of cars, which will push interest in this field into a niche academic corner and drive the entire field into a "winter of autonomous driving". In the first part of the paper we propose a white-box, interpretable, mathematical model for safety assurance, which we call Responsibility-Sensitive Safety (RSS). In the second part we describe a design of a system that adheres to our safety assurance requirements and is scalable to millions of cars.
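
The RSS model includes a closed-form minimum safe longitudinal distance between a rear car driving at v_r behind a front car driving at v_f, parameterised by a response time and bounds on acceleration and braking. A sketch of that distance in its commonly cited form (variable names are mine; consult the paper for the exact definitions):

    def rss_safe_longitudinal_distance(v_r, v_f, rho, a_max_accel, a_min_brake, a_max_brake):
        """Minimum distance so the rear car can always avoid hitting the front car,
        assuming the rear car may accelerate at a_max_accel during its response time
        rho and then brakes at a_min_brake, while the front car brakes at a_max_brake.
        Speeds in m/s, accelerations in m/s^2, rho in seconds."""
        d = (v_r * rho
             + 0.5 * a_max_accel * rho ** 2
             + (v_r + rho * a_max_accel) ** 2 / (2 * a_min_brake)
             - v_f ** 2 / (2 * a_max_brake))
        return max(d, 0.0)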

  

Motion Prediction on Self-driving Cars: A Review

Nov 06, 2020
Shahrokh Paravarzar, Belqes Mohammad

The autonomous vehicle motion prediction literature is reviewed. Motion prediction is the most challenging task in autonomous vehicles and self-driving cars, and these challenges are discussed. The state of the art is then reviewed based on the most recent literature, and the current open challenges are examined. The state of the art consists of classical and physical methods, deep learning networks, and reinforcement learning. The pros and cons of these methods and the gaps in the research are presented in this review. Finally, the literature surrounding object tracking and motion is presented. Based on this review, deep reinforcement learning emerges as the best candidate for tackling motion prediction in self-driving cars.
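
As a concrete instance of the "classical and physical methods" category covered by the review, the simplest physics-based predictor rolls the agent's current velocity forward in time; a baseline sketch (not a specific method from the review):

    import numpy as np

    def constant_velocity_prediction(position, velocity, horizon_s=3.0, dt=0.1):
        """Physics-based baseline: propagate the current state under a
        constant-velocity assumption.

        position, velocity : (2,) arrays in metres and metres per second
        returns            : (steps, 2) array of predicted future positions
        """
        steps = int(round(horizon_s / dt))
        times = np.arange(1, steps + 1) * dt
        return position + np.outer(times, velocity)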

  

Giving Commands to a Self-Driving Car: How to Deal with Uncertain Situations?

Jun 08, 2021
Thierry Deruyttere, Victor Milewski, Marie-Francine Moens

Current technology for autonomous cars primarily focuses on getting the passenger from point A to B. Nevertheless, it has been shown that passengers are afraid of taking a ride in self-driving cars. One way to alleviate this problem is by allowing the passenger to give natural language commands to the car. However, the car can misunderstand the issued command or the visual surroundings, which could lead to uncertain situations. It is desirable that the self-driving car detects these situations and interacts with the passenger to resolve them. This paper proposes a model that detects uncertain situations when a command is given and finds the visual objects causing them. Optionally, a question generated by the system describing the uncertain objects is included. We argue that if the car could explain the objects in a human-like way, passengers could gain more confidence in the car's abilities. Thus, we investigate (1) how to detect uncertain situations and their underlying causes, and (2) how to generate clarifying questions for the passenger. When evaluated on the Talk2Car dataset, we show that the proposed model, \acrfull{pipeline}, improves by \gls{m:ambiguous-absolute-increase} in terms of $IoU_{.5}$ compared to not using \gls{pipeline}. Furthermore, we designed a referring expression generator (REG), \acrfull{reg_model}, tailored to a self-driving car setting, which yields relative improvements of \gls{m:meteor-relative} METEOR and \gls{m:rouge-relative} ROUGE-L compared with state-of-the-art REG models, and is three times faster.

* Accepted in Engineering Applications of Artificial Intelligence (EAAI) journal 
  