
"autonomous cars": models, code, and papers

A Study of the Minimum Safe Distance between Human Driven and Driverless Cars Using Safe Distance Model

Jun 12, 2020
Tesfaye Hailemariam Yimer, Chao Wen, Xiaozhuo Yu, Chaozhe Jiang

When driving, it is vital to maintain the right following distance between vehicles to avoid rear-end collisions. The minimum safe distance depends on many factors; in this study, the safe distance between a human-driven vehicle and a fully autonomous vehicle that stops suddenly under automatic emergency braking was studied based on the human driver's ability to react in an accident, the vehicles' braking system performance, and the vehicles' speeds. A safe-distance car-following model was proposed to describe the safe distance between vehicles on a single-lane dry road under conditions where both vehicles move at a constant speed and the lead autonomous vehicle suddenly stops by automatic emergency braking at an imminent incident. The proposed model was then tested in MATLAB simulation. The results confirmed the effectiveness of the model and indicated the influence of driving speed and inter-vehicle distance on rear-end collisions, compared with the two- and three-second rules of safe following distance. The three-second rule is safe to apply at all speed limits, whereas the two-second rule can be used at speed limits up to 45 km/h. According to the simulation results, a noticeable increase in rear-end collisions was observed when a car follows a driverless vehicle using the two-second rule above 45 km/h.

* 15 pages, 5 figures 
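The braking-based safe-distance reasoning described in the abstract can be sketched as follows; the function names and the parameter values (reaction time, deceleration rates) are illustrative assumptions, not the paper's actual model:

```python
def min_safe_distance(v_follow, v_lead, t_react=1.5, a_follow=6.0, a_lead=8.0):
    """Minimum gap (m) so the follower can stop behind the lead vehicle's
    stopping point. Speeds in m/s, decelerations in m/s^2 (assumed values)."""
    d_react = v_follow * t_react                     # distance covered during driver reaction
    d_brake_follow = v_follow ** 2 / (2 * a_follow)  # follower's braking distance
    d_brake_lead = v_lead ** 2 / (2 * a_lead)        # lead AEB stop, assumed harder braking
    return max(d_react + d_brake_follow - d_brake_lead, 0.0)

def time_rule_gap(v_follow, seconds):
    """Gap implied by the two- or three-second following rule."""
    return v_follow * seconds

v = 45 / 3.6  # 45 km/h in m/s
print(min_safe_distance(v, v))  # required gap with the assumed parameters
print(time_rule_gap(v, 2))      # two-second-rule gap at the same speed
```

With these assumed parameters, the two-second gap (25 m) exceeds the required gap at 45 km/h, which matches the abstract's finding that the two-second rule holds up to roughly that speed.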
  

Towards the Verification of Safety-critical Autonomous Systems in Dynamic Environments

Dec 15, 2016
Adina Aniculaesei, Daniel Arnsberger, Falk Howar, Andreas Rausch

There is an increasing necessity to deploy autonomous systems in highly heterogeneous, dynamic environments, e.g., service robots in hospitals or autonomous cars on highways. Due to the uncertainty in these environments, verification results obtained with respect to the system and environment models at design time might not be transferable to the system behavior at run time. For autonomous systems operating in dynamic environments, safety of motion and collision avoidance are critical requirements. With regard to these requirements, Macek et al. [6] define the passive safety property, which requires that no collision can occur while the autonomous system is moving. To verify this property, we adopt a two-phase process which combines static verification methods, used at design time, with dynamic ones, used at run time. In the design phase, we use UPPAAL to formalize the autonomous system and its environment as timed automata, to express the safety property as a TCTL formula, and to verify the correctness of these models with respect to this property. For the runtime phase, we build a monitor to check whether the assumptions made at design time also hold at run time. If the current system observations of the environment do not correspond to the initial system assumptions, the monitor sends feedback to the system and the system enters a passive safe state.

* EPTCS 232, 2016, pp. 79-90 
* In Proceedings V2CPS-16, arXiv:1612.04023 
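The runtime phase — checking each observation against the design-time assumptions and falling back to a passive safe state on violation — can be sketched as below. The class names and assumption bounds are hypothetical, and the design-time UPPAAL/TCTL verification is not shown:

```python
from dataclasses import dataclass

@dataclass
class Assumptions:
    """Design-time environment assumptions (illustrative bounds)."""
    max_obstacle_speed: float = 2.0  # m/s, e.g. walking pedestrians
    min_clearance: float = 0.5       # m

class RuntimeMonitor:
    """Checks each observation against the design-time assumptions; on a
    violation, drives the system into a passive safe state (no motion)."""
    def __init__(self, assumptions):
        self.assumptions = assumptions
        self.safe_state = False

    def observe(self, obstacle_speed, clearance):
        if (obstacle_speed > self.assumptions.max_obstacle_speed
                or clearance < self.assumptions.min_clearance):
            self.safe_state = True  # assumptions violated: stop moving
        return self.safe_state

monitor = RuntimeMonitor(Assumptions())
monitor.observe(obstacle_speed=1.0, clearance=2.0)  # within assumptions
monitor.observe(obstacle_speed=3.5, clearance=2.0)  # violation -> safe state
```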
  

An Intelligent Safety System for Human-Centered Semi-Autonomous Vehicles

Feb 20, 2019
Hadi Abdi Khojasteh, Alireza Abbas Alipour, Ebrahim Ansari, Parvin Razzaghi

Nowadays, automobile manufacturers make efforts to develop ways to make cars fully safe. Monitoring the driver's actions with computer vision techniques to detect driving mistakes in real time, and then planning autonomous driving maneuvers to avoid vehicle collisions, is one of the most important issues investigated in machine vision and Intelligent Transportation Systems (ITS). The main goal of this study is to prevent accidents caused by fatigue, drowsiness, and driver distraction. To avoid these incidents, this paper proposes an integrated safety system that continuously monitors the driver's attention and the vehicle's surroundings, and finally decides whether the actual steering control status is safe or not. For this purpose, we equipped an ordinary car called FARAZ with a vision system consisting of four mounted cameras, along with a universal car tool for communicating with the surrounding factory-installed sensors and other car systems and for sending commands to actuators. The proposed system leverages a scene-understanding pipeline using deep convolutional encoder-decoder networks and a driver state detection pipeline. We have been identifying and assessing domestic capabilities for developing these technologies for ordinary vehicles, in order to manufacture smart cars and also to provide an intelligent system that increases safety and assists the driver in various conditions and situations.

* 15 pages and 5 figures, Submitted to the international conference on Contemporary issues in Data Science (CiDaS 2019), Learn more about this project at https://iasbs.ac.ir/~ansari/faraz 
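The final safe/unsafe decision over the steering status could, in a much-simplified form, fuse the driver-state and scene-understanding outputs as below; the signal names and threshold are invented for illustration and do not come from the paper:

```python
def steering_is_safe(driver_attentive, drowsiness_score, hazard_ahead,
                     drowsiness_threshold=0.5):
    """Fuse driver-monitoring and scene-understanding outputs into a
    binary decision on whether the current steering control is safe."""
    if not driver_attentive:
        return False  # distraction flagged by the driver-state pipeline
    if drowsiness_score >= drowsiness_threshold:
        return False  # fatigue/drowsiness flagged
    if hazard_ahead:
        return False  # hazard flagged by the scene-understanding pipeline
    return True
```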
  

CODA: A Real-World Road Corner Case Dataset for Object Detection in Autonomous Driving

Mar 15, 2022
Kaican Li, Kai Chen, Haoyu Wang, Lanqing Hong, Chaoqiang Ye, Jianhua Han, Yukuai Chen, Wei Zhang, Chunjing Xu, Dit-Yan Yeung, Xiaodan Liang, Zhenguo Li, Hang Xu

Contemporary deep-learning object detection methods for autonomous driving usually assume predefined categories of common traffic participants, such as pedestrians and cars. Most existing detectors are unable to detect uncommon objects and corner cases (e.g., a dog crossing a street), which may lead to severe accidents in some situations, making the timeline for the real-world application of reliable autonomous driving uncertain. One main reason that impedes the development of truly reliable self-driving systems is the lack of public datasets for evaluating the performance of object detectors on corner cases. Hence, we introduce a challenging dataset named CODA that exposes this critical problem of vision-based detectors. The dataset consists of 1500 carefully selected real-world driving scenes, each containing four object-level corner cases (on average), spanning more than 30 object categories. On CODA, the performance of standard object detectors trained on large-scale autonomous driving datasets drops significantly, to no more than 12.8% in mAR. Moreover, we experiment with the state-of-the-art open-world object detector and find that it also fails to reliably identify the novel objects in CODA, suggesting that a robust perception system for autonomous driving is probably still far from reach. We expect our CODA dataset to facilitate further research in reliable detection for real-world autonomous driving. Our dataset will be released at https://coda-dataset.github.io.
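As a rough illustration of the reported metric, mean average recall (mAR) averages per-class recall over the corner-case categories. The sketch below simplifies the COCO-style definition (which additionally averages over IoU thresholds); the example counts are made up:

```python
def mean_average_recall(matched_per_class, gt_per_class):
    """Per-class recall (matched detections / ground-truth objects),
    averaged over classes. A simplified stand-in for COCO-style AR."""
    recalls = [m / g if g else 0.0
               for m, g in zip(matched_per_class, gt_per_class)]
    return sum(recalls) / len(recalls)

# e.g. a detector that misses most corner-case objects in three classes:
print(mean_average_recall([2, 1, 0], [20, 10, 5]))  # a low mAR
```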

  

Learning to Drive Small Scale Cars from Scratch

Aug 03, 2020
Ari Viitala, Rinu Boney, Juho Kannala

We consider the problem of learning to drive low-cost small-scale cars using reinforcement learning. It is challenging to handle the long-tailed distribution of real-world events with handcrafted logical rules, and reinforcement learning could be a more scalable solution for dealing with them. We adopt an existing platform called Donkey car for low-cost, repeatable, and reproducible research in autonomous driving. We consider the task of learning to drive around a track, given only monocular image observations from an on-board camera. We demonstrate that the soft actor-critic algorithm, combined with state representation learning using a variational autoencoder, can learn to drive around randomly generated tracks in the Donkey car simulator and around a real-world track using the Donkey car platform. Our agent can learn from scratch, using sparse and noisy rewards, within just 10 minutes of driving experience.
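The control pipeline — compress the camera frame to a compact latent state with a VAE encoder, then act on that latent with the SAC policy — has roughly the shape below. Both networks are replaced here by untrained stand-ins, so this only illustrates the data flow, not the learned behavior:

```python
import numpy as np

rng = np.random.default_rng(0)

def vae_encode(image, latent_dim=32):
    """Stand-in for a trained VAE encoder: image -> compact latent state
    (a random projection here, for illustration only)."""
    flat = image.reshape(-1)
    w = rng.standard_normal((latent_dim, flat.size)) * 0.01
    return w @ flat

def sac_actor(latent):
    """Stand-in for the SAC policy: latent state -> (steering, throttle),
    squashed into [-1, 1] as SAC's tanh-Gaussian actor would produce."""
    return np.tanh(latent[:2])

frame = rng.random((80, 160, 3))  # monocular on-board camera frame
state = vae_encode(frame)         # learned state representation
steering, throttle = sac_actor(state)
```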

  

Review on 3D Lidar Localization for Autonomous Driving Cars

Jun 01, 2020
Mahdi Elhousni, Xinming Huang

LIDAR sensors are bound to become one of the core sensors in achieving full autonomy for self-driving cars. LIDARs are able to produce rich, dense, and precise spatial data, which can tremendously help in localizing and tracking a moving vehicle. In this paper, we review the latest findings in 3D LIDAR localization for autonomous driving cars and analyze the results obtained by each method, in an effort to guide the research community towards the path that seems most promising.

* Accepted by IV2020 
  

Deep Grid Net (DGN): A Deep Learning System for Real-Time Driving Context Understanding

Jan 16, 2019
Liviu Marina, Bogdan Trasnea, Cocias Tiberiu, Andrei Vasilcoi, Florin Moldoveanu, Sorin Grigorescu

Grid maps obtained from fused sensory information are nowadays among the most popular approaches for motion planning for autonomous driving cars. In this paper, we introduce Deep Grid Net (DGN), a deep learning (DL) system designed for understanding the context in which an autonomous car is driving. DGN incorporates a learned driving environment representation based on Occupancy Grids (OG) obtained from raw Lidar data and constructed on top of the Dempster-Shafer (DS) theory. The predicted driving context is further used for switching between different driving strategies implemented within EB robinos, Elektrobit's Autonomous Driving (AD) software platform. Based on genetic algorithms (GAs), we also propose a neuroevolutionary approach for learning the tuning hyperparameters of DGN. The performance of the proposed deep network has been evaluated against similar competing driving context estimation classifiers.

* Int. Conf. on Robotic Computing IRC 2019, Naples, Italy, February 25-27, 2019 
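The Dempster-Shafer footing of the occupancy grid can be illustrated with Dempster's combination rule over the frame {occupied, free}; the evidence masses below are made-up examples, not values from the paper:

```python
def ds_combine(m1, m2):
    """Dempster's rule for belief masses over occupied ('O'), free ('F'),
    and unknown ('U' = O or F). Each mass dict sums to 1."""
    conflict = m1['O'] * m2['F'] + m1['F'] * m2['O']
    k = 1.0 - conflict  # renormalize after discarding conflicting mass
    occ = (m1['O'] * m2['O'] + m1['O'] * m2['U'] + m1['U'] * m2['O']) / k
    free = (m1['F'] * m2['F'] + m1['F'] * m2['U'] + m1['U'] * m2['F']) / k
    return {'O': occ, 'F': free, 'U': 1.0 - occ - free}

cell = {'O': 0.0, 'F': 0.0, 'U': 1.0}  # grid cell starts fully unknown
hit = {'O': 0.6, 'F': 0.1, 'U': 0.3}   # evidence from one Lidar return
cell = ds_combine(cell, hit)           # first scan: mass shifts to 'O'
cell = ds_combine(cell, hit)           # second scan: occupancy belief sharpens
```

Unlike a plain Bayesian occupancy probability, the 'U' mass keeps "no evidence yet" distinct from "conflicting evidence", which is the usual motivation for the DS formulation.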
  

Autonomous Vehicles that Interact with Pedestrians: A Survey of Theory and Practice

May 30, 2018
Amir Rasouli, John K. Tsotsos

One of the major challenges that autonomous cars are facing today is driving in urban environments. To make it a reality, autonomous vehicles require the ability to communicate with other road users and understand their intentions. Such interactions are essential between the vehicles and pedestrians as the most vulnerable road users. Understanding pedestrian behavior, however, is not intuitive and depends on various factors such as demographics of the pedestrians, traffic dynamics, environmental conditions, etc. In this paper, we identify these factors by surveying pedestrian behavior studies, both the classical works on pedestrian-driver interaction and the modern ones that involve autonomous vehicles. To this end, we will discuss various methods of studying pedestrian behavior, and analyze how the factors identified in the literature are interrelated. We will also review the practical applications aimed at solving the interaction problem including design approaches for autonomous vehicles that communicate with pedestrians and visual perception and reasoning algorithms tailored to understanding pedestrian intention. Based on our findings, we will discuss the open problems and propose future research directions.

* This work has been submitted to the IEEE Transactions on Intelligent Transportation Systems 
  

Model-based Decision Making with Imagination for Autonomous Parking

Aug 25, 2021
Ziyue Feng, Yu Chen, Shitao Chen, Nanning Zheng

Autonomous parking technology is a key concept within autonomous driving research. This paper proposes an imaginative autonomous parking algorithm to solve issues concerned with parking. The proposed algorithm consists of three parts: an imaginative model for anticipating results before parking, an improved rapid-exploring random tree (RRT) for planning a feasible trajectory from a given start point to a parking lot, and a path smoothing module for optimizing the efficiency of parking tasks. Our algorithm is based on a real kinematic vehicle model, which makes it more suitable for application on real autonomous cars. Furthermore, due to the introduction of the imagination mechanism, the processing speed of our algorithm is ten times faster than that of traditional methods, permitting real-time planning. In order to evaluate the algorithm's effectiveness, we have compared our algorithm with traditional RRT in three different parking scenarios. Ultimately, results show that our algorithm is more stable than traditional RRT and performs better in terms of efficiency and quality.

* 2018 IEEE Intelligent Vehicles Symposium (IV) (pp. 2216-2223). IEEE 
* Published by IEEE IV 2018 
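For reference, the plain (unimproved) RRT baseline in 2D — grow a tree toward random samples, step from the nearest node, keep collision-free extensions — looks like the sketch below. The kinematic constraints, the paper's improvements, and the path smoothing are omitted, and the obstacle map is invented:

```python
import math, random

def rrt(start, goal, is_free, step=0.5, goal_tol=0.5, max_iters=20000, seed=0):
    """Minimal 2D RRT over a [0, 10] x [0, 10] workspace. Grows a tree from
    `start`; returns a node path reaching within `goal_tol` of `goal`, or
    None. `is_free(p)` reports whether point p is collision-free."""
    rng = random.Random(seed)
    nodes, parent = [start], {0: None}
    for _ in range(max_iters):
        # goal bias: sample the goal itself 10% of the time
        sample = goal if rng.random() < 0.1 else (rng.uniform(0, 10),
                                                  rng.uniform(0, 10))
        i = min(range(len(nodes)), key=lambda j: math.dist(nodes[j], sample))
        near, d = nodes[i], math.dist(nodes[i], sample)
        if d == 0:
            continue
        new = sample if d <= step else (near[0] + step * (sample[0] - near[0]) / d,
                                        near[1] + step * (sample[1] - near[1]) / d)
        if not is_free(new):
            continue  # discard extensions that collide
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:
            path, j = [], len(nodes) - 1
            while j is not None:  # walk parents back to the root
                path.append(nodes[j])
                j = parent[j]
            return path[::-1]
    return None

# invented scenario: a wall at 4 < x < 6 below y = 6, with an opening above
path = rrt((1.0, 1.0), (9.0, 9.0),
           is_free=lambda p: abs(p[0] - 5) > 1 or p[1] > 6)
```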
  