
"autonomous cars": models, code, and papers

Rain Sensing Automatic Car Wiper Using AT89C51 Microcontroller

Jan 02, 2021
Abhishek Das, Vivek Dhuri, Ranjushree Pal

The turn of the century has seen a tremendous rise in technological advances in the field of automobiles. With 5G technology on its way and continued development in the IoT sector, cars will start interacting with each other using V2V communications and become much more autonomous. In this project, we move in the same direction by proposing a model for an automatic car wiper system that operates by sensing rain and snow on the windshield of a car. We develop a prototype of our idea by integrating a servo motor and a raindrop sensor with an AT89C51 microcontroller.
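The wiper control logic described above can be sketched as a simple threshold mapping from the raindrop sensor's ADC reading to a wipe interval. The thresholds and timings below are illustrative assumptions, not values from the paper:

```python
def wiper_interval_ms(adc_value, dry_threshold=200, heavy_threshold=700):
    """Map a raindrop-sensor ADC reading to a delay between wipes (ms).

    Returns None when the windshield is dry (wiper off). Thresholds and
    intervals are illustrative, not taken from the paper.
    """
    if adc_value < dry_threshold:
        return None      # windshield dry: keep the wiper parked
    if adc_value >= heavy_threshold:
        return 500       # heavy rain: wipe quickly
    return 1500          # light rain: wipe slowly
```

On the actual AT89C51, the same decision logic would run in a loop that reads the sensor and pulses the servo accordingly.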


A Review of Tracking, Prediction and Decision Making Methods for Autonomous Driving

Sep 17, 2019
Florin Leon, Marius Gavrilescu

This literature review focuses on three important aspects of an autonomous car system: tracking (assessing the identity of actors such as cars, pedestrians, or obstacles in a sequence of observations), prediction (predicting the future motion of surrounding vehicles in order to navigate through various traffic scenarios), and decision making (analyzing the available actions of the ego car and their consequences for the entire driving context). For tracking and prediction, approaches based on (deep) neural networks and other techniques, especially stochastic ones, are reported. For decision making, deep reinforcement learning algorithms are presented, together with methods used to explore alternative actions, such as Monte Carlo Tree Search.

* 36 pages, 11 figures 
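As a concrete instance of the stochastic tracking techniques the review covers, here is a minimal constant-velocity Kalman filter step (predict then update) for a single tracked position. The noise parameters are illustrative:

```python
import numpy as np

def kf_step(x, P, z, dt=0.1, q=0.01, r=0.5):
    """One predict/update cycle of a 1-D constant-velocity Kalman filter.

    x: state estimate [position, velocity]; P: 2x2 covariance;
    z: scalar position measurement. q and r are illustrative noise levels.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity motion model
    H = np.array([[1.0, 0.0]])             # we observe position only
    Q = q * np.eye(2)
    R = np.array([[r]])
    # Predict: propagate the state and grow the uncertainty
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: blend in the measurement via the Kalman gain
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

Running the step repeatedly on a sequence of detections yields smoothed position and velocity estimates that support the prediction stage.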

A Survey on Simulators for Testing Self-Driving Cars

Jan 13, 2021
Prabhjot Kaur, Samira Taghavi, Zhaofeng Tian, Weisong Shi

Rigorous and comprehensive testing plays a key role in training self-driving cars to handle the variety of situations they are expected to encounter on public roads. Physical testing on public roads is unsafe, costly, and not always reproducible. This is where testing in simulation helps fill the gap; however, simulation testing is only as good as the simulator used and how representative the simulated scenarios are of the real environment. In this paper, we identify the key requirements that a good simulator must meet. Further, we provide a comparison of commonly used simulators. Our analysis shows that the CARLA and LGSVL simulators are the current state of the art for end-to-end testing of self-driving cars, for the reasons detailed in this paper. Finally, we present the challenges that simulation testing continues to face as we march towards building fully autonomous cars.


The Impact of Blocking Cars on Pathloss Within a Platoon: Measurements for 26 GHz Band

Oct 06, 2021
Paweł Kryszkiewicz, Adrian Kliks, Paweł Sroka, Michał Sybis

Platooning is considered one of the prospective implementations of the autonomous driving concept, in which a train of cars moves together following the platoon leader's commands. However, the practical realization of this scheme assumes reliable communications between platoon members. In this paper, the results of a measurement experiment are presented, showing the impact of blocking cars on signal attenuation. The tests were carried out in a high-frequency band, i.e., at 26.555 GHz. We observed that, on the one hand, the attenuation can reach tens of dB for 2 or 3 blocking cars, but in some locations two-ray propagation mitigates the presence of obstructing vehicles.
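To put the attenuation figures in context, the sketch below computes free-space path loss at the measured 26.555 GHz carrier and adds a hypothetical per-blocking-car term. The `per_car_db` value is an illustrative assumption, not a measured result (the paper reports tens of dB for 2 or 3 blocking cars in total):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

def blocked_pathloss_db(distance_m, freq_hz, n_blocking_cars, per_car_db=15.0):
    """Free-space loss plus an illustrative fixed attenuation per blocking car."""
    return fspl_db(distance_m, freq_hz) + n_blocking_cars * per_car_db
```

At 26.555 GHz the free-space loss over a 10 m inter-vehicle gap already approaches 81 dB, which is why millimeter-wave platoon links are so sensitive to additional blockage.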


Deep Learning and Control Algorithms of Direct Perception for Autonomous Driving

Nov 12, 2019
Der-Hau Lee, Kuan-Lin Chen, Kuan-Han Liou, Chang-Lun Liu, Jinn-Liang Liu

Based on the direct perception paradigm of autonomous driving, we investigate and modify the CNNs (convolutional neural networks) AlexNet and GoogLeNet, which map an input image to a few perception indicators (heading angle, distances to preceding cars, and distance to the road centerline) for estimating driving affordances in highway traffic. We also design a controller that uses these indicators together with the short-range sensor information of TORCS (The Open Racing Car Simulator) to drive simulated cars while avoiding collisions. We collect a set of images from a TORCS camera in various driving scenarios, train the CNNs on this dataset, test them in unseen traffic, and find that they outperform earlier algorithms and controllers in terms of training efficiency and driving stability. Source code and data are available on our website.

* 6 pages, 4 figures 
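The affordance-to-control step in the direct perception paradigm can be illustrated with a minimal proportional controller that maps two of the perception indicators (heading angle and distance to the road centerline) to a bounded steering command. The gains are illustrative assumptions, not the paper's controller:

```python
def steering_command(heading_angle, dist_to_center, k_angle=1.0, k_dist=0.5):
    """Map two affordance indicators to a steering value in [-1, 1].

    heading_angle: angle between car heading and road direction (rad);
    dist_to_center: signed lateral offset from the road centerline.
    Gains k_angle and k_dist are illustrative, not from the paper.
    """
    raw = -k_angle * heading_angle - k_dist * dist_to_center
    return max(-1.0, min(1.0, raw))  # clamp to the simulator's steering range
```

A car that is centered and aligned gets zero steering; drifting right of center or yawing right produces a corrective left command, and vice versa.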

Interactive Decision Making for Autonomous Vehicles in Dense Traffic

Sep 27, 2019
David Isele

Dense urban traffic environments can produce situations where accurate prediction and dynamic models are insufficient for successful autonomous vehicle motion planning. We investigate how an autonomous agent can safely negotiate with other traffic participants, enabling the agent to handle potential deadlocks. Specifically, we consider merges where the gap between cars is smaller than the size of the ego vehicle. We propose a game-theoretic framework capable of generating and responding to interactive behaviors. Our main contribution is to show how game-tree decision making can be executed by an autonomous vehicle, including the approximations and reasoning that make the tree search computationally tractable. Additionally, to test our model, we develop a stochastic rule-based traffic agent capable of generating interactive behaviors, which can be used as a benchmark for simulating traffic participants in a crowded merge setting.

* ITSC 2019 
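The game-theoretic negotiation idea can be sketched as a one-step payoff table over {yield, assert} actions for the ego car and the other driver, with the ego choosing the action that maximizes expected utility. The payoffs and probability model are illustrative assumptions, not the paper's framework:

```python
# Ego utility for each (ego_action, other_action) pair in a tight merge.
# Values are illustrative: conflict is heavily penalized, deadlock mildly.
PAYOFF = {
    ("assert", "yield"): 1.0,     # ego merges, other gives way
    ("assert", "assert"): -10.0,  # both push in: near-collision penalty
    ("yield", "yield"): -1.0,     # both wait: deadlock
    ("yield", "assert"): 0.0,     # ego waits, other passes
}

def best_ego_action(p_other_yields):
    """Pick the ego action with highest expected utility, given an
    estimated probability that the other driver yields."""
    def expected(ego):
        return (p_other_yields * PAYOFF[(ego, "yield")]
                + (1 - p_other_yields) * PAYOFF[(ego, "assert")])
    return max(("yield", "assert"), key=expected)
```

A full game tree extends this over several rounds of interaction; pruning and approximating that tree is what makes the search tractable on a real vehicle.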

Federated Transfer Reinforcement Learning for Autonomous Driving

Oct 14, 2019
Xinle Liang, Yang Liu, Tianjian Chen, Ming Liu, Qiang Yang

Reinforcement learning (RL) is widely used in autonomous driving tasks, and training RL models typically involves a multi-step process: pre-training RL models on simulators, uploading the pre-trained model to real-life robots, and fine-tuning the weight parameters on robot vehicles. This sequential process is extremely time-consuming and, more importantly, knowledge from the fine-tuned model stays local and cannot be re-used or leveraged collaboratively. To tackle this problem, we present an online federated RL transfer process for real-time knowledge extraction in which all participant agents take actions informed by the knowledge learned by others, even when they are acting in very different environments. To validate the effectiveness of the proposed approach, we constructed a real-life collision avoidance system with the Microsoft AirSim simulator and NVIDIA Jetson TX2 car agents, which cooperatively learn from scratch to avoid collisions in an indoor environment with obstacle objects. We demonstrate that with the proposed framework, the simulator car agents can transfer knowledge to the RC cars in real time, yielding a 27% increase in the average distance to obstacles and a 42% decrease in collision counts.
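The collaborative knowledge-sharing step can be illustrated with a FedAvg-style aggregation that averages each named weight tensor across agents. The paper's online transfer process is more involved, so treat this as a sketch of the aggregation idea only:

```python
import numpy as np

def federated_average(weight_sets):
    """FedAvg-style aggregation: element-wise mean of per-agent weights.

    weight_sets: list of dicts mapping parameter names to arrays, one dict
    per agent. All agents are assumed to share the same model architecture.
    """
    return {name: np.mean([w[name] for w in weight_sets], axis=0)
            for name in weight_sets[0]}
```

After aggregation, the averaged parameters are broadcast back to every agent, so knowledge gained by one car (simulated or real) benefits the rest.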


Object Detection under Rainy Conditions for Autonomous Vehicles

Jul 10, 2020
Mazin Hnewa, Hayder Radha

Advanced automotive active-safety systems in general, and autonomous vehicles in particular, rely heavily on visual data to classify and localize objects such as pedestrians, traffic signs and lights, and other nearby cars, helping the vehicles maneuver safely in their environments. However, the performance of object detection methods can degrade significantly under challenging weather scenarios, including rainy conditions. Despite major advances in the development of deraining approaches, the impact of rain on object detection has largely been understudied, especially in the context of autonomous driving. The main objective of this paper is to present a tutorial on state-of-the-art and emerging techniques that represent leading candidates for mitigating the influence of rainy conditions on an autonomous vehicle's ability to detect objects. Our goal includes surveying and analyzing the performance of object detection methods trained and tested using visual data captured under clear and rainy conditions. Moreover, we survey and evaluate the efficacy and limitations of leading deraining approaches, deep-learning-based domain adaptation, and image translation frameworks that are being considered for addressing object detection under rainy conditions. Experimental results of a variety of the surveyed techniques are presented as part of this tutorial.

* Accepted in IEEE Signal Processing Magazine / Special Issue on Autonomous Driving
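A simple way to probe the kind of detector robustness studied above is to overlay synthetic rain streaks on clean images before evaluation. The sketch below is a toy augmentation under stated assumptions (grayscale image in [0, 1], vertical streaks), not one of the surveyed deraining or domain-adaptation methods:

```python
import numpy as np

def add_rain(image, streak_density=0.01, streak_length=8,
             intensity=0.8, seed=0):
    """Overlay simple vertical rain streaks on a grayscale image in [0, 1].

    All parameters are illustrative; real rain rendering models streak
    angle, motion blur, and atmospheric veiling as well.
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape
    rainy = image.copy()
    n_streaks = int(streak_density * h * w)
    for _ in range(n_streaks):
        r, c = rng.integers(0, h), rng.integers(0, w)
        # Brighten a short vertical run of pixels to mimic a streak.
        rainy[r:r + streak_length, c] = np.clip(
            rainy[r:r + streak_length, c] + intensity, 0.0, 1.0)
    return rainy
```

Comparing detector accuracy on `image` versus `add_rain(image)` gives a crude first estimate of rain sensitivity before moving to real rainy-weather data.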