
"autonomous cars": models, code, and papers

Monitoring of Perception Systems: Deterministic, Probabilistic, and Learning-based Fault Detection and Identification

May 22, 2022
Pasquale Antonante, Heath Nilsen, Luca Carlone

This paper investigates runtime monitoring of perception systems. Perception is a critical component of high-integrity applications of robotics and autonomous systems, such as self-driving cars. In these applications, failure of perception systems may put human life at risk, and broad adoption of these technologies requires the development of methodologies to guarantee and monitor safe operation. Despite the paramount importance of perception, there is currently no formal approach for system-level perception monitoring. In this paper, we formalize the problem of runtime fault detection and identification in perception systems and present a framework to model diagnostic information using a diagnostic graph. We then provide a set of deterministic, probabilistic, and learning-based algorithms that use diagnostic graphs to perform fault detection and identification. Moreover, we investigate fundamental limits and provide deterministic and probabilistic guarantees on the fault detection and identification results. We conclude the paper with an extensive experimental evaluation, which recreates several realistic failure modes in the LGSVL open-source autonomous driving simulator and applies the proposed system monitors to a state-of-the-art autonomous driving software stack (Baidu's Apollo Auto). The results show that the proposed system monitors outperform baselines, have the potential to prevent accidents in realistic autonomous driving scenarios, and incur negligible computational overhead.
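
To make the diagnostic-graph idea concrete, here is a minimal, hypothetical sketch (not the authors' implementation; module and test names are invented): perception modules and consistency tests form a bipartite graph, a passing test exonerates the modules it covers, and a failing test implicates its remaining modules.

```python
# Hypothetical sketch of deterministic fault identification on a
# diagnostic graph; not the paper's implementation.
# Each test covers a set of perception modules and reports pass/fail.
tests = {
    "lidar_vs_camera_depth": ({"lidar_detector", "camera_detector"}, False),
    "camera_vs_map_lanes":   ({"camera_detector", "lane_estimator"}, True),
    "tracker_smoothness":    ({"object_tracker"}, True),
}

def diagnose(tests):
    """A passing test exonerates every module it covers; a failing test
    implicates the modules it covers that were not exonerated."""
    exonerated = set()
    for scope, passed in tests.values():
        if passed:
            exonerated |= scope
    suspects = set()
    for scope, passed in tests.values():
        if not passed:
            suspects |= scope - exonerated
    return suspects

print(diagnose(tests))  # {'lidar_detector'}
```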

  

MADRaS : Multi Agent Driving Simulator

Oct 02, 2020
Anirban Santara, Sohan Rudra, Sree Aditya Buridi, Meha Kaushik, Abhishek Naik, Bharat Kaul, Balaraman Ravindran

In this work, we present MADRaS, an open-source multi-agent driving simulator for use in the design and evaluation of motion planning algorithms for autonomous driving. MADRaS provides a platform for constructing a wide variety of highway and track driving scenarios where multiple driving agents can train for motion planning tasks using reinforcement learning and other machine learning algorithms. MADRaS is built on TORCS, an open-source car-racing simulator. TORCS offers a variety of cars with different dynamic properties and driving tracks with different geometries and surface properties. MADRaS inherits these functionalities from TORCS and introduces support for multi-agent training, inter-vehicular communication, noisy observations, stochastic actions, and custom traffic cars whose behaviours can be programmed to simulate challenging traffic conditions encountered in the real world. MADRaS can be used to create driving tasks whose complexities can be tuned along eight axes in well-defined steps. This makes it particularly suited for curriculum and continual learning. MADRaS is lightweight and it provides a convenient OpenAI Gym interface for independent control of each car. Apart from the primitive steering-acceleration-brake control mode of TORCS, MADRaS offers a hierarchical track-position -- speed control that can potentially be used to achieve better generalization. MADRaS uses multiprocessing to run each agent as a parallel process for efficiency and integrates well with popular reinforcement learning libraries like RLLib.
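
The per-car Gym interface MADRaS provides can be pictured with a stand-in environment. The class below is a hypothetical sketch using the classic (pre-0.26) Gym API; the real MADRaS environment names, observation layout, and rewards differ and are not shown here.

```python
# Hypothetical stand-in for MADRaS's per-car Gym interface; real MADRaS
# environment names, observation layout, and rewards differ. Uses the
# classic (pre-0.26) Gym API.
import numpy as np
import gym
from gym import spaces

class ToyCarEnv(gym.Env):
    """One car: steering/acceleration/brake in, a sensor vector out."""
    def __init__(self):
        self.action_space = spaces.Box(
            low=np.array([-1.0, 0.0, 0.0], dtype=np.float32),  # steer, accel, brake
            high=np.array([1.0, 1.0, 1.0], dtype=np.float32))
        self.observation_space = spaces.Box(
            low=-np.inf, high=np.inf, shape=(29,), dtype=np.float32)

    def reset(self):
        return np.zeros(29, dtype=np.float32)

    def step(self, action):
        obs = np.zeros(29, dtype=np.float32)  # a real env returns sensor readings
        reward, done, info = 0.0, False, {}
        return obs, reward, done, info

env = ToyCarEnv()
obs = env.reset()
for _ in range(10):
    obs, reward, done, info = env.step(env.action_space.sample())
    if done:
        obs = env.reset()
```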

  

YouTube AV 50K: An Annotated Corpus for Comments in Autonomous Vehicles

Oct 15, 2018
Tao Li, Lei Lin, Minsoo Choi, Kaiming Fu, Siyuan Gong, Jian Wang

With one billion monthly viewers, and millions of users discussing and sharing opinions, comments below YouTube videos are rich sources of data for opinion mining and sentiment analysis. We introduce the YouTube AV 50K dataset, a freely available collection of more than 50,000 YouTube comments and metadata below autonomous vehicle (AV)-related videos. We describe its creation process, its content and data format, and discuss its possible usages. In particular, we present a case study of the first self-driving car fatality to evaluate the dataset, and show how it can be used to better understand public attitudes toward self-driving cars and public reactions to the accident. Future developments of the dataset are also discussed.

* in Proceedings of the Thirteenth International Joint Symposium on Artificial Intelligence and Natural Language Processing (iSAI-NLP 2018) 
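
As one illustration of the "possible usages" mentioned above, the sketch below runs a naive lexicon-based sentiment score over comment records. The corpus's real schema is not reproduced here; a record with a "text" field is assumed, and the two records are made-up stand-ins for the >50,000 real ones.

```python
# One possible usage: a naive lexicon sentiment score over comment records.
# The corpus's real schema is not shown here; a "text" field is assumed.
POS = {"safe", "great", "amazing", "trust", "love"}
NEG = {"dangerous", "scary", "crash", "fatality", "distrust"}

def score(text):
    words = text.lower().split()
    return sum(w in POS for w in words) - sum(w in NEG for w in words)

comments = [  # stand-in records
    {"text": "Self-driving cars feel dangerous after the crash"},
    {"text": "I trust the technology, it is amazing"},
]
for c in comments:
    print(score(c["text"]), c["text"])
```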
  

Computer Vision based Accident Detection for Autonomous Vehicles

Dec 20, 2020
Dhananjai Chand, Savyasachi Gupta, Ilaiah Kavati

Numerous Deep Learning and sensor-based models have been developed to detect potential accidents involving an autonomous vehicle itself. However, a self-driving car also needs to be able to detect accidents between other vehicles in its path and take appropriate actions, such as slowing down or stopping and informing the concerned authorities. In this paper, we propose a novel support system for self-driving cars that detects vehicular accidents through a dashboard camera. The system leverages the Mask R-CNN framework for vehicle detection and a centroid tracking algorithm to track the detected vehicles. Additionally, the framework calculates various parameters such as speed, acceleration, and trajectory to determine whether an accident has occurred between any of the tracked vehicles. The framework has been tested on a custom dataset of dashcam footage and achieves a high accident detection rate while maintaining a low false alarm rate.

* Contains 6 pages & 4 figures. Presented at the 17th INDICON 2020 
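
A hedged sketch of the tracking stage described above: greedy centroid matching between consecutive frames plus a finite-difference speed estimate. Mask R-CNN detection is assumed to have already produced the per-frame centroids; the frame rate (dt) and pixel-to-metre scale are illustrative assumptions.

```python
# Hedged sketch of the tracking stage: greedy centroid matching between
# consecutive frames plus finite-difference speed. dt and px_per_m are
# illustrative assumptions.
import numpy as np

def match_centroids(prev, curr, max_dist=50.0):
    """Associate each previous centroid with its nearest unused current one."""
    matches, used = {}, set()
    for i, p in enumerate(prev):
        best, best_d = None, max_dist
        for j, c in enumerate(curr):
            d = np.linalg.norm(p - c)
            if j not in used and d < best_d:
                best, best_d = j, d
        if best is not None:
            matches[i] = best
            used.add(best)
    return matches

def speeds(prev, curr, matches, dt=1 / 30, px_per_m=20.0):
    """Finite-difference speed in m/s for each matched track."""
    return {i: np.linalg.norm(curr[j] - prev[i]) / px_per_m / dt
            for i, j in matches.items()}

prev = np.array([[100.0, 200.0], [400.0, 220.0]])  # centroids in frame t-1
curr = np.array([[108.0, 201.0], [400.0, 260.0]])  # centroids in frame t
m = match_centroids(prev, curr)
print(m, speeds(prev, curr, m))
```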
  

End-to-End Velocity Estimation For Autonomous Racing

Mar 15, 2020
Sirish Srinivasan, Inkyu Sa, Alex Zyner, Victor Reijgwart, Miguel I. Valls, Roland Siegwart

Velocity estimation plays a central role in driverless vehicles, but standard and affordable methods struggle to cope with extreme scenarios like aggressive maneuvers due to the presence of high sideslip. To solve this, autonomous race cars are usually equipped with expensive external velocity sensors. In this paper, we present an end-to-end recurrent neural network that takes available raw sensors as input (IMU, wheel odometry, and motor currents) and outputs velocity estimates. The results are compared to two state-of-the-art Kalman filters, which respectively include and exclude expensive velocity sensors. All methods have been extensively tested on a formula student driverless race car with very high sideslip (10° at the rear axle) and slip ratio (~20%), operating close to the limits of handling. The proposed network is able to estimate lateral velocity up to 15x better than the Kalman filter with the equivalent sensor input and matches (0.06 m/s RMSE) the Kalman filter with the expensive velocity sensor setup.

* Submitted to RA-L + IROS 2020 
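
The end-to-end estimator can be sketched as a small recurrent network mapping raw sensor channels to planar velocity. Layer sizes and the 10-channel input layout (IMU, wheel odometry, motor currents) below are illustrative assumptions, not the paper's architecture.

```python
# Hedged sketch of an end-to-end recurrent velocity estimator: raw sensor
# channels in, planar velocity (vx, vy) out. All sizes are assumptions.
import torch
import torch.nn as nn

class VelocityRNN(nn.Module):
    def __init__(self, n_inputs=10, hidden=64):
        super().__init__()
        self.gru = nn.GRU(n_inputs, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)      # (vx, vy)

    def forward(self, x):                     # x: (batch, time, n_inputs)
        h, _ = self.gru(x)
        return self.head(h)                   # a velocity estimate per timestep

model = VelocityRNN()
x = torch.randn(4, 100, 10)                   # 4 sequences, 100 steps each
v_hat = model(x)                              # shape (4, 100, 2)
loss = nn.functional.mse_loss(v_hat, torch.zeros_like(v_hat))
loss.backward()                               # trainable end to end
```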
  

Crack-pot: Autonomous Road Crack and Pothole Detection

Sep 09, 2018
Sukhad Anand, Saksham Gupta, Vaibhav Darbari, Shivam Kohli

With the advent of self-driving cars and autonomous robots, it is imperative to detect road impairments like cracks and potholes and to perform the necessary evading maneuvers to ensure a smooth journey for on-board passengers or equipment. We propose a fully autonomous, robust, real-time road crack and pothole detection algorithm which can be deployed on any conventional GPU-based processing board with an associated camera. The approach is based on a deep neural net architecture which detects cracks and potholes using texture and spatial features. We also propose pre-processing methods which ensure real-time performance. The novelty of the approach lies in using texture-based features to differentiate between crack surfaces and sound roads. The approach performs well under large viewpoint changes, background noise, shadows, and occlusion. The efficacy of the system is shown on standard road crack datasets.

* Submitted at DICTA 2018 
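
A minimal sketch of a patch-level road-surface classifier in the spirit of the approach (sound road vs. crack vs. pothole). The paper's actual architecture and texture features are not reproduced; every size below is an assumption.

```python
# Minimal sketch of a patch-level road-surface classifier (sound road,
# crack, pothole). Sizes are assumptions, not the paper's architecture.
import torch
import torch.nn as nn

class PatchNet(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):                     # x: (batch, 3, 64, 64) patches
        return self.classifier(self.features(x).flatten(1))

net = PatchNet()
logits = net(torch.randn(8, 3, 64, 64))
print(logits.shape)                           # torch.Size([8, 3])
```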
  

GAMMA: A General Agent Motion Prediction Model for Autonomous Driving

Jun 04, 2019
Yuanfu Luo, Panpan Cai

Autonomous driving in mixed traffic requires reliable motion prediction of nearby traffic agents such as pedestrians, bicycles, cars, and buses. This prediction problem is extremely challenging because of the diverse dynamics and geometry of traffic agents, complex road conditions, and the intensive interactions among them. In this paper, we propose GAMMA, a general agent motion prediction model for autonomous driving that can predict the motion of heterogeneous traffic agents with different kinematics and geometry, and generate multiple hypotheses of trajectories by inferring human agents' inner states. GAMMA formalizes motion prediction as a geometric optimization problem in velocity space, and integrates physical constraints and human inner states into this unified framework. Our results show that GAMMA significantly outperforms both traditional and deep learning approaches on diverse real-world datasets.
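
The velocity-space formulation can be illustrated with a toy sampler: draw candidate velocities, reject those whose constant-velocity rollout collides with a neighbour, and keep the one closest to the agent's preferred velocity. GAMMA's real constraints and solver are far richer; everything below is a simplified assumption.

```python
# Toy illustration of prediction as geometric optimization in velocity
# space; not GAMMA's actual solver or constraint set.
import numpy as np

def predict_velocity(pos, v_pref, others, v_max=2.0, radius=0.5,
                     horizon=2.0, n_samples=500, seed=0):
    rng = np.random.default_rng(seed)
    candidates = rng.uniform(-v_max, v_max, size=(n_samples, 2))
    best, best_cost = np.zeros(2), np.inf     # fall back to stopping
    for v in candidates:
        clear = all(
            min(np.linalg.norm((pos + v * t) - (op + ov * t))
                for t in np.linspace(0.0, horizon, 20)) > 2 * radius
            for op, ov in others)
        cost = np.linalg.norm(v - v_pref)
        if clear and cost < best_cost:
            best, best_cost = v, cost
    return best

pos = np.array([0.0, 0.0])
v_pref = np.array([1.0, 0.0])                  # wants to go straight ahead
others = [(np.array([2.0, 0.0]), np.array([-1.0, 0.0]))]  # head-on neighbour
print(predict_velocity(pos, v_pref, others))   # veers around the neighbour
```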

  

Open-World Active Learning with Stacking Ensemble for Self-Driving Cars

Sep 10, 2021
Paulo R. Vieira, Pedro D. Félix, Luis Macedo

The environments in which autonomous cars operate are high-risk, dynamic, and full of uncertainty, demanding continuous updates of their sensory information and knowledge bases. Unknown objects are encountered so frequently that classical Artificial Intelligence (AI) classification models, which usually rely on the closed-world assumption, are hard to apply. The problem of classifying objects in this domain is better addressed with an open-world AI approach. We propose an algorithm that not only identifies all the known entities that may appear in front of the car, but also detects and learns the classes of unknown objects that are rarely found on a highway (e.g., a box lost from a truck). Our approach relies on the DOC algorithm of Lei Shu et al. as well as on the Query-by-Committee algorithm.
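
A hedged sketch of the two named ingredients: DOC-style rejection (an example no class claims with enough confidence becomes "unknown") and Query-by-Committee (among unknowns, a human labels the examples a model committee disputes most, measured by vote entropy). The scores and votes below are synthetic stand-ins for real classifier outputs.

```python
# Synthetic sketch of open-world rejection plus committee-based querying;
# all scores and votes are stand-ins for real classifier outputs.
import numpy as np

def reject_unknown(class_scores, threshold=0.5):
    """Unknown if no per-class (sigmoid-style) score clears the threshold."""
    return np.max(class_scores, axis=1) < threshold

def vote_entropy(committee_votes, n_classes):
    """Disagreement of a committee of classifiers, one column per example."""
    n_members = committee_votes.shape[0]
    entropies = []
    for votes in committee_votes.T:
        p = np.bincount(votes, minlength=n_classes) / n_members
        p = p[p > 0]
        entropies.append(-np.sum(p * np.log(p)))
    return np.array(entropies)

scores = np.array([[0.9, 0.05], [0.3, 0.4], [0.2, 0.1]])  # per-class scores
unknown = reject_unknown(scores)                   # [False, True, True]
votes = np.array([[0, 1, 1],                       # 3 committee members,
                  [0, 0, 1],                       # 3 examples
                  [0, 1, 0]])
query_order = np.argsort(-vote_entropy(votes, 2))  # most disputed first
print(unknown, query_order)
```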

  

Range Image-based LiDAR Localization for Autonomous Vehicles

May 25, 2021
Xieyuanli Chen, Ignacio Vizzo, Thomas Läbe, Jens Behley, Cyrill Stachniss

Robust and accurate map-based localization is crucial for autonomous mobile systems. In this paper, we exploit range images generated from 3D LiDAR scans to address the problem of localizing mobile robots or autonomous cars in a map of a large-scale outdoor environment represented by a triangular mesh. We use Poisson surface reconstruction to generate the mesh-based map representation. Based on the range images generated from the current LiDAR scan and the synthetic rendered views from the mesh-based map, we propose a new observation model and integrate it into a Monte Carlo localization framework, which achieves better localization performance and generalizes well to different environments. We test the proposed localization approach on multiple datasets collected in different environments with different LiDAR scanners. The experimental results show that our method can reliably and accurately localize a mobile system in different environments and operate online at the LiDAR sensor frame rate to track the vehicle pose.

* Accepted by ICRA 2021. Code: https://github.com/PRBonn/range-mcl. arXiv admin note: text overlap with arXiv:2105.11717 
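
The Monte Carlo localization loop the paper builds on can be sketched with a toy particle filter. The render_from_map function below stands in for rendering a synthetic range image from the mesh map at a particle pose; the 1D corridor "map", noise levels, and motion model are illustrative assumptions.

```python
# Toy Monte Carlo localization loop. render_from_map() stands in for
# rendering a synthetic range image from the mesh map; here the "map" is
# a 1D corridor with walls at 0 and 10.
import numpy as np

rng = np.random.default_rng(1)
true_pose = 4.2                                # vehicle pose along the corridor

def render_from_map(pose):
    """Expected ranges to the two walls from a given pose."""
    return np.array([10.0 - pose, pose])

def likelihood(scan, pose, sigma=0.3):
    return np.exp(-np.sum((scan - render_from_map(pose)) ** 2) / (2 * sigma**2))

particles = rng.uniform(0.0, 10.0, size=500)   # initial uniform belief
weights = np.full(500, 1 / 500)
for _ in range(10):
    particles += rng.normal(0.1, 0.05, 500)    # motion update (toy odometry)
    true_pose += 0.1
    scan = render_from_map(true_pose) + rng.normal(0.0, 0.3, 2)  # noisy "scan"
    weights = weights * np.array([likelihood(scan, p) for p in particles])
    weights /= weights.sum()
    keep = rng.choice(500, size=500, p=weights)                  # resample
    particles, weights = particles[keep], np.full(500, 1 / 500)

print(particles.mean(), "vs true pose", true_pose)
```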
  

Model Learning and Contextual Controller Tuning for Autonomous Racing

Oct 06, 2021
Lukas P. Fröhlich, Christian Küttel, Elena Arcari, Lukas Hewing, Melanie N. Zeilinger, Andrea Carron

Model predictive control has been widely used in the field of autonomous racing, and many data-driven approaches have been proposed to improve closed-loop performance and minimize lap time. However, it is often overlooked that when environmental conditions change, e.g., when it starts raining, it is not only the predictive model that must be adapted; the controller parameters also need to be adjusted. In this paper, we address this challenge with the goal of requiring only a small amount of data. The key novelty of the proposed approach is that we leverage the learned dynamics model to encode the environmental condition as context. This insight allows us to employ contextual Bayesian optimization, thus accelerating the controller tuning problem when the environment changes and transferring knowledge across different cars. The proposed framework is validated on an experimental platform with 1:28 scale RC race cars. We perform an extensive evaluation with more than 2,000 driven laps, demonstrating that our approach successfully optimizes lap time across different contexts faster than standard Bayesian optimization.
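
Contextual Bayesian optimization can be sketched by fitting a Gaussian process over (controller parameter, context) pairs, so laps driven in one condition inform tuning in another. The lap-time function, scalar context encoding, and acquisition weight below are illustrative assumptions, not the authors' setup.

```python
# Toy contextual Bayesian optimization for controller tuning; the lap-time
# function and context encoding are made-up stand-ins.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def lap_time(theta, context):                  # stand-in for a real lap
    return (theta - (0.5 + 0.3 * context)) ** 2 + 0.05 * rng.normal()

X, y = [], []
for _ in range(10):                            # seed data in a "dry" context
    t = rng.uniform(0.0, 1.0)
    X.append([t, 0.0]); y.append(lap_time(t, 0.0))

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=1e-2)
context = 1.0                                  # new condition, e.g. wet track
for _ in range(15):                            # lower-confidence-bound BO loop
    gp.fit(np.array(X), np.array(y))
    cand = np.column_stack([np.linspace(0, 1, 200), np.full(200, context)])
    mu, sd = gp.predict(cand, return_std=True)
    t = cand[np.argmin(mu - 2.0 * sd), 0]      # optimistic pick (minimization)
    X.append([t, context]); y.append(lap_time(t, context))

print("tuned parameter:", t, "(toy optimum is 0.8)")
```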

  