"autonomous cars": models, code, and papers

Arena-Bench: A Benchmarking Suite for Obstacle Avoidance Approaches in Highly Dynamic Environments

Jun 12, 2022
Linh Kästner, Teham Bhuiyan, Tuan Anh Le, Elias Treis, Johannes Cox, Boris Meinardus, Jacek Kmiecik, Reyk Carstens, Duc Pichel, Bassel Fatloun, Niloufar Khorsandi, Jens Lambrecht

The ability to navigate safely and autonomously, especially within dynamic environments, is paramount for mobile robotics. In recent years, deep reinforcement learning (DRL) approaches have shown superior performance in dynamic obstacle avoidance. However, these learning-based approaches are often developed in specially designed simulation environments and are hard to test against conventional planning approaches. Furthermore, the integration and deployment of these approaches on real robotic platforms is not yet completely solved. In this paper, we present Arena-Bench, a benchmark suite to train, test, and evaluate navigation planners on different robotic platforms within 3D environments. It provides tools to design and generate highly dynamic evaluation worlds, scenarios, and tasks for autonomous navigation and is fully integrated into the Robot Operating System. To demonstrate the functionality of our suite, we trained a DRL agent on our platform and compared it against several existing model-based and learning-based navigation approaches on a range of relevant metrics. Finally, we deployed the approaches on real robots and demonstrated the reproducibility of the results. The code is publicly available at github.com/ignc-research/arena-bench.

* Accepted for publication in Robotics and Automation Letters (RA-L), 2022, 8 pages, 6 figures 
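
As a rough illustration of the kind of per-planner metric aggregation such a benchmark performs, the sketch below reduces episode logs to success rate, collisions, path length, and duration. The Episode fields and function names are hypothetical, not Arena-Bench's actual API.

```python
# Hypothetical sketch: aggregating per-episode navigation metrics of the kind a
# navigation benchmark reports (success rate, collisions, path length, duration).
from dataclasses import dataclass
from statistics import mean

@dataclass
class Episode:
    reached_goal: bool
    collisions: int
    path_length_m: float
    duration_s: float

def summarize(episodes):
    """Reduce a list of episode logs to the metrics used to compare planners."""
    return {
        "success_rate": mean(e.reached_goal for e in episodes),
        "mean_collisions": mean(e.collisions for e in episodes),
        "mean_path_length_m": mean(e.path_length_m for e in episodes),
        "mean_duration_s": mean(e.duration_s for e in episodes),
    }

# Example: compare a DRL planner against a model-based baseline on the same scenarios.
print(summarize([Episode(True, 0, 12.4, 18.0), Episode(False, 2, 7.1, 30.0)]))
```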
  

Unsupervised adaptation of brain machine interface decoders

Jun 16, 2012
Tayfun Gürel, Carsten Mehring

The performance of neural decoders can degrade over time due to nonstationarities in the relationship between neuronal activity and behavior. In this case, brain-machine interfaces (BMIs) require adaptation of their decoders to maintain high performance across time. One way to achieve this is the use of periodic calibration phases, during which the BMI system (or an external human demonstrator) instructs the user to perform certain movements or behaviors. This approach has two disadvantages: (i) calibration phases interrupt the autonomous operation of the BMI, and (ii) between two calibration phases the BMI performance may not be stable but may continuously decrease. A better alternative would be a BMI decoder that can continuously adapt in an unsupervised manner during autonomous BMI operation, i.e. without knowing the movement intentions of the user. In the present article, we present an efficient method for such unsupervised training of BMI systems for continuous movement control. The proposed method utilizes a cost function derived from neuronal recordings, which guides a learning algorithm to evaluate the decoding parameters. We verify the performance of our adaptive method by simulating a BMI user with an optimal feedback control model and its interaction with our adaptive BMI decoder. The simulation results show that the cost function and the algorithm yield fast and precise trajectories towards targets at random orientations on a 2-dimensional computer screen. For initially unknown and non-stationary tuning parameters, our unsupervised method is still able to generate precise trajectories and to keep its performance stable in the long term. The algorithm can optionally also work with neuronal error signals instead of, or in conjunction with, the proposed unsupervised adaptation.

* 28 pages, 13 figures, submitted to Frontiers in Neuroprosthetics 
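
A minimal sketch of the idea of unsupervised decoder adaptation described above: a linear decoder maps firing rates to cursor velocity, and its weights are adjusted by gradient descent on a cost computed from the recordings alone, without known movement intent. The quadratic smoothness cost and finite-difference gradient below are stand-ins, not the paper's actual cost function or learning rule.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, dim = 30, 2
W = rng.normal(scale=0.1, size=(dim, n_neurons))  # linear decoder weights

def surrogate_cost(W, firing_rates):
    # Placeholder cost: penalize erratic decoded velocities across time bins.
    v = firing_rates @ W.T                      # (T, 2) decoded velocities
    return np.sum(np.diff(v, axis=0) ** 2)

def adapt(W, firing_rates, lr=1e-4, eps=1e-3):
    # One unsupervised step: finite-difference gradient of the cost w.r.t. W.
    grad = np.zeros_like(W)
    base = surrogate_cost(W, firing_rates)
    for idx in np.ndindex(W.shape):
        Wp = W.copy(); Wp[idx] += eps
        grad[idx] = (surrogate_cost(Wp, firing_rates) - base) / eps
    return W - lr * grad

firing_rates = rng.poisson(5, size=(100, n_neurons)).astype(float)
W = adapt(W, firing_rates)   # adaptation during autonomous BMI operation
```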
  

Autonomous Racing using Learning Model Predictive Control

Nov 09, 2017
Ugo Rosolia, Ashwin Carvalho, Francesco Borrelli

A novel learning Model Predictive Control technique is applied to the autonomous racing problem. The goal of the controller is to minimize the time to complete a lap. The proposed control strategy uses the data from previous laps to improve its performance while satisfying safety requirements. Moreover, a system identification technique is proposed to estimate the vehicle dynamics. Simulation results with the high-fidelity vehicle simulator CarSim show the effectiveness of the proposed control scheme.

* Extended version of the paper accepted to ACC 
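
A hedged sketch of the lap-to-lap data reuse at the heart of learning MPC: states visited in previous laps form a sampled safe set, each carrying a cost-to-go (here, time steps remaining in its lap), which the next lap's controller can use as a terminal constraint and terminal cost. The MPC problem itself (vehicle model, constraints, solver) is omitted, and all names are illustrative.

```python
import numpy as np

class SampledSafeSet:
    def __init__(self):
        self.states, self.cost_to_go = [], []

    def add_lap(self, lap_states):
        """Store a completed lap; cost-to-go of state k is the steps left to finish."""
        n = len(lap_states)
        self.states.extend(lap_states)
        self.cost_to_go.extend(range(n - 1, -1, -1))

    def terminal_candidates(self, x, k=5):
        """Return the k stored states nearest to x and their costs-to-go, used as
        terminal constraint / terminal cost when solving the next lap's MPC."""
        d = np.linalg.norm(np.asarray(self.states) - x, axis=1)
        idx = np.argsort(d)[:k]
        return [self.states[i] for i in idx], [self.cost_to_go[i] for i in idx]

ss = SampledSafeSet()
ss.add_lap([np.array([s, 0.0]) for s in np.linspace(0, 10, 50)])  # a first feasible lap
print(ss.terminal_candidates(np.array([5.0, 0.0]))[1])
```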
  

Exploring the Limitations of Behavior Cloning for Autonomous Driving

Apr 18, 2019
Felipe Codevilla, Eder Santana, Antonio M. López, Adrien Gaidon

Driving requires reacting to a wide variety of complex environment conditions and agent behaviors. Explicitly modeling each possible scenario is unrealistic. In contrast, imitation learning can, in theory, leverage data from large fleets of human-driven cars. Behavior cloning in particular has been successfully used to learn simple visuomotor policies end-to-end, but scaling to the full spectrum of driving behaviors remains an unsolved problem. In this paper, we propose a new benchmark to experimentally investigate the scalability and limitations of behavior cloning. We show that behavior cloning leads to state-of-the-art results, including in unseen environments, executing complex lateral and longitudinal maneuvers without these reactions being explicitly programmed. However, we also confirm well-known limitations (due to dataset bias and overfitting), identify new generalization issues (due to dynamic objects and the lack of a causal model), and observe training instability, all of which require further research before behavior cloning can graduate to real-world driving. The code of the studied behavior cloning approaches can be found at https://github.com/felipecode/coiltraine .
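
For orientation, a minimal behavior-cloning sketch: a small network regresses controls from camera frames and is trained to match recorded human actions. This is a generic illustration, not the CoILTraine architecture; layer sizes, input resolution, and the two-dimensional control output are placeholders.

```python
import torch
import torch.nn as nn

class TinyDrivingNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 2)   # [steering, throttle]

    def forward(self, img):
        return self.head(self.features(img))

net = TinyDrivingNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
imgs = torch.rand(8, 3, 88, 200)          # a batch of camera frames
expert = torch.rand(8, 2)                 # the demonstrator's recorded controls
loss = nn.functional.mse_loss(net(imgs), expert)   # clone the expert's actions
opt.zero_grad(); loss.backward(); opt.step()
```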

  

Vehicular Teamwork: Collaborative localization of Autonomous Vehicles

Apr 29, 2021
Jacob Hartzer, Srikanth Saripalli

This paper develops a distributed collaborative localization algorithm based on an extended Kalman filter. The algorithm incorporates Ultra-Wideband (UWB) measurements for vehicle-to-vehicle ranging and shows improvements in localization accuracy where GPS typically falls short. The algorithm was first tested in a newly created open-source simulation environment that emulates various numbers of vehicles and sensors while simultaneously testing multiple localization algorithms. Predicted error distributions for the various algorithms are quickly producible using Monte Carlo methods and optimization techniques within MATLAB. The simulation results were validated experimentally in an outdoor, urban environment. Improvements in localization accuracy over a typical extended Kalman filter ranged from 2.9% to 9.3% over 180-meter test runs. When GPS was denied, these improvements increased to up to 83.3% over a standard Kalman filter. In both simulation and experiments, the DCL algorithm was shown to be a good approximation of a full-state filter while reducing the required communication between vehicles. These results are promising in showing the efficacy of adding UWB ranging sensors to cars for collaborative and landmark localization, especially in GPS-denied environments. In the future, additional moving vehicles with additional tags will be tested in other challenging GPS-denied environments.
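
A sketch of the kind of EKF measurement update that fuses a UWB range to a neighboring vehicle, assuming the neighbor's position is shared over the vehicle-to-vehicle link. The two-dimensional state, noise value, and single-vehicle focus are simplifications of the paper's distributed filter.

```python
import numpy as np

def uwb_range_update(x, P, other_pos, z_range, sigma_uwb=0.3):
    """x = [px, py] ego position estimate, P its covariance, z_range the measured
    UWB distance to a neighbor whose position other_pos is received over V2V."""
    diff = x - other_pos
    pred = np.linalg.norm(diff)                 # predicted range h(x)
    H = (diff / pred).reshape(1, 2)             # Jacobian of h w.r.t. x
    S = H @ P @ H.T + sigma_uwb ** 2            # innovation covariance
    K = P @ H.T / S                             # Kalman gain (S is scalar here)
    x_new = x + (K * (z_range - pred)).ravel()  # correct the position estimate
    P_new = (np.eye(2) - K @ H) @ P             # shrink the covariance
    return x_new, P_new

x, P = np.array([10.0, 5.0]), np.eye(2) * 4.0
x, P = uwb_range_update(x, P, other_pos=np.array([20.0, 5.0]), z_range=9.6)
```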

  

Development of Open Informal Dataset Affecting Autonomous Driving

Oct 14, 2020
Yong-Gu Lee, Seong-Jae Lee, Sang-Jin Lee, Tae-Seung Baek, Dong-Whan Lee, Kyeong-Chan Jang, Ho-Jin Sohn, Jin-Soo Kim

This document describes the procedures and methods used to collect on-road object and unstructured dynamic data for the development of object recognition technology for self-driving cars, and it outlines the data collection methods, annotation data, object classification criteria, and data processing methods. On-road object and unstructured dynamic data were collected in various environments, covering different weather, time, and traffic conditions, and additional data on signals given by police and safety personnel were collected. Finally, datasets were collected and built comprising 100,000 images of various objects present on pedestrian paths and roads, 200,000 images of police and traffic safety personnel, a further 5,000 images of police and traffic safety personnel, and an additional set of 5,000 images.

* 26 pages, 16 figures 
  

Autonomous navigation for low-altitude UAVs in urban areas

Feb 25, 2016
Thomas Castelli, Aidean Sharghi, Don Harper, Alain Tremeau, Mubarak Shah

In recent years, consumer Unmanned Aerial Vehicles have become very popular; anyone can buy and fly a drone without previous experience, which raises concerns regarding regulation and public safety. In this paper, we present a novel approach towards enabling safe operation of such vehicles in urban areas. Our method uses geodetically accurate dataset images together with Geographical Information System (GIS) data of road networks and buildings provided by Google Maps to compute a weighted A* shortest path from the start to the end location of a mission. Weights represent the potential risk of injury to individuals in each category of land use, i.e. flying over buildings is considered safer than flying above roads. We enable safe UAV operation with respect to (1) land use, by computing a static global path that depends on environmental structures, and (2) moving objects such as cars and pedestrians, by dynamically optimizing the path locally during flight. Since all input sources are first geo-registered, pixels and GPS coordinates are equivalent, which allows us to generate an automated and user-friendly mission with GPS waypoints readable by consumer drones' autopilots. We simulated 54 missions and show a significant improvement in maximizing the UAV's standoff distance to moving objects, with a quantified safety parameter over 40 times better than naive straight-line navigation.
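
A small sketch of a weighted A* search over a land-use grid in the spirit of the planner described above: each cell carries a risk weight (e.g. roads cost more to overfly than rooftops) and the path minimizes accumulated risk. The grid values and 4-connected neighborhood are illustrative choices, not the paper's exact cost model.

```python
import heapq

def weighted_astar(risk, start, goal):
    """risk: 2D list of per-cell weights; start/goal: (row, col) tuples."""
    rows, cols = len(risk), len(risk[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # admissible if min risk >= 1
    frontier, best, parent = [(h(start), 0.0, start)], {start: 0.0}, {start: None}
    while frontier:
        _, g, cur = heapq.heappop(frontier)
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur); cur = parent[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols:
                ng = g + risk[nxt[0]][nxt[1]]
                if ng < best.get(nxt, float("inf")):
                    best[nxt], parent[nxt] = ng, cur
                    heapq.heappush(frontier, (ng + h(nxt), ng, nxt))
    return None

grid = [[1, 1, 5],    # 1 = over buildings (safer), 5 = over a road (riskier)
        [1, 5, 5],
        [1, 1, 1]]
print(weighted_astar(grid, (0, 0), (2, 2)))
```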

  

Trusted Neural Networks for Safety-Constrained Autonomous Control

May 18, 2018
Shalini Ghosh, Amaury Mercier, Dheeraj Pichapati, Susmit Jha, Vinod Yegneswaran, Patrick Lincoln

We propose Trusted Neural Network (TNN) models, which are deep neural network models that satisfy safety constraints critical to the application domain. We investigate different mechanisms for incorporating rule-based knowledge in the form of first-order logic constraints into a TNN model, where rules that encode safety are accompanied by weights indicating their relative importance. This framework allows the TNN model to learn from knowledge available in the form of data as well as logical rules. We propose multiple approaches for solving this problem: (a) a multi-headed model structure that allows a trade-off between satisfying logical constraints and fitting training data in a unified training framework, and (b) creating a constrained optimization problem and solving it in dual formulation by posing a new constrained loss function and using a proximal gradient descent algorithm. We demonstrate the efficacy of our TNN framework through experiments using the open-source TORCS 3D simulator for self-driving cars. Experiments using our first approach of a multi-headed TNN model, on a dataset generated by a customized version of TORCS, show that (1) adding safety constraints to a neural network model results in increased performance and safety, and (2) the improvement increases with increasing importance of the safety constraints. Experiments were also performed using the second approach, the proximal algorithm for constrained optimization; they demonstrate how the proposed method ensures that (1) the overall TNN model satisfies the constraints even when the training data violates some of the constraints, and (2) the proximal gradient descent algorithm on the constrained objective converges faster than the unconstrained version.
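
An illustrative sketch of the general recipe of training with weighted safety rules: the total loss adds a penalty for violating a toy rule ("if an obstacle is near, predicted speed must stay below a limit"), scaled by an importance weight. The rule, its weight, and the tiny model are placeholders, not the paper's TNN formulation or its proximal-gradient variant.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))  # predicts speed
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.rand(32, 4)                 # features; feature 0 = distance to obstacle
y = torch.rand(32, 1) * 30.0          # recorded speeds (may themselves violate the rule)
rule_weight, speed_limit = 5.0, 10.0  # importance weight of the safety rule

pred = model(x)
data_loss = nn.functional.mse_loss(pred, y)            # fit the training data
near = (x[:, :1] < 0.2).float()                        # rule antecedent: obstacle is near
violation = torch.relu(pred - speed_limit) * near      # soft measure of rule violation
loss = data_loss + rule_weight * violation.mean()      # trade off data fit vs. safety
opt.zero_grad(); loss.backward(); opt.step()
```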

  

A Convolutional Neural Network Approach Towards Self-Driving Cars

Sep 09, 2019
Akhil Agnihotri, Prathamesh Saraf, Kriti Rajesh Bapnad

A convolutional neural network (CNN) approach is used to implement a level 2 autonomous vehicle by mapping pixels from the camera input to steering commands. The network automatically learns the most relevant features from the camera input and hence requires minimal human intervention. Given realistic frames as input, the driving policy trained on the NVIDIA and Udacity datasets can adapt to real-world driving in a controlled environment. The CNN is tested on the CARLA open-source driving simulator. Details of a beta-testing platform are also presented, which consists of an ultrasonic sensor for obstacle detection and an RGBD camera for real-time position monitoring at 10 Hz. An Arduino Mega and a Raspberry Pi are used for motor control and processing, respectively, to output the steering angle, which is converted to angular velocity for steering.

* 4 pages, 7 figures 
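
A rough sketch of a PilotNet-style CNN that maps a camera frame to a steering command, in the spirit of the NVIDIA end-to-end setup the paper builds on. Layer sizes and the input resolution are approximate, not the exact architecture used.

```python
import torch
import torch.nn as nn

class SteeringCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2), nn.ELU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ELU(),
            nn.Conv2d(36, 48, 5, stride=2), nn.ELU(),
            nn.Conv2d(48, 64, 3), nn.ELU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.fc = nn.Sequential(nn.Linear(64, 50), nn.ELU(), nn.Linear(50, 1))

    def forward(self, frame):
        # Output: steering angle, later converted to an angular velocity command.
        return self.fc(self.conv(frame))

net = SteeringCNN()
angle = net(torch.rand(1, 3, 66, 200))   # one 66x200 RGB frame -> steering angle
```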
  

Evaluating Uncertainty Quantification in End-to-End Autonomous Driving Control

Nov 16, 2018
Rhiannon Michelmore, Marta Kwiatkowska, Yarin Gal

A rise in popularity of Deep Neural Networks (DNNs), attributed to more powerful GPUs and widely available datasets, has seen them being increasingly used within safety-critical domains. One such domain, self-driving, has benefited from significant performance improvements, with millions of miles having been driven with no human intervention. Despite this, crashes and erroneous behaviours still occur, in part due to the complexity of verifying the correctness of DNNs and a lack of safety guarantees. In this paper, we demonstrate how quantitative measures of uncertainty can be extracted in real-time, and their quality evaluated in end-to-end controllers for self-driving cars. To this end we utilise a recent method for gathering approximate uncertainty information from DNNs without changing the network's architecture. We propose evaluation techniques for the uncertainty on two separate architectures which use the uncertainty to predict crashes up to five seconds in advance. We find that mutual information, a measure of uncertainty in classification networks, is a promising indicator of forthcoming crashes.

* 7 pages, 6 figures 
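
A small sketch of the mutual-information measure mentioned above, assuming dropout is left active at test time so that T stochastic forward passes yield T predictive distributions: MI is the predictive entropy minus the expected per-pass entropy, and it is large when the passes disagree. The softmax outputs below are dummies; the controller itself is not shown.

```python
import numpy as np

def mutual_information(probs, eps=1e-12):
    """probs: array of shape (T, n_classes), one softmax output per dropout pass."""
    mean_p = probs.mean(axis=0)
    predictive_entropy = -np.sum(mean_p * np.log(mean_p + eps))
    expected_entropy = -np.mean(np.sum(probs * np.log(probs + eps), axis=1))
    return predictive_entropy - expected_entropy   # high MI -> the passes disagree

confident = np.tile([0.9, 0.05, 0.05], (20, 1))          # all passes agree
disagreeing = np.vstack([np.tile([0.9, 0.05, 0.05], (10, 1)),
                         np.tile([0.05, 0.9, 0.05], (10, 1))])
print(mutual_information(confident), mutual_information(disagreeing))
```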
  