"autonomous cars": models, code, and papers

Deep, spatially coherent Inverse Sensor Models with Uncertainty Incorporation using the evidential Framework

Mar 29, 2019
Daniel Bauer, Lars Kuhnert, Lutz Eckstein

To perform high-speed tasks, sensors of autonomous cars have to provide as much information in as few time steps as possible. However, radars, one of the sensor modalities autonomous cars heavily rely on, often only provide sparse, noisy detections. These have to be accumulated over time to reach a high enough confidence about the static parts of the environment. For radars, the state is typically estimated by accumulating inverse detection models (IDMs). We employ the recently proposed evidential convolutional neural networks, which, in contrast to IDMs, compute dense, spatially coherent inference of the environment state. Moreover, these networks are able to incorporate sensor noise in a principled way, which we further extend to also incorporate model uncertainty. We present experimental results showing that this makes it possible to obtain a denser environment perception in fewer time steps.
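
For context, the classical baseline the abstract refers to accumulates per-cell evidential masses over time. Below is a minimal sketch of that accumulation with Dempster's rule over the frame {free, occupied}; the grid size, mass values, and random scans are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: evidential occupancy-grid accumulation with Dempster's rule
# over the frame {free, occupied}. Illustrates the classical accumulation of
# inverse detection models mentioned in the abstract; NOT the authors' code.
import numpy as np

def dempster_combine(m1, m2):
    """Combine two mass grids of shape (H, W, 3) with channels (free, occupied, unknown)."""
    f1, o1, u1 = m1[..., 0], m1[..., 1], m1[..., 2]
    f2, o2, u2 = m2[..., 0], m2[..., 1], m2[..., 2]
    conflict = f1 * o2 + o1 * f2                    # mass on contradictory hypotheses
    norm = np.clip(1.0 - conflict, 1e-6, None)      # Dempster normalisation
    free = (f1 * f2 + f1 * u2 + u1 * f2) / norm
    occ = (o1 * o2 + o1 * u2 + u1 * o2) / norm
    unk = (u1 * u2) / norm
    return np.stack([free, occ, unk], axis=-1)

# Start fully unknown and fuse a sequence of toy per-scan mass grids.
H, W = 64, 64
grid = np.zeros((H, W, 3)); grid[..., 2] = 1.0      # all mass on "unknown"
for _ in range(5):
    scan = np.zeros((H, W, 3)); scan[..., 2] = 0.9  # weak evidence per scan
    scan[..., 1] = 0.1 * np.random.rand(H, W)       # toy occupied evidence
    scan[..., 0] = 1.0 - scan[..., 1] - scan[..., 2]
    grid = dempster_combine(grid, scan)
```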

* Submitted to the Intelligent Vehicles Symposium 2019 
  

KittingBot: A Mobile Manipulation Robot for Collaborative Kitting in Automotive Logistics

Sep 14, 2018
Dmytro Pavlichenko, Germán Martín García, Seongyong Koo, Sven Behnke

Individualized manufacturing of cars requires kitting: the collection of individual sets of part variants for each car. This challenging logistic task is frequently performed manually by warehousemen. We propose a mobile manipulation robotic system for autonomous kitting, building on the KUKA Miiwa platform, which consists of an omnidirectional base, a 7-DoF collaborative iiwa manipulator, cameras, and distance sensors. Software modules for detection and pose estimation of transport boxes, part segmentation in these containers, recognition of part variants, grasp generation, and arm trajectory optimization have been developed and integrated. Our system is designed for collaborative kitting, i.e., some parts are collected by warehousemen while other parts are picked by the robot. To ensure safe human-robot collaboration, fast arm trajectory replanning that accounts for previously unforeseen obstacles is realized. The developed system was evaluated in the European Robotics Challenge 2, where the Miiwa robot demonstrated autonomous kitting, part variant recognition, and avoidance of unforeseen obstacles.
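
The abstract lists the software modules of the system. The skeleton below is only a hedged illustration of how such a kitting pipeline could be organised in code; every class and method name is a placeholder, not the interface of the actual KittingBot software.

```python
# Hedged sketch of a kitting pipeline organised as sequential software modules.
# All names are illustrative placeholders, not the real KittingBot interfaces.
from dataclasses import dataclass
from typing import List

@dataclass
class Grasp:
    part_id: str
    pose: tuple          # (x, y, z, roll, pitch, yaw) in the robot base frame

class KittingPipeline:
    def detect_boxes(self, rgbd_frame) -> List[tuple]:
        """Detect transport boxes and estimate their poses."""
        raise NotImplementedError

    def segment_parts(self, rgbd_frame, box_pose) -> List[dict]:
        """Segment individual parts inside a detected box."""
        raise NotImplementedError

    def recognise_variant(self, part_segment) -> str:
        """Classify which part variant a segment belongs to."""
        raise NotImplementedError

    def plan_grasp(self, part_segment, variant: str) -> Grasp:
        """Generate a feasible grasp for the recognised part."""
        raise NotImplementedError

    def pick(self, rgbd_frame, wanted_variants: List[str]) -> List[Grasp]:
        """Run the full detect -> segment -> recognise -> grasp chain."""
        grasps = []
        for box_pose in self.detect_boxes(rgbd_frame):
            for segment in self.segment_parts(rgbd_frame, box_pose):
                variant = self.recognise_variant(segment)
                if variant in wanted_variants:
                    grasps.append(self.plan_grasp(segment, variant))
        return grasps
```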

* Accepted and published at IAS-15 (http://conference.vde.com/ias/Pages/Homepage.aspx)
  

Efficient Autonomy Validation in Simulation with Adaptive Stress Testing

Jul 16, 2019
Mark Koren, Mykel Kochenderfer

During the development of autonomous systems such as driverless cars, it is important to characterize the scenarios that are most likely to result in failure. Adaptive Stress Testing (AST) provides a way to search for the most likely failure scenario by formulating the search as a Markov decision process (MDP). Our previous work used a deep reinforcement learning (DRL) solver to identify likely failure scenarios. However, the solver's use of a feed-forward neural network with a discretized space of possible initial conditions poses two major problems. First, the system is not treated as a black box, in that the solver requires access to the internal state of the system, which leads to considerable implementation complexities. Second, in order to simulate realistic settings, a new instance of the solver needs to be run for each initial condition. Running a new solver for each initial condition not only significantly increases the computational complexity, but also disregards the underlying relationship between similar initial conditions. We address both problems by employing a recurrent neural network that takes a set of initial conditions from a continuous space as input. This approach enables robust and efficient detection of failures because the solution generalizes across the entire space of initial conditions. By simulating an instance in which an autonomous car drives while a pedestrian is crossing the road, we demonstrate that the solver is now capable of finding solutions for problems that would previously have been intractable.
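
To make the key idea concrete, here is a hedged sketch of a recurrent policy that conditions on a continuous initial-condition vector and unrolls a sequence of disturbance actions. The plain tanh RNN cell, dimensions, and horizon are illustrative assumptions rather than the architecture used in the paper.

```python
# Hedged sketch: a recurrent policy mapping a continuous initial condition to a
# disturbance trajectory, the core idea behind generalising AST across initial
# conditions. Shapes and the tanh RNN cell are illustrative choices only.
import numpy as np

rng = np.random.default_rng(0)
cond_dim, hidden_dim, action_dim, horizon = 4, 32, 2, 50

# Randomly initialised parameters of a small tanh RNN "policy".
W_in = rng.normal(scale=0.1, size=(hidden_dim, cond_dim + action_dim))
W_h = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
W_out = rng.normal(scale=0.1, size=(action_dim, hidden_dim))

def rollout_disturbances(initial_condition):
    """Unroll the recurrent policy to propose a disturbance trajectory."""
    h = np.zeros(hidden_dim)
    action = np.zeros(action_dim)
    actions = []
    for _ in range(horizon):
        x = np.concatenate([initial_condition, action])
        h = np.tanh(W_in @ x + W_h @ h)
        action = W_out @ h                     # mean disturbance at this step
        actions.append(action)
    return np.stack(actions)

# One policy handles any point of the continuous initial-condition space.
print(rollout_disturbances(rng.uniform(-1, 1, size=cond_dim)).shape)  # (50, 2)
```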

* Submitted to IEEE ITSC 2019 
  

A Hierarchical Deep Architecture and Mini-Batch Selection Method For Joint Traffic Sign and Light Detection

Sep 13, 2018
Alex D. Pon, Oles Andrienko, Ali Harakeh, Steven L. Waslander

Traffic light and sign detectors on autonomous cars are integral for road scene perception. The literature is abundant with deep learning networks that detect either lights or signs, but not both, which makes them unsuitable for real-life deployment given the limited graphics processing unit (GPU) memory and power available on embedded systems. The root cause of this issue is that no public dataset contains both traffic light and sign labels, which leads to difficulties in developing a joint detection framework. We present a deep hierarchical architecture in conjunction with a mini-batch proposal selection mechanism that allows a network to detect both traffic lights and signs while training on separate traffic light and sign datasets. Our method solves the overlapping issue where instances from one dataset are not labelled in the other dataset. We are the first to present a network that performs joint detection of traffic lights and signs. We evaluate our network on the Tsinghua-Tencent 100K benchmark for traffic sign detection and the Bosch Small Traffic Lights benchmark for traffic light detection, and show that it outperforms the existing state-of-the-art method on the Bosch Small Traffic Lights benchmark. We focus on autonomous car deployment and show that our network is more suitable than others because of its low memory footprint and real-time processing speed. Qualitative results can be viewed at https://youtu.be/_YmogPzBXOw
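
As a rough illustration of the idea behind mini-batch proposal selection, the sketch below drops proposals that likely belong to the class that is unlabelled in the image's source dataset, so they are never penalised as background. The interfaces, class indices, and threshold are assumptions, not the paper's implementation.

```python
# Hedged sketch of a mini-batch proposal selection rule in the spirit the
# abstract describes: on an image from the sign-only dataset, proposals that
# look like traffic lights are excluded from the loss (and vice versa).
# The scoring threshold and interfaces are assumptions, not the paper's code.
import numpy as np

def select_proposals(proposals, cls_scores, source, light_idx=1, sign_idx=2, thresh=0.5):
    """
    proposals : (N, 4) candidate boxes
    cls_scores: (N, C) softmax scores from the shared detection head
    source    : 'signs' or 'lights' -- which single-class dataset the image is from
    Returns a boolean mask of proposals that are safe to include in the loss.
    """
    unlabeled_cls = light_idx if source == "signs" else sign_idx
    likely_unlabeled = cls_scores[:, unlabeled_cls] > thresh
    return ~likely_unlabeled

scores = np.array([[0.10, 0.80, 0.10],   # confident traffic light
                   [0.20, 0.10, 0.70],   # confident traffic sign
                   [0.90, 0.05, 0.05]])  # background
keep = select_proposals(np.zeros((3, 4)), scores, source="signs")
print(keep)  # [False  True  True] -- the likely light is excluded on a sign image
```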

* Accepted at the IEEE 15th Conference on Computer and Robot Vision 
  

Parallelized and Randomized Adversarial Imitation Learning for Safety-Critical Self-Driving Vehicles

Dec 26, 2021
Won Joon Yun, MyungJae Shin, Soyi Jung, Sean Kwon, Joongheon Kim

Self-driving cars and autonomous driving research have been receiving considerable attention as major promising prospects for modern artificial intelligence applications. With the evolution of advanced driver assistance systems (ADAS), the design of self-driving vehicles and autonomous driving systems becomes complicated and safety-critical. In general, the intelligent system simultaneously and efficiently activates ADAS functions. It is therefore essential to consider reliable ADAS function coordination to control the driving system safely. To deal with this issue, this paper proposes a randomized adversarial imitation learning (RAIL) algorithm. RAIL is a novel derivative-free imitation learning method for autonomous driving that coordinates various ADAS functions; it imitates the operation of a decision maker that controls autonomous driving with those functions. The proposed method is able to train a decision maker that processes LiDAR data and controls autonomous driving in multi-lane complex highway environments. A simulation-based evaluation verifies that the proposed method achieves the desired performance.
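
Since RAIL is described as derivative-free, the following sketch shows a generic random-search style parameter update driven by an imitation-style return; the toy objective stands in for a discriminator-scored rollout, and all hyperparameters are illustrative assumptions rather than the RAIL algorithm itself.

```python
# Hedged sketch of a derivative-free policy update in the spirit of RAIL:
# parameters are perturbed randomly and updated from returns alone, with the
# "return" standing in for a discriminator that scores expert-like behaviour.
# Objective, step size, and noise scale are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(16)                     # flat policy parameters

def imitation_return(params):
    """Placeholder for rolling out the policy and scoring it with a discriminator."""
    return -np.sum((params - 1.0) ** 2)  # toy objective standing in for the rollout score

step, noise, n_dirs = 0.1, 0.05, 8
for _ in range(200):
    deltas = rng.normal(size=(n_dirs, theta.size))
    gains = np.array([imitation_return(theta + noise * d) -
                      imitation_return(theta - noise * d) for d in deltas])
    theta = theta + step / (n_dirs * noise) * gains @ deltas   # random-search update

print(np.round(theta[:4], 2))            # parameters move toward the toy optimum
```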

* 13 pages, 8 figures 
  

Formal Verification of End-to-End Learning in Cyber-Physical Systems: Progress and Challenges

Jun 15, 2020
Nathan Fulton, Nathan Hunt, Nghia Hoang, Subhro Das

Autonomous systems -- such as self-driving cars, autonomous drones, and automated trains -- must come with strong safety guarantees. Over the past decade, techniques based on formal methods have enjoyed some success in providing strong correctness guarantees for large software systems including operating system kernels, cryptographic protocols, and control software for drones. These successes suggest it might be possible to ensure the safety of autonomous systems by constructing formal, computer-checked correctness proofs. This paper identifies three assumptions underlying existing formal verification techniques, explains how each of these assumptions limits the applicability of verification in autonomous systems, and summarizes preliminary work toward improving the strength of evidence provided by formal verification.

* 7 pages, 4 figures. NeurIPS Workshop on Safety and Robustness in Decision Making, 2019 
  

The Autonomous Racing Software Stack of the KIT19d

Oct 06, 2020
Sherif Nekkah, Josua Janus, Mario Boxheimer, Lars Ohnemus, Stefan Hirsch, Benjamin Schmidt, Yuchen Liu, David Borbély, Florian Keck, Katharina Bachmann, Lukasz Bleszynski

Formula Student Driverless challenges engineering students to develop autonomous single-seater race cars in a quest to produce more graduates who are well prepared to solve the real-world problems associated with autonomous driving. In this paper, we present the software stack of KA-RaceIng's entry to the 2019 competitions. We cover the essential modules of the system, including perception, localization, mapping, motion planning, and control. Furthermore, development methods are outlined and an overview of the system architecture is given. We conclude by presenting selected runtime measurements, data logs, and competition results to provide insight into the performance of the final prototype.
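
As a purely illustrative companion to the module list above, here is a generic sense-plan-act loop of the kind that ties perception, localization, mapping, planning, and control together; the function names and loop rate are assumptions, not details of the KIT19d stack.

```python
# Hedged sketch of a generic autonomy loop connecting the listed modules.
# Interfaces and the fixed 100 Hz rate are illustrative assumptions only.
import time

def autonomy_loop(sensors, perception, slam, planner, controller, actuators, rate_hz=100):
    period = 1.0 / rate_hz
    while True:
        t0 = time.monotonic()
        frame = sensors.read()                      # camera + LiDAR measurements
        landmarks = perception.detect(frame)        # landmark (e.g. cone) detections
        pose, track_map = slam.update(landmarks)    # localization and mapping
        trajectory = planner.plan(pose, track_map)  # local motion plan
        cmd = controller.follow(pose, trajectory)   # steering / throttle command
        actuators.apply(cmd)
        time.sleep(max(0.0, period - (time.monotonic() - t0)))
```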

* 11 pages, 10 figures, 1 table 
  

Just Go with the Flow: Self-Supervised Scene Flow Estimation

Dec 01, 2019
Himangi Mittal, Brian Okorn, David Held

When interacting with highly dynamic environments, scene flow allows autonomous systems to reason about the non-rigid motion of multiple independent objects. This is of particular interest in the field of autonomous driving, in which many cars, people, bicycles, and other objects need to be accurately tracked. Current state-of-the-art methods require annotated scene flow data from autonomous driving scenes to train scene flow networks with supervised learning. As an alternative, we present a method of training scene flow that uses two self-supervised losses, based on nearest neighbors and cycle consistency. These self-supervised losses allow us to train our method on large unlabeled autonomous driving datasets; the resulting method matches current state-of-the-art supervised performance using no real-world annotations and exceeds state-of-the-art performance when combining our self-supervised approach with supervised learning on a smaller labeled dataset.
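
The two self-supervised signals named in the abstract can be sketched compactly. The code below is a simplified, brute-force illustration with numpy; flow_fn is a stand-in for the learned scene-flow network, not the authors' model.

```python
# Hedged sketch of the two self-supervised signals: a nearest-neighbour loss
# pulling flowed points onto the next frame, and a cycle-consistency loss that
# flows them back and compares with the start. Brute-force numpy for clarity.
import numpy as np

def nearest_neighbor_loss(pred_points, target_points):
    """Mean distance from each predicted point to its nearest target point."""
    d = np.linalg.norm(pred_points[:, None, :] - target_points[None, :, :], axis=-1)
    return d.min(axis=1).mean()

def cycle_consistency_loss(points_t, flow_fn):
    """Flow forward then backward; the points should land where they started."""
    forward = points_t + flow_fn(points_t)
    backward = forward + flow_fn(forward, reverse=True)
    return np.linalg.norm(backward - points_t, axis=-1).mean()

# Toy usage with a stand-in "network" that predicts zero flow.
pts_t = np.random.rand(128, 3)
pts_t1 = np.random.rand(128, 3)
zero_flow = lambda p, reverse=False: np.zeros_like(p)
nn = nearest_neighbor_loss(pts_t + zero_flow(pts_t), pts_t1)
cyc = cycle_consistency_loss(pts_t, zero_flow)
print(round(nn, 3), round(cyc, 3))
```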

  

A Fleet of Miniature Cars for Experiments in Cooperative Driving

Feb 16, 2019
Nicholas Hyldmar, Yijun He, Amanda Prorok

We introduce a unique experimental testbed that consists of a fleet of 16 miniature Ackermann-steering vehicles. We are motivated by a lack of available low-cost platforms to support research and education in multi-car navigation and trajectory planning. This article elaborates on the design of our miniature robotic car, the Cambridge Minicar, as well as the fleet's control architecture. Our experimental testbed allows us to implement state-of-the-art driver models as well as autonomous control strategies, and to test their validity in a real, physical multi-lane setup. Through experiments on our miniature highway, we are able to tangibly demonstrate the benefits of cooperative driving on multi-lane road topographies. Our setup paves the way for indoor large-fleet experimental research.
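
As an example of the kind of driver model such a testbed can execute, here is a sketch of the classical Intelligent Driver Model for car following; the parameter values are illustrative guesses scaled to miniature cars, not those used on the Cambridge Minicar.

```python
# Hedged sketch of a classical car-following driver model (the Intelligent
# Driver Model). Parameter values are illustrative defaults for small vehicles,
# not the settings of the Minicar testbed.
import math

def idm_acceleration(v, v_lead, gap,
                     v0=1.0,      # desired speed [m/s], scaled for miniature cars
                     T=1.5,       # desired time headway [s]
                     a_max=0.5,   # maximum acceleration [m/s^2]
                     b=0.8,       # comfortable deceleration [m/s^2]
                     s0=0.1,      # minimum standstill gap [m]
                     delta=4.0):
    """Acceleration command given own speed, leader speed, and bumper-to-bumper gap."""
    dv = v - v_lead
    s_star = s0 + v * T + v * dv / (2.0 * math.sqrt(a_max * b))
    return a_max * (1.0 - (v / v0) ** delta - (max(s_star, 0.0) / max(gap, 1e-3)) ** 2)

# Follower slightly faster than its leader with a 0.5 m gap: it should brake.
print(round(idm_acceleration(v=0.8, v_lead=0.6, gap=0.5), 3))
```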

* Accepted to ICRA 2019 
  

Towards safe, explainable, and regulated autonomous driving

Nov 20, 2021
Shahin Atakishiyev, Mohammad Salameh, Hengshuai Yao, Randy Goebel

There has been growing interest in the development and deployment of autonomous vehicles on modern road networks over the last few years, encouraged by the empirical successes of powerful artificial intelligence (AI) approaches, especially in the applications of deep and reinforcement learning. However, there have been several road accidents with "autonomous" cars that prevent this technology from gaining wider public acceptance. As AI is the main driving force behind the intelligent navigation systems of such vehicles, both stakeholders and transportation jurisdictions require their AI-driven software architecture to be safe, explainable, and compliant with regulations. We present a framework that integrates autonomous control, an explainable AI architecture, and regulatory compliance to address this issue, and we further provide several conceptual models from this perspective to help guide future research directions.

* 6 pages 
  