
"autonomous cars": models, code, and papers

Fast, Accurate Thin-Structure Obstacle Detection for Autonomous Mobile Robots

Aug 14, 2017
Chen Zhou, Jiaolong Yang, Chunshui Zhao, Gang Hua

Safety is paramount for mobile robotic platforms such as self-driving cars and unmanned aerial vehicles. This work is devoted to a task that is indispensable for safety yet was largely overlooked in the past -- detecting obstacles with very thin structures, such as wires, cables, and tree branches. This is a challenging problem, as thin objects can be problematic for active sensors such as lidar and sonar, and even for stereo cameras. In this work, we propose to use video sequences for thin obstacle detection. We represent obstacles with edges in the video frames and reconstruct them in 3D using efficient edge-based visual odometry techniques. We provide both a monocular camera solution and a stereo camera solution. The former incorporates Inertial Measurement Unit (IMU) data to resolve scale ambiguity, while the latter enjoys a novel, purely vision-based solution. Experiments demonstrate that the proposed methods are fast and able to detect thin obstacles robustly and accurately under various conditions.
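
A minimal sketch of the underlying idea -- reconstructing edge pixels in 3D and thresholding their range -- assuming a calibrated stereo pair and the OpenCV reprojection matrix Q. This is not the authors' edge-based visual odometry pipeline, only an illustration of treating edges as candidate thin obstacles:

```python
# Sketch: stereo edge reconstruction for thin-obstacle flagging.
# Inputs are rectified 8-bit grayscale images and the 4x4 matrix Q
# from stereo calibration (e.g., cv2.stereoRectify).
import cv2
import numpy as np

def detect_thin_obstacles(left_gray, right_gray, Q, max_range_m=5.0):
    # Edges stand in for thin structures (wires, branches).
    edges = cv2.Canny(left_gray, 50, 150)

    # Dense disparity; edge pixels are then sampled from it.
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64,
                                    blockSize=5)
    disp = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0

    # Back-project every pixel to 3D, keep only valid edge pixels.
    points3d = cv2.reprojectImageTo3D(disp, Q)
    mask = (edges > 0) & (disp > 0)
    edge_points = points3d[mask]          # (N, 3), metres

    # Flag edge points closer than the safety range as obstacles.
    return edge_points[edge_points[:, 2] < max_range_m]
```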

* Appeared at IEEE CVPR 2017 Workshop on Embedded Vision 
  

An LSTM-Based Autonomous Driving Model Using Waymo Open Dataset

Mar 23, 2020
Zhicheng Gu, Zhihao Li, Xuan Di, Rongye Shi

The Waymo Open Dataset has been released recently, providing a platform to crowdsource some fundamental challenges for automated vehicles (AVs), such as 3D detection and tracking. While the dataset provides a large amount of high-quality and multi-source driving information, people in academia are more interested in the underlying driving policy programmed in Waymo self-driving cars, which is inaccessible due to AV manufacturers' proprietary protection. Accordingly, academic researchers have to make various assumptions to implement AV components in their models or simulations, which may not represent the realistic interactions in real-world traffic. Thus, this paper introduces an approach to learn a long short-term memory (LSTM)-based model for imitating the behavior of Waymo's self-driving model. The proposed model has been evaluated based on Mean Absolute Error (MAE). The experimental results show that our model outperforms several baseline models in driving action prediction. In addition, a visualization tool is presented for verifying the performance of the model.
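
A hedged PyTorch sketch of such an LSTM imitation model: the input features, layer sizes, and two-dimensional action output (e.g., acceleration and heading change) are assumptions, not the paper's exact architecture; the L1 loss matches the MAE evaluation metric:

```python
# Sketch: LSTM behavior cloning on state histories -> next action.
import torch
import torch.nn as nn

class DrivingLSTM(nn.Module):
    def __init__(self, n_features=6, hidden=64, n_actions=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, x):                 # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # predict action at last step

model = DrivingLSTM()
loss_fn = nn.L1Loss()                     # MAE, as used for evaluation
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

states = torch.randn(32, 10, 6)           # dummy 10-step state histories
actions = torch.randn(32, 2)              # dummy expert actions
opt.zero_grad()
loss = loss_fn(model(states), actions)
loss.backward()
opt.step()
```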

* Applied Sciences 10(6) 2046, 2020 
  

Autonomous Road Vehicle Emergency Obstacle Avoidance Maneuver Framework at Highway Speeds

Mar 29, 2022
Evan Lowe, Levent Güvenç

An Autonomous Road Vehicle (ARV) can navigate various types of road networks using inputs such as throttle (acceleration), braking (deceleration), and steering (change of lateral direction). In most ARV driving scenarios that involve normal vehicle traffic and encounters with vulnerable road users (VRUs), ARVs are not required to take evasive action. This paper presents a novel Emergency Obstacle Avoidance Maneuver (EOAM) methodology for ARVs traveling at higher speeds and on lower road surface friction, involving time-critical maneuver determination and control. The proposed EOAM Framework uses the ARV's sensing, perception, control, and actuation capabilities as one cohesive system to avoid an on-road obstacle, based first on performance feasibility and second on passenger comfort, and is designed to integrate well within an ARV high-level system. Co-simulation including the ARV EOAM logic in Simulink and a vehicle model in CarSim is conducted with speeds ranging from 55 to 165 km/h and on road surfaces with friction ranging from 1.0 to 0.1. The results are analyzed and given in the context of an entire ARV system, with implications for future work.
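
A toy sketch of the "feasibility first, comfort second" selection idea: the candidate maneuvers, the friction-limited lateral acceleration bound (mu * g), and the comfort cost are illustrative assumptions, not the paper's EOAM logic:

```python
# Sketch: pick the most comfortable maneuver among the feasible ones.
import math

G = 9.81  # m/s^2

def feasible(speed_mps, mu, lateral_offset_m, gap_to_obstacle_m):
    """Can a lane change of `lateral_offset_m` finish within the gap?"""
    a_lat_max = mu * G                      # friction-limited lateral accel
    # Bang-bang lateral motion: d/2 covered while accelerating, d/2 while
    # decelerating, so minimum time is t = 2 * sqrt(d / a).
    t_lat = 2.0 * math.sqrt(lateral_offset_m / a_lat_max)
    t_avail = gap_to_obstacle_m / max(speed_mps, 1e-6)
    return t_lat <= t_avail

def pick_maneuver(speed_mps, mu, gap_m, candidates):
    # candidates: list of (name, lateral_offset_m, comfort_cost)
    ok = [c for c in candidates if feasible(speed_mps, mu, c[1], gap_m)]
    if not ok:
        return ("brake_only", None)         # fall back to full braking
    return min(ok, key=lambda c: c[2])      # lowest discomfort wins

cands = [("swerve_left", 3.5, 2.0), ("swerve_right", 3.5, 2.5)]
print(pick_maneuver(speed_mps=30.5, mu=0.5, gap_m=80.0, candidates=cands))
```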

* 50 pages, 25 figures, 2 tables 
  

Deep Reinforcement Learning for Human-Like Driving Policies in Collision Avoidance Tasks of Self-Driving Cars

Jun 19, 2020
Ran Emuna, Avinoam Borowsky, Armin Biess

The technological and scientific challenges involved in the development of autonomous vehicles (AVs) are currently of primary interest for many automobile companies and research labs. However, human-controlled vehicles are likely to remain on the roads for several decades to come and may share with AVs the traffic environments of the future. In such mixed environments, AVs should deploy human-like driving policies and negotiation skills to enable smooth traffic flow. To generate automated human-like driving policies, we introduce a model-free, deep reinforcement learning approach to imitate an experienced human driver's behavior. We study a static obstacle avoidance task on a two-lane highway road in simulation (Unity). Our control algorithm receives a stochastic feedback signal from two sources: a model-driven part, encoding simple driving rules, such as lane-keeping and speed control, and a stochastic, data-driven part, incorporating human expert knowledge from driving data. To assess the similarity between machine and human driving, we model distributions of track position and speed as Gaussian processes. We demonstrate that our approach leads to human-like driving policies.
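
An illustrative sketch of the two-part feedback signal described above: a rule-based term (lane keeping, speed tracking) plus a data-driven term scoring closeness to recorded human states. The weights and the Gaussian kernel are assumptions, not the paper's exact reward:

```python
# Sketch: combined model-driven + data-driven feedback signal.
import numpy as np

def rule_reward(lane_offset_m, speed_mps, target_speed_mps=25.0):
    keep_lane = -abs(lane_offset_m)                 # penalize lane deviation
    keep_speed = -abs(speed_mps - target_speed_mps) / target_speed_mps
    return keep_lane + keep_speed

def human_reward(state, human_states, sigma=1.0):
    # Kernel-density-style score: high when the agent's state lies close
    # to states visited by the human expert.
    d2 = np.sum((human_states - state) ** 2, axis=1)
    return float(np.mean(np.exp(-d2 / (2.0 * sigma ** 2))))

def combined_reward(state, human_states, w_rule=0.5, w_human=0.5):
    lane_offset, speed = state[0], state[1]
    return (w_rule * rule_reward(lane_offset, speed)
            + w_human * human_reward(state, human_states))

human = np.array([[0.1, 24.0], [0.0, 25.5], [-0.2, 26.0]])
print(combined_reward(np.array([0.05, 25.0]), human))
```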

  

End-to-end Multi-Modal Multi-Task Vehicle Control for Self-Driving Cars with Visual Perception

Feb 02, 2018
Zhengyuan Yang, Yixuan Zhang, Jerry Yu, Junjie Cai, Jiebo Luo

Convolutional Neural Networks (CNN) have been successfully applied to autonomous driving tasks, many in an end-to-end manner. Previous end-to-end steering control methods take an image or an image sequence as the input and directly predict the steering angle with a CNN. Although single-task learning on steering angles has achieved good performance, the steering angle alone is not sufficient for vehicle control. In this work, we propose a multi-task learning framework to predict the steering angle and speed control simultaneously in an end-to-end manner. Since it is nontrivial to predict accurate speed values with only visual inputs, we first propose a network to predict discrete speed commands and steering angles with image sequences. Moreover, we propose a multi-modal multi-task network to predict speed values and steering angles by taking previous feedback speeds and visual recordings as inputs. Experiments are conducted on the public Udacity dataset and a newly collected SAIC dataset. Results show that the proposed model predicts steering angles and speed values accurately. Furthermore, we improve the failure data synthesis methods to solve the problem of error accumulation in real road tests.
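
A minimal sketch of such a multi-modal multi-task network: a CNN encodes the current frame, the previous feedback speed is concatenated, and two heads regress steering angle and speed. Layer sizes and loss weighting are assumptions, not the paper's architecture:

```python
# Sketch: visual features + previous speed -> (steering, speed) heads.
import torch
import torch.nn as nn

class MultiModalMultiTask(nn.Module):
    def __init__(self):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())   # -> (batch, 32)
        self.steer_head = nn.Linear(32 + 1, 1)       # visual + prev speed
        self.speed_head = nn.Linear(32 + 1, 1)

    def forward(self, image, prev_speed):
        feat = torch.cat([self.cnn(image), prev_speed], dim=1)
        return self.steer_head(feat), self.speed_head(feat)

net = MultiModalMultiTask()
img = torch.randn(4, 3, 120, 160)                    # dummy frames
prev = torch.randn(4, 1)                             # dummy feedback speeds
steer, speed = net(img, prev)
# Multi-task loss: sum of the two regression errors (weights assumed equal).
loss = (nn.functional.mse_loss(steer, torch.zeros_like(steer))
        + nn.functional.mse_loss(speed, torch.zeros_like(speed)))
```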

* 6 pages, 5 figures 
  

Emergent Escape-based Flocking Behavior using Multi-Agent Reinforcement Learning

May 10, 2019
Carsten Hahn, Thomy Phan, Thomas Gabor, Lenz Belzner, Claudia Linnhoff-Popien

In nature, flocking or swarm behavior is observed in many species, as it has beneficial properties such as reducing the probability of being caught by a predator. In this paper, we propose SELFish (Swarm Emergent Learning Fish), an approach with multiple autonomous agents which can move freely in a continuous space with the objective of avoiding being caught by a predator. The predator has the property that it may get distracted by multiple potential prey in its vicinity. We show that this property, in interaction with self-interested agents trained with reinforcement learning solely to survive as long as possible, leads to flocking behavior similar to Boids, a common simulation of flocking behavior. Furthermore, we present interesting insights into the swarming behavior and into the process of agents being caught in our modeled environment.
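
A toy sketch of the environment dynamic described: agents are rewarded only for surviving each step, while the predator chases the nearest prey and may get distracted when several prey are nearby. The distraction rule, radii, and step size are assumptions for illustration:

```python
# Sketch: predator with distraction; per-step survival reward for prey.
import numpy as np

rng = np.random.default_rng(0)

def predator_step(predator, prey, distract_radius=2.0):
    d = np.linalg.norm(prey - predator, axis=1)
    near = np.flatnonzero(d < distract_radius)
    if len(near) > 1:
        target = rng.choice(near)        # distracted: random nearby prey
    else:
        target = int(np.argmin(d))       # otherwise chase the nearest
    direction = prey[target] - predator
    return predator + 0.1 * direction / (np.linalg.norm(direction) + 1e-9)

prey = rng.uniform(0, 10, size=(20, 2))
predator = np.array([5.0, 5.0])
rewards = np.ones(len(prey))             # +1 per step simply for surviving
predator = predator_step(predator, prey)
```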

* Accepted at ALIFE 2019 
  

Benchmarking the Robustness of Semantic Segmentation Models

Aug 14, 2019
Christoph Kamann, Carsten Rother

When designing a semantic segmentation module for a practical application, such as autonomous driving, it is crucial to understand the robustness of the module with respect to a wide range of image corruptions. While there are recent robustness studies for full-image classification, we are the first to present an exhaustive study for semantic segmentation, based on the state-of-the-art model DeepLabv3$+$. To increase the realism of our study, we utilize almost 200,000 images generated from Cityscapes and PASCAL VOC 2012, and we furthermore present a realistic noise model, imitating HDR camera noise. Based on the benchmark study we gain several new insights. Firstly, model robustness increases with model performance, in most cases. Secondly, some architecture properties affect robustness significantly, such as a Dense Prediction Cell which was designed to maximize performance on clean data only. Thirdly, to achieve good generalization with respect to various types of image noise, it is recommended to train DeepLabv3+ with our realistic noise model.
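
A hedged sketch of such a corruption-robustness check, using a signal-dependent (Poisson-Gaussian) noise model as a simplified stand-in for the paper's HDR camera noise; `model` and `miou` are placeholders for your own segmentation pipeline:

```python
# Sketch: measure segmentation-quality drop under camera-like noise.
import numpy as np

def poisson_gaussian_noise(img01, photon_scale=100.0, read_sigma=0.01):
    """img01: float image in [0, 1]. Shot noise + read noise."""
    rng = np.random.default_rng()
    shot = rng.poisson(img01 * photon_scale) / photon_scale
    read = rng.normal(0.0, read_sigma, size=img01.shape)
    return np.clip(shot + read, 0.0, 1.0)

def robustness_gap(model, miou, images, labels):
    # model: callable mapping a list of images to predictions (placeholder)
    # miou: callable scoring predictions against labels (placeholder)
    clean = miou(model(images), labels)
    noisy = miou(model([poisson_gaussian_noise(im) for im in images]), labels)
    return clean - noisy          # larger gap = less robust model
```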

* 24 pages, 22 figures 
  

GenRadar: Self-supervised Probabilistic Camera Synthesis based on Radar Frequencies

Jul 19, 2021
Carsten Ditzel, Klaus Dietmayer

Autonomous systems require a continuous and dependable environment perception for navigation and decision-making, which is best achieved by combining different sensor types. Radar continues to function robustly in compromised circumstances in which cameras become impaired, guaranteeing a steady inflow of information. Yet, camera images provide a more intuitive and readily applicable impression of the world. This work combines the complementary strengths of both sensor types in a unique self-learning fusion approach for a probabilistic scene reconstruction in adverse surrounding conditions. After reducing the memory requirements of both high-dimensional measurements through a decoupled stochastic self-supervised compression technique, the proposed algorithm exploits similarities and establishes correspondences between both domains at different feature levels during training. Then, at inference time, relying exclusively on radio frequencies, the model successively predicts camera constituents in an autoregressive and self-contained process. These discrete tokens are finally transformed back into an instructive view of the respective surroundings, allowing potential dangers to be perceived visually for important downstream tasks.
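
A hedged sketch of the autoregressive step: compressed radar tokens form a prefix, and camera tokens are predicted one at a time, each conditioned on the prefix plus previously generated tokens. The tiny GPT-style model and vocabulary sizes are assumptions, not the paper's architecture:

```python
# Sketch: radar-token prefix -> autoregressive camera-token generation.
import torch
import torch.nn as nn

VOCAB, DIM, N_CAM = 512, 64, 16

class TokenLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, DIM)
        layer = nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True)
        self.enc = nn.TransformerEncoder(layer, num_layers=2)
        self.out = nn.Linear(DIM, VOCAB)

    def forward(self, tokens):
        T = tokens.size(1)
        causal = nn.Transformer.generate_square_subsequent_mask(T)
        h = self.enc(self.emb(tokens), mask=causal)
        return self.out(h)                  # next-token logits per position

model = TokenLM().eval()
radar_tokens = torch.randint(0, VOCAB, (1, 32))   # compressed radar codes
seq = radar_tokens
with torch.no_grad():
    for _ in range(N_CAM):                        # generate camera tokens
        logits = model(seq)[:, -1]
        nxt = logits.argmax(dim=-1, keepdim=True)
        seq = torch.cat([seq, nxt], dim=1)
camera_tokens = seq[:, -N_CAM:]                   # decode back to an image
```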

* concurrently submitted to IEEE Access 
  