"autonomous cars": models, code, and papers

Pseudo-LiDAR++: Accurate Depth for 3D Object Detection in Autonomous Driving

Jun 14, 2019
Yurong You, Yan Wang, Wei-Lun Chao, Divyansh Garg, Geoff Pleiss, Bharath Hariharan, Mark Campbell, Kilian Q. Weinberger

Detecting objects such as cars and pedestrians in 3D plays an indispensable role in autonomous driving. Existing approaches largely rely on expensive LiDAR sensors for accurate depth information. While pseudo-LiDAR, based solely on stereo images, has recently been introduced as a promising and much lower-cost alternative, there is still a notable performance gap. In this paper we provide substantial advances to the pseudo-LiDAR framework through improvements in stereo depth estimation. Concretely, we adapt the stereo network architecture and loss function to be more aligned with accurate depth estimation of far-away objects (currently the primary weakness of pseudo-LiDAR). Further, we explore the idea of leveraging cheaper but extremely sparse LiDAR sensors, which alone provide insufficient information for 3D detection, to de-bias our depth estimation. We propose a depth-propagation algorithm, guided by the initial depth estimates, to diffuse these few exact measurements across the entire depth map. We show on the KITTI object detection benchmark that our combined approach yields substantial improvements in depth estimation and stereo-based 3D object detection --- outperforming the previous state-of-the-art detection accuracy for far-away objects by 40%. Our code will be publicly available at https://github.com/mileyan/Pseudo_Lidar_V2.
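
For readers who want a concrete picture of the pseudo-LiDAR idea, the sketch below back-projects a predicted depth map into a 3D point cloud using pinhole camera intrinsics. It is a minimal illustration only: the function name, the toy depth map, and the intrinsics are hypothetical, and the paper's depth correction with sparse LiDAR is not reproduced.

```python
# Minimal sketch: convert a predicted depth map into a pseudo-LiDAR point cloud.
import numpy as np

def depth_to_pseudo_lidar(depth, fu, fv, cu, cv):
    """Back-project an HxW depth map (meters) into an Nx3 point cloud in camera coordinates."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))  # pixel grid
    z = depth
    x = (us - cu) * z / fu   # right
    y = (vs - cv) * z / fv   # down
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # keep points with valid positive depth

# Toy 2x2 depth map and made-up intrinsics, just to show the call
cloud = depth_to_pseudo_lidar(np.full((2, 2), 10.0), fu=721.5, fv=721.5, cu=1.0, cv=1.0)
print(cloud.shape)  # (4, 3)
```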

  
Access Paper or Ask Questions

Towards Autonomous Reinforcement Learning: Automatic Setting of Hyper-parameters using Bayesian Optimization

May 12, 2018
Juan Cruz Barsce, Jorge A. Palombarini, Ernesto C. Martínez

With the increase of machine learning usage by industries and scientific communities in a variety of tasks such as text mining, image recognition and self-driving cars, automatic setting of hyper-parameters in learning algorithms is a key factor for achieving satisfactory performance regardless of user expertise in the inner workings of the techniques and methodologies. In particular, for a reinforcement learning algorithm, the efficiency of an agent learning a control policy in an uncertain environment is heavily dependent on the hyper-parameters used to balance exploration with exploitation. In this work, an autonomous learning framework that integrates Bayesian optimization with Gaussian process regression to optimize the hyper-parameters of a reinforcement learning algorithm is proposed. Also, a bandits-based approach to achieve a balance between computational costs and decreasing uncertainty about the Q-values is presented. A gridworld example is used to highlight how hyper-parameter configurations of a learning algorithm (SARSA) are iteratively improved based on two performance functions.

* Paper submitted to CLEI Electronic Journal. This is an extended version of the conference paper presented at Latin American Computer Conference (CLEI), 2017 
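
As a rough illustration of the approach (not the authors' code), the sketch below runs Gaussian-process Bayesian optimization with an expected-improvement acquisition over two SARSA hyper-parameters. `run_sarsa` is a synthetic placeholder for the gridworld evaluation, and the search ranges are assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def run_sarsa(alpha, epsilon):
    # placeholder objective: pretend average return peaks near alpha=0.3, epsilon=0.1
    return -((alpha - 0.3) ** 2 + (epsilon - 0.1) ** 2) + np.random.normal(0, 0.01)

rng = np.random.default_rng(0)
X = rng.uniform([0.01, 0.01], [1.0, 0.5], size=(5, 2))   # initial random configurations
y = np.array([run_sarsa(a, e) for a, e in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(15):
    gp.fit(X, y)
    cand = rng.uniform([0.01, 0.01], [1.0, 0.5], size=(500, 2))
    mu, sigma = gp.predict(cand, return_std=True)
    imp = mu - y.max()
    z = imp / (sigma + 1e-9)
    ei = imp * norm.cdf(z) + sigma * norm.pdf(z)          # expected improvement
    x_next = cand[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, run_sarsa(*x_next))

print("best hyper-parameters found:", X[np.argmax(y)])
```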
  
Access Paper or Ask Questions

A Novel Traffic Simulation Framework for Testing Autonomous Vehicles Using SUMO and CARLA

Oct 14, 2021
Pei Li, Arpan Kusari, David J. LeBlanc

Traffic simulation is an efficient and cost-effective way to test Autonomous Vehicles (AVs) in a complex and dynamic environment. Numerous studies have been conducted for AV evaluation using traffic simulation over the past decades. However, current simulation environments fall behind on two fronts -- the background vehicles (BVs) fail to simulate naturalistic driving behavior, and the existing environments do not test the entire pipeline in a modular fashion. This study proposes a simulation framework that creates a complex and naturalistic traffic environment. Specifically, we combine a modified version of the Simulation of Urban MObility (SUMO) simulator with the Cars Learning to Act (CARLA) simulator to generate a simulation environment that can emulate the complexities of the external environment while providing realistic sensor outputs to the AV pipeline. In past research, we created an open-source Python package called SUMO-Gym, which generates a realistic road network and naturalistic traffic through SUMO and combines that with OpenAI Gym to provide ease of use for the end user. We propose to extend our developed software by adding CARLA, which in turn will enrich the perception of the ego vehicle by providing realistic sensor outputs of the AV's surrounding environment. Using the proposed framework, an AV's perception, planning, and control can be tested in a complex and realistic driving environment. The performance of the proposed framework in output generation and AV evaluation is demonstrated using several case studies.
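
The co-simulation loop below is a hedged outline of what such a framework implies: SUMO (via TraCI) advances the background traffic while CARLA, in synchronous mode, renders sensor data for the ego vehicle. The config path, port, and the actor-mirroring step are assumptions; the released CARLA-SUMO bridge handles synchronization far more carefully than this outline.

```python
import carla
import traci

traci.start(["sumo", "-c", "scenario.sumocfg"])      # hypothetical SUMO configuration
client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()

settings = world.get_settings()
settings.synchronous_mode = True                     # step CARLA in lockstep with SUMO
settings.fixed_delta_seconds = 0.05
world.apply_settings(settings)

for _ in range(1000):
    traci.simulationStep()                           # advance naturalistic background traffic
    for veh_id in traci.vehicle.getIDList():
        x, y = traci.vehicle.getPosition(veh_id)     # positions to mirror into CARLA
        # ...spawn/teleport the matching CARLA actor here (omitted)...
    world.tick()                                     # render sensors for the ego vehicle

traci.close()
```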

  
Access Paper or Ask Questions

Neural Network Guided Evolutionary Fuzzing for Finding Traffic Violations of Autonomous Vehicles

Sep 13, 2021
Ziyuan Zhong, Gail Kaiser, Baishakhi Ray

Self-driving cars and trucks, i.e., autonomous vehicles (AVs), should not be accepted by regulatory bodies and the public until we have much higher confidence in their safety and reliability -- which can most practically and convincingly be achieved by testing. But existing testing methods are inadequate for checking the end-to-end behaviors of AV controllers against complex, real-world corner cases involving interactions with multiple independent agents such as pedestrians and human-driven vehicles. While test-driving AVs on streets and highways fails to capture many rare events, existing simulation-based testing methods mainly focus on simple scenarios and do not scale well to complex driving situations that require sophisticated awareness of the surroundings. To address these limitations, we propose a new fuzz testing technique, called AutoFuzz, which can leverage widely used AV simulators' API grammars to generate semantically and temporally valid complex driving scenarios (sequences of scenes). AutoFuzz is guided by a constrained Neural Network (NN) evolutionary search over the API grammar to generate scenarios seeking to find unique traffic violations. Evaluation of our prototype on one state-of-the-art learning-based controller and two rule-based controllers shows that AutoFuzz efficiently finds hundreds of realistic traffic violations resembling real-world crashes. Further, fine-tuning the learning-based controller with the traffic violations found by AutoFuzz successfully reduced the traffic violations found in the new version of the AV controller software.
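
The snippet below sketches the general shape of an NN-guided evolutionary fuzzing loop, not AutoFuzz itself: candidate scenarios are parameter vectors, a small learned model pre-filters offspring, and only promising candidates are simulated. `simulate_scenario` and the 6-parameter scenario encoding are hypothetical placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def simulate_scenario(params):
    # placeholder: a "distance to violation" score (lower means closer to a traffic violation)
    return float(np.abs(params).sum() + np.random.normal(0, 0.1))

rng = np.random.default_rng(1)
pop = rng.uniform(-1, 1, size=(20, 6))                 # 6 grammar-derived scenario parameters
fitness = np.array([simulate_scenario(p) for p in pop])
surrogate = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500)

for gen in range(10):
    surrogate.fit(pop, fitness)                        # learn from scenarios simulated so far
    parents = pop[np.argsort(fitness)[:5]]             # keep scenarios closest to a violation
    children = parents[rng.integers(0, 5, 40)] + rng.normal(0, 0.2, size=(40, 6))
    ranked = children[np.argsort(surrogate.predict(children))[:20]]  # NN pre-filters offspring
    pop = ranked
    fitness = np.array([simulate_scenario(c) for c in ranked])

print("closest-to-violation scenario:", pop[np.argmin(fitness)])
```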

  
Access Paper or Ask Questions

On the interaction between Autonomous Mobility-on-Demand systems and the power network: models and coordination algorithms

Jun 27, 2018
Federico Rossi, Ramon Iglesias, Mahnoosh Alizadeh, Marco Pavone

We study the interaction between a fleet of electric, self-driving vehicles servicing on-demand transportation requests (referred to as Autonomous Mobility-on-Demand, or AMoD, system) and the electric power network. We propose a model that captures the coupling between the two systems stemming from the vehicles' charging requirements and captures time-varying customer demand and power generation costs, road congestion, battery depreciation, and power transmission and distribution constraints. We then leverage the model to jointly optimize the operation of both systems. We devise an algorithmic procedure to losslessly reduce the problem size by bundling customer requests, allowing it to be efficiently solved by off-the-shelf linear programming solvers. Next, we show that the socially optimal solution to the joint problem can be enforced as a general equilibrium, and we provide a dual decomposition algorithm that allows self-interested agents to compute the market clearing prices without sharing private information. We assess the performance of the model by studying a hypothetical AMoD system in Dallas-Fort Worth and its impact on the Texas power network. Lack of coordination between the AMoD system and the power network can cause a 4.4% increase in the price of electricity in Dallas-Fort Worth; conversely, coordination between the AMoD system and the power network could reduce electricity expenditure compared to the case where no cars are present (despite the increased demand for electricity) and yield savings of up to $147M/year. Finally, we provide a receding-horizon implementation and assess its performance with agent-based simulations. Collectively, the results of this paper provide a first-of-a-kind characterization of the interaction between electric-powered AMoD systems and the power network, and shed additional light on the economic and societal value of AMoD.

* Extended version of the paper presented at Robotics: Science and Systems XIV, in prep. for journal submission. In V3, we add a proof that the socially-optimal solution can be enforced as a general equilibrium, a privacy-preserving distributed optimization algorithm, a description of the receding-horizon implementation and additional numerical results, and proofs of all theorems 
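
As a much-simplified illustration of the joint optimization (nothing like the paper's full model), the sketch below schedules a fleet's charging energy across hours with an off-the-shelf LP solver, minimizing electricity cost subject to a total-energy requirement and per-hour grid headroom. All prices, limits, and the fleet demand are invented.

```python
import numpy as np
from scipy.optimize import linprog

prices = np.array([30.0, 25.0, 45.0, 60.0])   # $/MWh in four hours
headroom = np.array([5.0, 5.0, 2.0, 2.0])     # MWh of spare grid capacity per hour
fleet_energy = 8.0                            # MWh the AMoD fleet must charge in total

# Variables x_t >= 0: MWh charged in hour t. Equality constraint: sum(x) == fleet_energy.
res = linprog(
    c=prices,
    A_eq=np.ones((1, 4)), b_eq=[fleet_energy],
    bounds=[(0, h) for h in headroom],
    method="highs",
)
print("charging schedule (MWh/hour):", res.x)   # fills the cheapest hours first
```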
  
Access Paper or Ask Questions

Large-scale 3D point cloud representations via graph inception networks with applications to autonomous driving

Jun 26, 2019
Siheng Chen, Sufeng Niu, Tian Lan, Baoan Liu

We present a novel graph-neural-network-based system to effectively represent large-scale 3D point clouds, with applications to autonomous driving. Many previous works studied representations of 3D point clouds based on one of two approaches: voxelization, which causes discretization errors, or learning, which struggles to capture the huge variations in large-scale scenarios. In this work, we combine voxelization and learning: we discretize the 3D space into voxels and propose novel graph inception networks to represent the 3D points in each voxel. This combination allows the system to avoid discretization errors and work in large-scale scenarios. The entire system for large-scale 3D point clouds acts like the blocked discrete cosine transform for 2D images; we thus call it the point cloud neural transform (PCT). We further apply the proposed PCT to represent real-time LiDAR sweeps produced by self-driving cars, and the PCT with graph inception networks significantly outperforms its competitors.
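
The voxelization front end that the paper combines with learning can be sketched as below; the graph inception network applied inside each voxel is not reproduced, and the voxel size and the toy point cloud are arbitrary placeholders.

```python
import numpy as np
from collections import defaultdict

def voxelize(points, voxel_size=1.0):
    """Group an Nx3 point cloud into a dict {voxel_index: center-relative local points}."""
    indices = np.floor(points / voxel_size).astype(np.int64)
    voxels = defaultdict(list)
    for idx, p in zip(map(tuple, indices), points):
        voxels[idx].append(p - (np.array(idx) + 0.5) * voxel_size)  # coords relative to voxel center
    return {k: np.stack(v) for k, v in voxels.items()}

cloud = np.random.default_rng(0).uniform(-10, 10, size=(1000, 3))   # toy stand-in for a LiDAR sweep
voxels = voxelize(cloud, voxel_size=2.0)
print(len(voxels), "non-empty voxels")   # each would feed a per-voxel network in the paper's system
```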

  
Access Paper or Ask Questions

MIDAS: Multi-agent Interaction-aware Decision-making with Adaptive Strategies for Urban Autonomous Navigation

Aug 17, 2020
Xiaoyi Chen, Pratik Chaudhari

Autonomous navigation in crowded, complex urban environments requires interacting with other agents on the road. A common solution to this problem is to use a prediction model to guess the likely future actions of other agents. While this is reasonable, it leads to overly conservative plans because it does not explicitly model the mutual influence of the actions of interacting agents. This paper builds a reinforcement learning-based method named MIDAS in which an ego-agent learns to affect the control actions of other cars in urban driving scenarios. MIDAS uses an attention mechanism to handle an arbitrary number of other agents and includes a "driver-type" parameter to learn a single policy that works across different planning objectives. We build a simulation environment that enables diverse interaction experiments with a large number of agents, along with methods for quantitatively studying the safety, efficiency, and interaction among vehicles. MIDAS is validated using extensive experiments, and we show that it (i) can work across different road geometries, (ii) results in an adaptive ego policy that can be tuned easily to satisfy performance criteria such as aggressive or cautious driving, (iii) is robust to changes in the driving policies of external agents, and (iv) is more efficient and safer than existing approaches to interaction-aware decision-making.

* Code available at https://github.com/sherrychen1120/MIDAS 
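
Independently of the released code, the sketch below shows one plausible shape of an ego-centric attention block with a "driver-type" conditioning input: the ego state attends over a variable number of other-agent states. All dimensions, the wiring, and the 3-action head are assumptions, not the MIDAS architecture itself.

```python
import torch
import torch.nn as nn

class EgoAttention(nn.Module):
    def __init__(self, state_dim=8, hidden=64):
        super().__init__()
        self.q = nn.Linear(state_dim, hidden)
        self.k = nn.Linear(state_dim, hidden)
        self.v = nn.Linear(state_dim, hidden)
        self.policy = nn.Sequential(nn.Linear(hidden + state_dim + 1, hidden),
                                    nn.ReLU(), nn.Linear(hidden, 3))  # e.g. 3 discrete actions

    def forward(self, ego, others, driver_type):
        # ego: (state_dim,), others: (num_agents, state_dim), driver_type: scalar tensor
        q, k, v = self.q(ego), self.k(others), self.v(others)
        attn = torch.softmax(q @ k.T / k.shape[-1] ** 0.5, dim=-1)   # weights over other agents
        context = attn @ v                                           # summary of surrounding agents
        return self.policy(torch.cat([context, ego, driver_type.view(1)]))

net = EgoAttention()
logits = net(torch.randn(8), torch.randn(5, 8), torch.tensor(0.7))  # works for any agent count
print(logits.shape)  # torch.Size([3])
```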
  
Access Paper or Ask Questions

On the Interaction between Autonomous Mobility on Demand Systems and Power Distribution Networks -- An Optimal Power Flow Approach

May 01, 2019
Alvaro Estandia, Maximilian Schiffer, Federico Rossi, Emre Can Kara, Ram Rajagopal, Marco Pavone

In future transportation systems, the charging behavior of electric Autonomous Mobility on Demand (AMoD) fleets, i.e., fleets of self-driving cars that service on-demand trip requests, will likely challenge power distribution networks (PDNs), causing overloads or voltage drops. In this paper, we show that these challenges can be significantly attenuated if the PDNs' operational constraints and exogenous loads (e.g., from homes or businesses) are considered when operating the electric AMoD fleet. We focus on a system-level perspective, assuming full cooperation between the AMoD and the PDN operators. Through this single entity perspective, we derive an upper bound on the benefits of coordination. We present an optimization-based modeling approach to jointly control an electric AMoD fleet and a series of PDNs, and analyze the benefit of coordination under load balancing constraints. For a case study in Orange County, CA, we show that coordinating the electric AMoD fleet and the PDNs helps to reduce 99% of overloads and 50% of voltage drops which the electric AMoD fleet causes without coordination. Our results show that coordinating electric AMoD and PDNs helps to level loads and can significantly postpone the point at which upgrading the network's capacity to a larger scale becomes inevitable to preserve stability.
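
A toy numerical illustration of the coordination effect described above (with invented feeder capacities and loads): the same fleet charging demand either lands on one feeder, overloading it, or is spread according to each feeder's remaining headroom.

```python
import numpy as np

capacity = np.array([10.0, 10.0, 10.0])      # feeder limits (MW), invented
exogenous = np.array([8.0, 6.0, 3.0])        # homes/businesses load (MW), invented
fleet_load = 7.0                             # AMoD charging demand to place (MW)

# Uncoordinated: all vehicles charge at the most convenient (first) feeder.
uncoord = exogenous + np.array([fleet_load, 0.0, 0.0])

# Coordinated: fill each feeder only up to its remaining headroom.
headroom = capacity - exogenous
alloc = np.zeros_like(headroom)
remaining = fleet_load
for i in range(len(headroom)):
    alloc[i] = min(headroom[i], remaining)
    remaining -= alloc[i]
coord = exogenous + alloc

print("overloaded feeders without coordination:", int(np.sum(uncoord > capacity)))
print("overloaded feeders with coordination:   ", int(np.sum(coord > capacity)))
```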

  
Access Paper or Ask Questions

Simulating LIDAR Point Cloud for Autonomous Driving using Real-world Scenes and Traffic Flows

Nov 17, 2018
Jin Fang, Feilong Yan, Tongtong Zhao, Feihu Zhang, Dingfu Zhou, Ruigang Yang, Yu Ma, Liang Wang

We present a LIDAR simulation framework that can automatically generate 3D point clouds based on LIDAR type and placement. The point clouds, annotated with ground-truth semantic labels, are to be used as training data to improve environmental perception capabilities for autonomous driving vehicles. Different from previous simulators, we generate the point cloud based on a real environment and real traffic flow. More specifically, we employ a mobile LIDAR scanner with cameras to capture real-world scenes. The input to our simulation framework includes dense 3D point clouds and registered color images. Moving objects (such as cars, pedestrians, and bicyclists) are automatically identified and recorded. These objects are then removed from the input point cloud to restore a static background (i.e., the environment without movable objects). With that, we can insert synthetic models of various obstacles, such as vehicles and pedestrians, into the static background to create various traffic scenes. A novel LIDAR renderer takes the composite scene to generate new realistic LIDAR points that are already annotated at the point level for synthetic objects. Experimental results show that our system is able to close the performance gap between simulation and real data to 1~6% in different applications, and that for model fine-tuning, only 10%~20% extra real data is needed to outperform the original model trained with the full real dataset.

* 7 pages 
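
The scene-composition step described above can be sketched roughly as follows: semantically labeled moving objects are filtered out to recover the static background, and synthetic object points are inserted at chosen poses. The label ids, poses, and random point clouds are placeholders, and the LIDAR re-rendering step is omitted.

```python
import numpy as np

MOVABLE = {1, 2, 3}   # hypothetical label ids for cars, pedestrians, bicyclists

def compose_scene(points, labels, synthetic_objects):
    """points: Nx3, labels: N, synthetic_objects: list of (Mx3 points, 4x4 pose)."""
    background = points[~np.isin(labels, list(MOVABLE))]     # restore the static background
    inserted = []
    for obj_pts, pose in synthetic_objects:
        homo = np.hstack([obj_pts, np.ones((len(obj_pts), 1))])
        inserted.append((homo @ pose.T)[:, :3])              # place the object in the world frame
    return np.vstack([background] + inserted)

scan = np.random.default_rng(0).uniform(-20, 20, size=(500, 3))
labels = np.random.default_rng(1).integers(0, 5, size=500)
car = np.random.default_rng(2).uniform(-1, 1, size=(50, 3))
scene = compose_scene(scan, labels, [(car, np.eye(4))])
print(scene.shape)
```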
  
Access Paper or Ask Questions

Autonomous drone cinematographer: Using artistic principles to create smooth, safe, occlusion-free trajectories for aerial filming

Aug 28, 2018
Rogerio Bonatti, Yanfu Zhang, Sanjiban Choudhury, Wenshan Wang, Sebastian Scherer

Autonomous aerial cinematography has the potential to enable automatic capture of aesthetically pleasing videos without requiring human intervention, empowering individuals with the capabilities of high-end film studios. Current approaches either only handle off-line trajectory generation or offer strategies that reason over short time horizons and simplistic representations of obstacles, which results in jerky movement and low real-life applicability. In this work we develop a method for aerial filming that is able to trade off shot smoothness, occlusion, and cinematography guidelines in a principled manner, even under noisy actor predictions. We present a novel algorithm for real-time covariant gradient descent that we use to efficiently find the desired trajectories by optimizing a set of cost functions. Experimental results show that our approach creates attractive shots, avoiding obstacles and occlusion 65 times over 1.25 hours of flight time, re-planning at 5 Hz with a 10 s time horizon. We robustly film human actors, cars and bicycles performing different motions among obstacles, using various shot types.
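
For intuition about covariant gradient descent on a discretized trajectory (a simplification of the paper's optimizer, with toy smoothness and obstacle costs standing in for the shot and occlusion objectives), see the sketch below: the cost gradient is preconditioned by the inverse of a finite-difference smoothness metric so that updates remain smooth.

```python
import numpy as np

N = 50
start, goal = np.array([0.0, 0.0]), np.array([10.0, 0.0])
traj = np.linspace(start, goal, N)                       # N x 2 waypoints, endpoints held fixed
obstacle, radius = np.array([5.0, 0.5]), 2.0             # toy obstacle

# Smoothness metric over the N-2 interior waypoints (discrete Laplacian).
M = N - 2
A = 2 * np.eye(M) - np.eye(M, k=1) - np.eye(M, k=-1)

for _ in range(500):
    x = traj[1:-1]
    smooth_grad = 2 * x - traj[:-2] - traj[2:]           # gradient of 0.5 * sum ||x_{i+1} - x_i||^2
    diff = x - obstacle
    dist = np.linalg.norm(diff, axis=1, keepdims=True)
    obs_grad = np.where(dist < radius, -(radius - dist) * diff / (dist + 1e-9), 0.0)
    grad = smooth_grad + obs_grad
    traj[1:-1] -= 0.005 * np.linalg.solve(A, grad)       # covariant (metric-preconditioned) step

print("closest approach to the obstacle:", np.linalg.norm(traj - obstacle, axis=1).min())
```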

  
Access Paper or Ask Questions