
"autonomous cars": models, code, and papers

Monocular Imaging-based Autonomous Tracking for Low-cost Quad-rotor Design - TraQuad

Jan 21, 2018
Lakshmi Shrinivasan, Prasad N R

TraQuad is an autonomous tracking quadcopter capable of tracking any moving (or static) object such as cars, humans, other drones, or any other object on the go. This article describes the applications and advantages of TraQuad and the cost reduction (to about $250) achieved so far using the available hardware and software capabilities and our custom algorithms where needed. This description is backed by strong data and by research analyses drawn from extant information or conducted on our own when necessary. We also describe the development of a completely autonomous (even GPS is optional) low-cost drone which can act as a major platform for further developments in automation, transportation, reconnaissance and more. We describe our ROS Gazebo simulator and our STATUS algorithms, which form the core of our object-tracking drone for generic purposes.

* 19 pages, 23 figures (ignoring nested figure count), double column, pre-print, journal article on robotics and drones. Keywords: drone, image processing, quadcopter, control, machine learning, communication. This is our genuine and humble effort to publish our research on our innovation, for which we have spent one and a half years getting the hardware and the custom software right 
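The core tracking loop described above can be sketched as a simple proportional controller that steers the drone to keep the detected object centred in the camera frame. This is an illustrative reconstruction, not TraQuad's actual code: the frame size, gains, and function names are assumed values.

```python
# Hypothetical sketch of a monocular tracking loop: convert the tracked
# object's pixel offset from the image centre into yaw and climb commands.
# Frame size and proportional gains are illustrative, not TraQuad's values.

FRAME_W, FRAME_H = 640, 480
K_YAW, K_CLIMB = 0.005, 0.004  # proportional gains (assumed)

def track_command(bbox_cx: float, bbox_cy: float) -> tuple[float, float]:
    """Return (yaw_rate, climb_rate) that re-centre the target."""
    yaw_rate = K_YAW * (bbox_cx - FRAME_W / 2)      # turn toward the target
    climb_rate = K_CLIMB * (FRAME_H / 2 - bbox_cy)  # climb if target is high
    return yaw_rate, climb_rate
```

A target detected at the exact image centre produces zero correction; offsets to the right or above the centre yield positive yaw and climb rates respectively.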

nuReality: A VR environment for research of pedestrian and autonomous vehicle interactions

Jan 12, 2022
Paul Schmitt, Nicholas Britten, JiHyun Jeong, Amelia Coffey, Kevin Clark, Shweta Sunil Kothawade, Elena Corina Grigore, Adam Khaw, Christopher Konopka, Linh Pham, Kim Ryan, Christopher Schmitt, Aryaman Pandya, Emilio Frazzoli

We present nuReality, a virtual reality (VR) environment designed to test the efficacy of vehicular behaviors to communicate intent during interactions between autonomous vehicles (AVs) and pedestrians at urban intersections. In this project we focus on expressive behaviors as a means for pedestrians to readily recognize the underlying intent of the AV's movements. VR is an ideal tool for testing these situations, as it is immersive and can place subjects in potentially dangerous scenarios without risk. nuReality provides a novel and immersive virtual reality environment that includes numerous visual details (road and building texturing, parked cars, swaying tree limbs) as well as auditory details (birds chirping, cars honking in the distance, people talking). In these files we present the nuReality environment, its 10 unique vehicle behavior scenarios, and the Unreal Engine and Autodesk Maya source files for each scenario. The files are publicly released as open source to support the academic community studying the critical AV-pedestrian interaction.


Unsupervised Abnormality Detection Using Heterogeneous Autonomous Systems

Jun 05, 2020
Sayeed Shafayet Chowdhury, Kazi Mejbaul Islam, Rouhan Noor

Anomaly detection in a surveillance scenario is an emerging and challenging field of research. For autonomous vehicles like drones or cars, it is immensely important to distinguish between normal and abnormal states in real time to avoid or detect potential threats. But the nature and degree of abnormality may vary depending upon the actual environment and adversary. As a result, it is impractical to model all cases a priori and use supervised methods to classify them. Also, an autonomous vehicle provides various data types, such as images and other analog or digital sensor data. In this paper, a heterogeneous system is proposed which estimates the degree of abnormality of an environment from a drone feed, analyzing real-time image and IMU sensor data in an unsupervised manner. Here, we demonstrate AngleNet, a novel CNN architecture, to estimate the angle between a normal image and another image under consideration, which provides us with a measure of anomaly. Moreover, the IMU data are used in clustering models to predict abnormality. Finally, the results from these two algorithms are ensembled to estimate the final abnormality. The proposed method performs satisfactorily on the IEEE SP Cup-2020 dataset with an accuracy of 99.92%. We have also tested this approach on an in-house dataset to validate its robustness.
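The two-branch ensembling described above can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the normalisation of each branch, the cluster-distance score, and the ensemble weight `w` are all assumed.

```python
# Illustrative sketch of the two-branch abnormality estimate: an
# image-based angle score and an IMU-based cluster distance are each
# normalised to [0, 1] and combined with a weighted average.
import math

def angle_score(angle_deg: float) -> float:
    """Map an estimated deviation angle (as from AngleNet) to [0, 1]."""
    return min(abs(angle_deg) / 180.0, 1.0)

def imu_score(sample, centroid, scale: float = 1.0) -> float:
    """Distance of an IMU sample from its nearest 'normal' cluster centroid."""
    d = math.dist(sample, centroid)
    return 1.0 - math.exp(-d / scale)

def abnormality(angle_deg, imu_sample, centroid, w: float = 0.5) -> float:
    """Weighted ensemble of the two branches; w is an assumed weight."""
    return w * angle_score(angle_deg) + (1 - w) * imu_score(imu_sample, centroid)
```

A perfectly normal frame (zero angle, IMU sample on the centroid) scores 0, and both branches saturate toward 1 as the deviation grows.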


Active Safety System for Semi-Autonomous Teleoperated Vehicles

Jun 28, 2021
Smit Saparia, Andreas Schimpe, Laura Ferranti

Autonomous cars can reduce road traffic accidents and provide a safer mode of transport. However, key technical challenges, such as safe navigation in complex urban environments, need to be addressed before deploying these vehicles on the market. Teleoperation can help smooth the transition from human-operated to fully autonomous vehicles, since it keeps a human in the loop and provides the option of falling back on the driver. This paper presents an Active Safety System (ASS) approach for teleoperated driving. The proposed approach helps the operator ensure the safety of the vehicle in complex environments, that is, avoid collisions with static or dynamic obstacles. Our ASS relies on a model predictive control (MPC) formulation to control both the lateral and longitudinal dynamics of the vehicle. By exploiting the ability of the MPC framework to deal with constraints, our ASS restricts the controller's authority to intervene for lateral correction of the human operator's commands, avoiding a counter-intuitive driving experience for the human operator. Further, we design visual feedback to enhance the operator's trust in the ASS. In addition, we propose a novel predictive display, based on the MPC's prediction-horizon data, to mitigate the effects of large latency in the teleoperation system. We tested the performance of the proposed approach on a high-fidelity vehicle simulator in the presence of dynamic obstacles and latency.

* Accepted at workshop for Road Vehicle Teleoperation (WS09) at the 2021 IEEE Intelligent Vehicles Symposium (IV21) 
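The "restricted authority" idea in the abstract above can be reduced, for illustration, to a one-step projection: the safety layer may correct the operator's steering only within a bounded band around the human command, and otherwise picks the nearest collision-free value. In the paper this is enforced as an MPC constraint over a horizon; this sketch and its parameter names are assumptions, not the authors' formulation.

```python
# Minimal sketch of authority-limited lateral correction: output the
# admissible steering value closest to the operator's command, where
# 'admissible' is the intersection of the collision-free interval
# [safe_lo, safe_hi] and the authority band around the human command.

def assisted_steering(operator_cmd: float,
                      safe_lo: float, safe_hi: float,
                      max_correction: float) -> float:
    lo = max(safe_lo, operator_cmd - max_correction)
    hi = min(safe_hi, operator_cmd + max_correction)
    if lo > hi:  # no admissible command; fall back to the nearest safe bound
        return safe_lo if abs(operator_cmd - safe_lo) < abs(operator_cmd - safe_hi) else safe_hi
    return min(max(operator_cmd, lo), hi)
```

When the operator's command is already safe it passes through unchanged, which is what keeps the assisted driving experience from feeling counter-intuitive.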

Probabilistic Safety-Assured Adaptive Merging Control for Autonomous Vehicles

Apr 29, 2021
Yiwei Lyu, Wenhao Luo, John M. Dolan

Autonomous vehicles face tremendous challenges while interacting with human drivers in different kinds of scenarios. Developing control methods with safety guarantees while performing interactions with uncertainty is an ongoing research goal. In this paper, we present a real-time safe control framework using bi-level optimization with Control Barrier Function (CBF) that enables an autonomous ego vehicle to interact with human-driven cars in ramp merging scenarios with a consistent safety guarantee. In order to explicitly address motion uncertainty, we propose a novel extension of control barrier functions to a probabilistic setting with provable chance-constrained safety and analyze the feasibility of our control design. The formulated bi-level optimization framework entails first choosing the ego vehicle's optimal driving style in terms of safety and primary objective, and then minimally modifying a nominal controller in the context of quadratic programming subject to the probabilistic safety constraints. This allows for adaptation to different driving strategies with a formally provable feasibility guarantee for the ego vehicle's safe controller. Experimental results are provided to demonstrate the effectiveness of our proposed approach.

* Accepted to ICRA2021 
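The probabilistic safety constraint sketched in the abstract above can be illustrated with a Gaussian chance constraint: the usual CBF decrease condition is tightened by a quantile margin on the uncertain term. The symbols (`alpha`, `sigma`, `eps`) and the Gaussian assumption are illustrative, not the paper's exact notation or derivation.

```python
# Sketch of a chance-constrained CBF-style condition: require
# h_dot >= -alpha * h to hold with probability at least 1 - eps,
# assuming h_dot ~ N(h_dot_mean, sigma^2).
from statistics import NormalDist

def chance_safe(h: float, h_dot_mean: float, sigma: float,
                alpha: float = 1.0, eps: float = 0.05) -> bool:
    k = NormalDist().inv_cdf(1 - eps)  # Gaussian quantile, ~1.645 for eps=0.05
    return h_dot_mean - k * sigma >= -alpha * h
```

A controller would impose this inequality as a constraint in the quadratic program; here it is just evaluated pointwise to show how a larger uncertainty `sigma` shrinks the set of admissible controls.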

Symphony: Learning Realistic and Diverse Agents for Autonomous Driving Simulation

May 06, 2022
Maximilian Igl, Daewoo Kim, Alex Kuefler, Paul Mougin, Punit Shah, Kyriacos Shiarlis, Dragomir Anguelov, Mark Palatucci, Brandyn White, Shimon Whiteson

Simulation is a crucial tool for accelerating the development of autonomous vehicles. Making simulation realistic requires models of the human road users who interact with such cars. Such models can be obtained by applying learning from demonstration (LfD) to trajectories observed by cars already on the road. However, existing LfD methods are typically insufficient, yielding policies that frequently collide or drive off the road. To address this problem, we propose Symphony, which greatly improves realism by combining conventional policies with a parallel beam search. The beam search refines these policies on the fly by pruning branches that are unfavourably evaluated by a discriminator. However, it can also harm diversity, i.e., how well the agents cover the entire distribution of realistic behaviour, as pruning can encourage mode collapse. Symphony addresses this issue with a hierarchical approach, factoring agent behaviour into goal generation and goal conditioning. The use of such goals ensures that agent diversity neither disappears during adversarial training nor is pruned away by the beam search. Experiments on both proprietary and open Waymo datasets confirm that Symphony agents learn more realistic and diverse behaviour than several baselines.

* Accepted to ICRA-2022 
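The discriminator-pruned beam search at the core of Symphony can be sketched as a generic beam step; `expand` and `discriminator` here are stand-in callables, not the paper's learned models.

```python
# Toy sketch of discriminator-pruned beam search over agent rollouts:
# extend each kept trajectory with candidate continuations, score the
# results with a realism discriminator, and keep only the top-k branches.

def beam_step(beams, expand, discriminator, k):
    """beams: list of partial trajectories; expand: traj -> candidate steps."""
    candidates = [traj + [step] for traj in beams for step in expand(traj)]
    candidates.sort(key=discriminator, reverse=True)  # most realistic first
    return candidates[:k]

# Tiny usage example with integer 'actions' and a dummy score that
# prefers smaller last actions.
beams = beam_step([[0]], expand=lambda t: [1, 2, 3],
                  discriminator=lambda t: -t[-1], k=2)
```

The mode-collapse risk the abstract mentions is visible even in this toy: pruning keeps only the highest-scoring branches, which is why Symphony conditions the search on sampled goals to preserve diversity.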

How far should self-driving cars see? Effect of observation range on vehicle self-localization

Aug 19, 2019
Mahdi Javanmardi, Ehsan Javanmardi, Shunsuke Kamijo

Accuracy and time efficiency are two essential requirements for the self-localization of autonomous vehicles. While the observation range considered for simultaneous localization and mapping (SLAM) has a significant effect on both accuracy and computation time, its effect is not well investigated in the literature. In this paper, we answer the question: How far should a driverless car observe during self-localization? We introduce a framework to dynamically define the observation range for localization to meet the accuracy requirement for autonomous driving, while keeping the computation time low. To model the effect of scanning range on the localization accuracy for every point on the map, several map factors were employed. The capability of the proposed framework was verified using field data, demonstrating that it is able to reduce the average matching time from 142.2 ms to 39.3 ms while keeping the localization accuracy around 8.1 cm.

* 6 pages, 11 figures, IEEE International Conference on Intelligent Transportation Systems 2019 
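The dynamic-range idea above can be sketched as a simple search: grow the scan radius until a map-derived error model predicts the required localization accuracy, keeping the range (and hence the matching time) as small as possible. The `predicted_error` callable stands in for the paper's map-factor model and is purely hypothetical.

```python
# Sketch of dynamic observation-range selection: use the smallest scan
# radius whose predicted localization error meets the requirement.

def choose_range(predicted_error, required_acc_m: float,
                 r_min: float = 10.0, r_max: float = 100.0,
                 step: float = 10.0) -> float:
    r = r_min
    while r < r_max and predicted_error(r) > required_acc_m:
        r += step
    return r

# Example with a toy error model that shrinks with range; the 0.081 m
# target (the paper's ~8.1 cm accuracy) is met at r = 30 here.
rng = choose_range(lambda r: 2.0 / r, required_acc_m=0.081)
```
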

Incorporating Voice Instructions in Model-Based Reinforcement Learning for Self-Driving Cars

Jun 21, 2022
Mingze Wang, Ziyang Zhang, Grace Hui Yang

This paper presents a novel approach that supports natural language voice instructions to guide deep reinforcement learning (DRL) algorithms when training self-driving cars. DRL methods are popular approaches for autonomous vehicle (AV) agents. However, most existing methods are sample- and time-inefficient and lack a natural communication channel with the human expert. In this paper, motivated by how new human drivers learn from human coaches, we study new ways of human-in-the-loop learning and a more natural and approachable training interface for the agents. We propose incorporating natural language voice instructions (NLI) in model-based deep reinforcement learning to train self-driving cars. We evaluate the proposed method together with a few state-of-the-art DRL methods in the CARLA simulator. The results show that NLI can help ease the training process and significantly boost the agents' learning speed.

* NeurIPS 2021 Workshop on Machine Learning for Autonomous Driving 
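One simple way to fold voice instructions into RL training, shown here purely for illustration, is reward shaping: each recognised instruction maps to a shaping term added to the environment reward. The phrases, state keys, and weights below are invented, and the paper's actual mechanism (model-based DRL with NLI) may differ substantially.

```python
# Hypothetical sketch: map recognised voice instructions to reward-shaping
# terms. Unrecognised instructions contribute nothing.

SHAPING = {
    "slow down":  lambda s: -0.1 * max(s["speed"] - 5.0, 0.0),
    "keep right": lambda s: -0.2 * abs(s["lane_offset"] - 1.0),
}

def shaped_reward(env_reward: float, instruction: str, state: dict) -> float:
    bonus = SHAPING.get(instruction, lambda s: 0.0)(state)
    return env_reward + bonus
```
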

An Empirical Testing of Autonomous Vehicle Simulator System for Urban Driving

Sep 10, 2021
John Seymour, Dac-Thanh-Chuong Ho, Quang-Hung Luu

Safety is one of the main challenges holding back autonomous vehicles (AVs), which must be thoroughly tested before being allowed on the road. In comparison with road tests, simulators allow us to validate AVs conveniently and affordably. However, it remains unclear how best to use an AV simulator system for effective testing. Our paper presents an empirical testing of an AV simulator system that combines the SVL simulator and the Apollo platform. We propose 576 test cases inspired by four naturalistic driving situations with pedestrians and surrounding cars. We found that SVL can imitate realistic safe and collision situations and that, at the same time, Apollo can drive the car quite safely. On the other hand, we noted that the system failed to detect pedestrians or vehicles on the road in three out of four classes, accounting for 10.0% of the scenarios tested. We further applied metamorphic testing to identify inconsistencies in the system with an additional 486 test cases. We then discussed some insights into the scenarios that may cause hazardous situations in real life. In summary, this paper provides new empirical evidence to strengthen the assertion that a simulator-based system can be an indispensable tool for comprehensive testing of AVs.

* 8 pages, 8 figures, 4 tables
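The metamorphic testing mentioned above checks relations between test runs rather than absolute ground truth. A classic relation, sketched here as an assumed example rather than one of the paper's 486 cases, is left-right symmetry: mirroring a scenario should mirror the planned trajectory, so a large mismatch flags an inconsistency. `plan` stands in for the simulator-plus-planner under test.

```python
# Sketch of a mirror-symmetry metamorphic relation for AV simulation
# testing: plan(mirror(scenario)) should match mirror(plan(scenario)).

def mirror_scenario(scenario):
    return [(-x, y) for (x, y) in scenario]

def mirror_trajectory(traj):
    return [(-x, y) for (x, y) in traj]

def violates_relation(plan, scenario, tol: float = 0.5) -> bool:
    base = plan(scenario)
    mirrored = plan(mirror_scenario(scenario))
    return any(abs(a[0] - b[0]) > tol or abs(a[1] - b[1]) > tol
               for a, b in zip(mirror_trajectory(base), mirrored))
```

A planner that treats left and right symmetrically passes; one with a systematic lateral bias is caught without needing any labelled expected trajectory.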