
"autonomous cars": models, code, and papers

Personal space of autonomous car's passengers sitting in the driver's seat

May 09, 2018
Eleonore Ferrier-Barbut, Dominique Vaufreydaz, Jean-Alix David, Jérôme Lussereau, Anne Spalanzani

This article deals with the specific context of an autonomous car navigating in an urban center within a shared space between pedestrians and cars. The driver delegates control to the autonomous system while remaining seated in the driver's seat. The proposed study aims at giving a first insight into the definition of human perception of space applied to vehicles by testing the existence of a personal space around the car. It aims at measuring proxemic information about the driver's comfort zone in such conditions. Proxemics, or human perception of space, has been largely explored when applied to humans or to robots, leading to the concept of personal space, but poorly when applied to vehicles. In this article, we highlight the existence and the characteristics of a zone of comfort around the car which is not correlated to the risk of a collision between the car and other road users. Our experiment includes 19 volunteers using a virtual reality headset to look at 30 scenarios filmed in 360° from the point of view of a passenger sitting in the driver's seat of an autonomous car. They were asked to say "stop" when they felt discomfort while watching the scenarios. As noted, the scenarios deliberately avoid any collision effect, as we want to measure discomfort rather than fear. The scenarios involve one or three pedestrians walking past the car at different distances from the wings of the car, relative to the direction of motion of the car, on both sides. The car is either static or moving straight forward at different speeds. The results indicate the existence of a comfort zone around the car in which intrusion causes discomfort. The size of the comfort zone is sensitive neither to the side of the car where the pedestrian passes nor to the number of pedestrians. In contrast, the feeling of discomfort is relative to the car's motion (static or moving). Another outcome of this study is an illustration of the use of first-person 360° video and a virtual reality headset to evaluate the feelings of a passenger in an autonomous car.
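
As a rough illustration of the kind of proxemic model such measurements can feed, the sketch below checks whether a pedestrian's position intrudes on an assumed elliptical comfort zone around the car. The elliptical shape and the semi-axis values are illustrative assumptions, not results reported in the paper.

    # Minimal sketch: elliptical comfort zone in the car's frame
    # (x forward along the direction of motion, y lateral, in metres).
    # The 2.0 m / 1.0 m semi-axes are assumed for illustration only.
    def intrudes_comfort_zone(ped_x, ped_y, a_front=2.0, b_side=1.0):
        return (ped_x / a_front) ** 2 + (ped_y / b_side) ** 2 < 1.0

    # A pedestrian passing 0.5 m from the car's wing would register as an
    # intrusion under these assumed parameters; one at 1.5 m would not.
    print(intrudes_comfort_zone(0.0, 0.5))   # True
    print(intrudes_comfort_zone(0.0, 1.5))   # False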

* The 2018 IEEE Intelligent Vehicles Symposium (IV'18), Jun 2018, Changshu, Suzhou, China 
  

Adversarial Deep Reinforcement Learning for Trustworthy Autonomous Driving Policies

Dec 22, 2021
Aizaz Sharif, Dusica Marijan

Deep reinforcement learning is widely used to train autonomous cars in simulated environments. Still, autonomous cars are well known to be vulnerable to adversarial attacks. This raises the question of whether we can train the adversary as a driving agent to find failure scenarios in autonomous cars, and then retrain autonomous cars with the new adversarial inputs to improve their robustness. In this work, we first train and compare adversarial car policies on two custom reward functions to test the driving control decisions of autonomous cars in a multi-agent setting. Second, we verify that adversarial examples can be used not only for finding unwanted autonomous driving behavior, but also for helping autonomous cars improve their deep reinforcement learning policies. Using a high-fidelity urban driving simulation environment and vision-based driving agents, we demonstrate that autonomous cars retrained using the adversary player noticeably improve their driving policies in terms of reduced collision and off-road steering errors.
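
To make the setup concrete, here is a minimal sketch of what a custom adversarial reward might look like: the adversary is rewarded for provoking failures in the autonomous car and discouraged from simply crashing itself. The observation fields and weights are hypothetical; the paper's two reward functions are not reproduced here.

    # Hypothetical adversarial reward for the attacking driving agent.
    # Keys and weights are illustrative assumptions, not the paper's design.
    def adversarial_reward(obs):
        r = 0.0
        if obs["ego_collision"]:      # the autonomous car collided
            r += 1.0
        if obs["ego_offroad"]:        # the autonomous car left the road
            r += 0.5
        if obs["adv_collision"]:      # the adversary crashed itself
            r -= 1.0
        r -= 0.01 * obs["adv_speed_violation"]   # keep adversarial driving plausible
        return r

    # Example step: the adversary forced the ego car off the road without crashing.
    print(adversarial_reward({"ego_collision": False, "ego_offroad": True,
                              "adv_collision": False, "adv_speed_violation": 0.0}))  # 0.5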

  

What might matter in autonomous cars adoption: first person versus third person scenarios

Oct 17, 2018
Eva Zackova, Jan Romportl

The discussion between the automotive industry, governments, ethicists, policy makers and the general public about autonomous cars' moral agency is widening, and we therefore see the need to bring more insight into what meta-factors might actually influence the outcomes of such discussions, surveys and plebiscites. In our study, we focus on the psychological (personality traits), practical (active driving experience), gender, and rhetoric/framing factors that might impact, and even determine, respondents' a priori preferences regarding autonomous cars' operation. We conducted an online survey (N=430) whose data show that the third-person scenario is less biased than the first-person scenario when presenting an ethical dilemma related to autonomous cars. According to our analysis, gender bias should also be explored in more extensive future studies. We recommend that any participatory technology assessment discourse use the third-person scenario and direct attention to the way any debate related to autonomous cars is introduced, especially in terms of its linguistic, communication, and gender aspects.

  

Future Intelligent Autonomous Robots, Ethical by Design. Learning from Autonomous Cars Ethics

Jul 16, 2021
Gordana Dodig-Crnkovic, Tobias Holstein, Patrizio Pelliccione

Development of intelligent autonomous robot technology presupposes its anticipated beneficial effect on individuals and societies. In the case of such disruptive emergent technology, not only questions of how to build it, but also why to build it and with what consequences, are important. The field of ethics of intelligent autonomous robotic cars is a good example of research with actionable practical value, where a variety of stakeholders, including the legal system and other societal and governmental actors, as well as companies and businesses, collaborate to bring about a shared view of the ethics and societal aspects of the technology. It could be used as a starting platform for approaches to the development of intelligent autonomous robots in general, considering human-machine interfaces in the different phases of the technology's life cycle: development, implementation, testing, use, and disposal. Drawing from our work on the ethics of autonomous intelligent robocars and the existing literature on the ethics of robotics, our contribution consists of a set of values and ethical principles with identified challenges and proposed approaches for meeting them. This may help stakeholders in the field of intelligent autonomous robotics to connect ethical principles with their applications. Our recommendations of ethical requirements for autonomous cars can be used for other types of intelligent autonomous robots, with the caveat that social robots require more research regarding interactions with users. We emphasize that existing ethical frameworks need to be applied in a context-sensitive way, by assessments in interdisciplinary, multi-competent teams through multi-criteria analysis. Furthermore, we argue for the need for continuous development of ethical principles, guidelines, and regulations, informed by the progress of technologies and involving relevant stakeholders.

* 11 pages, 1 figure, 1 table 
  

Courteous Autonomous Cars

Aug 16, 2018
Liting Sun, Wei Zhan, Masayoshi Tomizuka, Anca D. Dragan

Typically, autonomous cars optimize for a combination of safety, efficiency, and driving quality. But as we get better at this optimization, we start seeing behavior go from too conservative to too aggressive. The car's behavior exposes the incentives we provide in its cost function. In this work, we argue for cars that do not optimize a purely selfish cost, but also try to be courteous to other interactive drivers. We formalize courtesy as a term in the objective that measures the increase in another driver's cost induced by the autonomous car's behavior. Such a courtesy term enables the robot car to be aware of the possible irrationality of human behavior and to plan accordingly. We analyze the effect of courtesy in a variety of scenarios. We find, for example, that courteous robot cars leave more space when merging in front of a human driver. Moreover, we find that such a courtesy term can help explain real human driver behavior on the NGSIM dataset.
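
A minimal sketch of this kind of objective, assuming a hinge on the human driver's extra cost and an illustrative weight (the paper's exact formulation is not reproduced here):

    # Robot objective = its own cost + lam * (increase in the human driver's cost
    # caused by the robot's plan). Baseline, hinge, and lam=0.5 are assumptions.
    def courteous_cost(ego_cost, human_cost_with_ego, human_cost_baseline, lam=0.5):
        courtesy_penalty = max(0.0, human_cost_with_ego - human_cost_baseline)
        return ego_cost + lam * courtesy_penalty

    # A merge that saves the robot some cost but adds 2.0 units to the human's cost
    # is penalised relative to a plan that leaves the human unaffected.
    print(courteous_cost(ego_cost=4.0, human_cost_with_ego=5.0, human_cost_baseline=3.0))  # 5.0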

* International Conference on Intelligent Robots (IROS) 2018 
  

Evaluating the Robustness of Deep Reinforcement Learning for Autonomous and Adversarial Policies in a Multi-agent Urban Driving Environment

Dec 22, 2021
Aizaz Sharif, Dusica Marijan

Deep reinforcement learning is actively used for training autonomous driving agents in vision-based urban simulated environments. Given the large number of available reinforcement learning algorithms, it is still unclear which of them works better when training autonomous cars in single-agent as well as multi-agent driving environments. A comparison of deep reinforcement learning for vision-based autonomous driving will open up possibilities for training better autonomous car policies. Also, autonomous cars trained with deep reinforcement learning algorithms are known to be vulnerable to adversarial attacks, and we have little information on which algorithms would act as good adversarial agents. In this work, we provide a systematic evaluation and comparative analysis of six deep reinforcement learning algorithms for autonomous and adversarial driving in a four-way intersection scenario. Specifically, we first train autonomous cars using state-of-the-art deep reinforcement learning algorithms. Second, we test the driving capabilities of the trained autonomous policies in single-agent as well as multi-agent scenarios. Lastly, we use the same deep reinforcement learning algorithms to train adversarial driving agents in order to test the driving performance of the autonomous cars and look for possible collision and off-road driving scenarios. We perform experiments using vision-only, high-fidelity urban driving simulated environments.
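
A minimal sketch of such a comparison loop, assuming a Gym-compatible intersection environment ("IntersectionEnv-v0" is a hypothetical name) and the stable-baselines3 implementations of a few continuous-control algorithms; the paper does not specify either, and the metrics below are simplified.

    # Train several DRL algorithms on the same driving env and compare
    # collision / off-road rates. Env name and info keys are assumptions.
    import gymnasium as gym
    from stable_baselines3 import PPO, A2C, SAC, TD3, DDPG

    def evaluate(model, env, episodes=20):
        collisions = offroad = 0
        for _ in range(episodes):
            obs, _ = env.reset()
            done = False
            while not done:
                action, _ = model.predict(obs, deterministic=True)
                obs, _, terminated, truncated, info = env.step(action)
                done = terminated or truncated
            collisions += int(info.get("collision", False))
            offroad += int(info.get("offroad", False))
        return collisions / episodes, offroad / episodes

    env = gym.make("IntersectionEnv-v0")   # hypothetical four-way-intersection env
    for algo in (PPO, A2C, SAC, TD3, DDPG):
        model = algo("MlpPolicy", env).learn(total_timesteps=100_000)
        print(algo.__name__, evaluate(model, env))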

  

Computer Stereo Vision for Autonomous Driving

Dec 06, 2020
Rui Fan, Li Wang, Mohammud Junaid Bocus, Ioannis Pitas

For a long time, autonomous cars were found only in science fiction movies and series, but now they are becoming a reality. The opportunity to pick up such a vehicle from your garage forecourt is closer than you think. As an important component of autonomous systems, autonomous car perception has taken a big leap with recent advances in parallel computing architectures, such as OpenMP for multi-threading CPUs and OpenCL for GPUs. With the use of tiny but full-featured embedded supercomputers, computer stereo vision has been widely applied in autonomous cars for depth perception. The two key aspects of computer stereo vision are speed and accuracy. They are two desirable but conflicting properties: algorithms with better disparity accuracy usually have higher computational complexity. Therefore, the main aim of developing a computer stereo vision algorithm for resource-limited hardware is to improve the trade-off between speed and accuracy. In this chapter, we first introduce the autonomous car system, from the hardware aspect to the software aspect. We then discuss four autonomous car perception functionalities: 1) visual feature detection, description, and matching; 2) 3D information acquisition; 3) object detection/recognition; and 4) semantic image segmentation. Finally, we introduce the principles of computer stereo vision and parallel computing.
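
For readers who want to try the core operation, the sketch below computes a dense disparity map with OpenCV's semi-global block matcher; the image paths and parameter values are placeholders, not the settings used in the chapter.

    # Disparity from a rectified stereo pair with OpenCV (illustrative parameters).
    import cv2

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # assumed rectified pair
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=128,   # search range, must be divisible by 16
        blockSize=5,
        P1=8 * 5 * 5,         # smoothness penalty for small disparity changes
        P2=32 * 5 * 5,        # smoothness penalty for large disparity changes
    )
    # OpenCV returns fixed-point disparities scaled by 16
    disparity = matcher.compute(left, right).astype("float32") / 16.0

    # With focal length f (pixels) and baseline b (metres): depth = f * b / disparity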

* Book chapter 
  

DeepTest: Automated Testing of Deep-Neural-Network-driven Autonomous Cars

Mar 20, 2018
Yuchi Tian, Kexin Pei, Suman Jana, Baishakhi Ray

Recent advances in Deep Neural Networks (DNNs) have led to the development of DNN-driven autonomous cars that, using sensors like cameras, LiDAR, etc., can drive without any human intervention. Most major manufacturers, including Tesla, GM, Ford, BMW, and Waymo/Google, are working on building and testing different types of autonomous vehicles. The lawmakers of several US states, including California, Texas, and New York, have passed new legislation to fast-track the process of testing and deploying autonomous vehicles on their roads. However, despite their spectacular progress, DNNs, just like traditional software, often demonstrate incorrect or unexpected corner-case behaviors that can lead to potentially fatal collisions. Several such real-world accidents involving autonomous cars have already happened, including one that resulted in a fatality. Most existing testing techniques for DNN-driven vehicles are heavily dependent on the manual collection of test data under different driving conditions, which becomes prohibitively expensive as the number of test conditions increases. In this paper, we design, implement, and evaluate DeepTest, a systematic testing tool for automatically detecting erroneous behaviors of DNN-driven vehicles that can potentially lead to fatal crashes. First, our tool is designed to automatically generate test cases leveraging real-world changes in driving conditions like rain, fog, lighting conditions, etc. DeepTest systematically explores different parts of the DNN logic by generating test inputs that maximize the number of activated neurons. DeepTest found thousands of erroneous behaviors under different realistic driving conditions (e.g., blurring, rain, fog, etc.), many of which lead to potentially fatal crashes, in three top-performing DNNs from the Udacity self-driving car challenge.
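
The coverage criterion DeepTest builds on can be sketched in a few lines: count the fraction of neurons whose activation exceeds a threshold on a batch of (possibly synthetically transformed) inputs. The snippet below assumes a tf.keras model; the 0.2 threshold and the layer selection are illustrative, not DeepTest's exact configuration.

    # Rough neuron-coverage estimate for a Keras model on a batch of inputs x.
    import tensorflow as tf

    def neuron_coverage(model, x, threshold=0.2):
        layers = [l for l in model.layers if hasattr(l, "activation")]
        probe = tf.keras.Model(model.inputs, [l.output for l in layers])
        outs = probe(x)
        outs = outs if isinstance(outs, (list, tuple)) else [outs]
        activated = total = 0
        for out in outs:
            out = tf.reshape(out, (len(x), -1)).numpy()   # flatten neurons per sample
            activated += (out.max(axis=0) > threshold).sum()
            total += out.shape[1]
        return activated / total

    # e.g. compare coverage on clean frames vs. fogged/blurred/rainy variants.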

  

DARTS: Deceiving Autonomous Cars with Toxic Signs

May 31, 2018
Chawin Sitawarin, Arjun Nitin Bhagoji, Arsalan Mosenia, Mung Chiang, Prateek Mittal

Sign recognition is an integral part of autonomous cars. Any misclassification of traffic signs can potentially lead to a multitude of disastrous consequences, ranging from a life-threatening accident to even a large-scale interruption of transportation services relying on autonomous cars. In this paper, we propose and examine security attacks against sign recognition systems for Deceiving Autonomous caRs with Toxic Signs (we call the proposed attacks DARTS). In particular, we introduce two novel methods to create these toxic signs. First, we propose Out-of-Distribution attacks, which expand the scope of adversarial examples by enabling the adversary to generate them starting from an arbitrary point in image space, in contrast to prior attacks that are restricted to existing training/test data (In-Distribution). Second, we present the Lenticular Printing attack, which relies on an optical phenomenon to deceive the traffic sign recognition system. We extensively evaluate the effectiveness of the proposed attacks in both virtual and real-world settings and consider both white-box and black-box threat models. Our results demonstrate that the proposed attacks are successful under both settings and threat models. We further show that Out-of-Distribution attacks can outperform In-Distribution attacks on classifiers defended using the adversarial training defense, exposing a new attack vector for these defenses.
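
As a flavour of the simplest ingredient behind such attacks, the sketch below applies a targeted, FGSM-style perturbation that nudges an arbitrary starting image toward a chosen sign class. SignClassifier, the epsilon, and the class index are assumptions; the paper's Out-of-Distribution and Lenticular Printing attacks are considerably more elaborate.

    # Targeted FGSM-style step toward a chosen traffic-sign class (PyTorch).
    import torch
    import torch.nn.functional as F

    def targeted_fgsm(model, x, target_class, eps=0.03):
        x = x.clone().requires_grad_(True)
        loss = F.cross_entropy(model(x), torch.tensor([target_class]))
        loss.backward()
        # Step against the gradient to *increase* the target-class score.
        return (x - eps * x.grad.sign()).clamp(0, 1).detach()

    # Usage sketch (SignClassifier is hypothetical):
    # model = SignClassifier().eval()
    # start = torch.rand(1, 3, 32, 32)            # arbitrary, non-sign starting image
    # adv = targeted_fgsm(model, start, target_class=14)   # e.g. a "stop" class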

* Submitted to ACM CCS 2018; Extended version of [1801.02780] Rogue Signs: Deceiving Traffic Sign Recognition with Malicious Ads and Logos 
  