The acquisition and analysis of high-quality sensor data are essential to the development of fully autonomous driving systems and, by extension, to improving road safety. This study introduces the Interaction of Autonomous and Manually-Controlled Vehicles (IAMCV) dataset, a novel and extensive dataset focused on inter-vehicle interactions. The dataset, recorded with a rich sensor suite comprising Light Detection and Ranging (LIDAR), cameras, an Inertial Measurement Unit/Global Positioning System, and vehicle bus data acquisition, provides a comprehensive representation of real-world driving scenarios, including roundabouts, intersections, country roads, and highways, captured at diverse locations in Germany. Furthermore, the study demonstrates the versatility of the IAMCV dataset through several proof-of-concept use cases. First, an unsupervised trajectory clustering algorithm shows that vehicle movements in the dataset can be categorized without labeled training data. Second, an online camera calibration method is compared against the Robot Operating System-based standard using images from the dataset. Finally, a preliminary test with the YOLOv8 object-detection model is conducted, together with a discussion of the transferability of object detection across different LIDAR resolutions. These use cases underscore the practical utility of the collected dataset and its potential to advance research and innovation in the field of intelligent vehicles.
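As a rough illustration of the trajectory-clustering use case, the following sketch applies plain k-means to length-normalized (x, y) trajectories; the paper's actual clustering algorithm is not reproduced here, and the random-walk data stands in for real IAMCV trajectories:

```python
# Minimal sketch of unsupervised trajectory clustering; the algorithm choice
# (k-means on resampled trajectories) and the synthetic data are assumptions,
# not the paper's exact method.
import numpy as np
from sklearn.cluster import KMeans

def resample(traj, n_points=20):
    """Linearly resample a (T, 2) trajectory to a fixed number of points."""
    t = np.linspace(0, 1, len(traj))
    t_new = np.linspace(0, 1, n_points)
    return np.column_stack([np.interp(t_new, t, traj[:, d]) for d in range(2)])

# Placeholder data: random walks standing in for recorded vehicle trajectories.
rng = np.random.default_rng(0)
trajectories = [rng.normal(size=(rng.integers(15, 40), 2)).cumsum(axis=0)
                for _ in range(100)]

X = np.stack([resample(tr).ravel() for tr in trajectories])  # (N, 40) features
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(labels))  # cluster sizes, no labeled training data required
```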
For driver observation frameworks, clean datasets collected in controlled simulated environments often serve as the initial training ground. Yet, when deployed under real driving conditions, such simulator-trained models quickly face distributional shifts caused by changing illumination, a different car model, variations in subject appearance, sensor discrepancies, and other environmental alterations. This paper investigates the viability of transferring video-based driver observation models from simulation to real-world autonomous vehicles, given the frequent use of simulation data in this domain due to safety concerns. To this end, we record a dataset under actual autonomous driving conditions, involving seven participants engaged in highly distracting secondary activities. To enable direct SIM-to-REAL transfer, our dataset was designed to match an existing large-scale simulator dataset used as the training source. We utilize the Inflated 3D ConvNet (I3D) model, a popular choice for driver observation, together with Gradient-weighted Class Activation Mapping (Grad-CAM) for a detailed analysis of model decision-making. Although the simulator-based model clearly surpasses the random baseline, its recognition quality diminishes, with average accuracy dropping from 85.7% to 46.6%, and we observe strong variation across behavior classes. This underscores the challenges of model transferability and motivates our research into more robust driver observation systems capable of dealing with real driving conditions.
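To make the Grad-CAM step concrete, the sketch below applies it to a stand-in 3D CNN; the paper uses a pretrained I3D, so the tiny network, input shape, and class count here are purely illustrative:

```python
# Grad-CAM for a video model: weight the last feature map by its pooled
# gradients and sum over channels. The Tiny3DCNN is an assumption standing in
# for I3D; only the Grad-CAM mechanics are the point of this sketch.
import torch
import torch.nn as nn

class Tiny3DCNN(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d((4, 7, 7)))
        self.head = nn.Linear(16 * 4 * 7 * 7, n_classes)

    def forward(self, x):
        f = self.features(x)
        return self.head(f.flatten(1)), f

model = Tiny3DCNN().eval()
clip = torch.randn(1, 3, 16, 112, 112)      # (B, C, T, H, W) placeholder clip

logits, fmap = model(clip)
fmap.retain_grad()                           # keep gradients of the feature map
target = logits[0].argmax()
logits[0, target].backward()                 # gradient of the top class score

weights = fmap.grad.mean(dim=(2, 3, 4), keepdim=True)    # pooled gradients
cam = torch.relu((weights * fmap).sum(dim=1)).detach()   # (B, T', H', W') heatmap
cam = cam / (cam.max() + 1e-8)
print(cam.shape)  # spatio-temporal attention map to overlay on the input clip
```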
In this work, we applied the methodology outlined in IEEE Standard 2846-2022, "Assumptions in Safety-Related Models for Automated Driving Systems," to extract information on the behavior of other road users in driving scenarios. The method includes defining high-level scenarios, determining kinematic characteristics, evaluating safety relevance, and making assumptions about reasonably foreseeable behaviors. These assumptions were expressed as kinematic bounds, whose numerical values were extracted with Python scripts from realistic data in the UniD dataset. The resulting information enables Automated Driving System designers to specify the parameters and limits of a road user's state in a specific scenario, and it can be used to establish starting conditions for testing a vehicle equipped with an Automated Driving System in simulation or on actual roads.
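A hedged sketch of the kind of Python processing described above, extracting percentile-based kinematic bounds from trajectory data; the uniD/inD-style column names and the 99th-percentile choice are assumptions of this illustration, not the paper's exact scripts:

```python
# Derive kinematic bounds (max speed, max acceleration) from trajectory data.
import numpy as np
import pandas as pd

# In practice: tracks = pd.read_csv("00_tracks.csv")  (one per-recording CSV).
# Synthetic stand-in below, using the uniD/inD-style column names assumed here.
rng = np.random.default_rng(0)
tracks = pd.DataFrame({
    "xVelocity": rng.normal(0, 5, 10_000),
    "yVelocity": rng.normal(0, 5, 10_000),
    "xAcceleration": rng.normal(0, 1, 10_000),
    "yAcceleration": rng.normal(0, 1, 10_000),
})

speed = np.hypot(tracks["xVelocity"], tracks["yVelocity"])
accel = np.hypot(tracks["xAcceleration"], tracks["yAcceleration"])

# A high percentile, rather than the absolute maximum, keeps single outliers
# from dominating the assumed bound on reasonably foreseeable behavior.
bounds = {
    "max_speed_mps": float(speed.quantile(0.99)),
    "max_accel_mps2": float(accel.quantile(0.99)),
}
print(bounds)
```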
This paper presents the development of the JKU-ITS Last Mile Delivery Robot. The proposed approach combines a 3D LIDAR, an RGB-D camera, an IMU, and a GPS sensor mounted on a mobile robot slope mower. An embedded computer running ROS1 processes the sensor data streams to enable 2D and 3D Simultaneous Localization and Mapping, 2D localization, and object detection with a convolutional neural network.
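As a sketch of how such sensor streams are typically consumed on the ROS1 side, the following minimal rospy node subscribes to LIDAR, IMU, and GPS topics; the topic names are assumptions, not the robot's actual configuration:

```python
#!/usr/bin/env python
# Minimal ROS1 listener for the three sensor streams named above.
import rospy
from sensor_msgs.msg import PointCloud2, Imu, NavSatFix

def lidar_cb(msg):
    rospy.loginfo("LIDAR cloud: %d bytes", len(msg.data))

def imu_cb(msg):
    rospy.loginfo("IMU yaw rate: %.3f rad/s", msg.angular_velocity.z)

def gps_cb(msg):
    rospy.loginfo("GPS fix: %.6f, %.6f", msg.latitude, msg.longitude)

if __name__ == "__main__":
    rospy.init_node("sensor_listener")
    rospy.Subscriber("/velodyne_points", PointCloud2, lidar_cb)  # assumed topic
    rospy.Subscriber("/imu/data", Imu, imu_cb)                   # assumed topic
    rospy.Subscriber("/gps/fix", NavSatFix, gps_cb)              # assumed topic
    rospy.spin()
```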
The transportation sector accounts for about 25% of global greenhouse gas emissions, so improving energy efficiency in the traffic sector is crucial for reducing the carbon footprint. Efficiency is typically measured as energy use per traveled distance, e.g., liters of fuel per kilometer. Leading factors affecting energy efficiency include the type of vehicle, the environment, driver behavior, and weather conditions. These varying factors introduce uncertainty into estimates of a vehicle's energy efficiency. In this paper, we propose an ensemble learning approach based on deep neural networks (ENN), designed to reduce predictive uncertainty and to output measures of that uncertainty. We evaluated it on the publicly available Vehicle Energy Dataset (VED) and compared it with several baselines per vehicle and energy type. The results show high predictive performance and demonstrate that the approach yields a useful measure of predictive uncertainty.
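A minimal sketch of the deep-ensemble idea, assuming a simple regression setup in PyTorch: several independently initialized networks are trained on the same data, the ensemble mean serves as the prediction, and the spread across members as an uncertainty proxy (architecture, data, and training schedule are illustrative, not the paper's ENN configuration):

```python
# Deep ensemble for regression with a spread-based uncertainty measure.
import numpy as np
import torch
import torch.nn as nn

def make_net(n_in):
    return nn.Sequential(nn.Linear(n_in, 32), nn.ReLU(), nn.Linear(32, 1))

# Placeholder features/target standing in for trip features and consumption.
rng = np.random.default_rng(0)
X = torch.tensor(rng.normal(size=(500, 4)), dtype=torch.float32)
y = X[:, :1] * 2.0 + 0.1 * torch.randn(500, 1)

ensemble = [make_net(4) for _ in range(5)]
for net in ensemble:                      # each member gets its own random init
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(200):
        opt.zero_grad()
        loss = nn.functional.mse_loss(net(X), y)
        loss.backward()
        opt.step()

with torch.no_grad():
    preds = torch.stack([net(X) for net in ensemble])  # (members, N, 1)
mean = preds.mean(dim=0)   # ensemble prediction
std = preds.std(dim=0)     # member disagreement as a predictive-uncertainty proxy
print(float(mean[0]), float(std[0]))
```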
With the race towards higher levels of vehicle automation, it is imperative to guarantee the safety of all traffic participants involved. Yet, while high-risk traffic situations between two vehicles are well understood, the tools to properly analyze traffic situations involving more vehicles are lacking. This paper proposes a method to compare Surrogate Safety Measure (SSM) values in multi-vehicle highway situations such as lane changes involving three vehicles. The method allows for a comprehensive statistical analysis and highlights how the safety distance between vehicles shifts in favor of the traffic conflict between the leading vehicle and the lane-changing vehicle.
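As one concrete example of the kind of SSM such an analysis builds on, the sketch below computes Time-to-Collision (TTC) for the two conflicts created when a vehicle merges between a new leader and a new follower; all numerical values are illustrative, and TTC is only one of several possible SSMs:

```python
# Time-to-Collision for the two conflicts in a three-vehicle lane change.
def ttc(gap_m, v_follower_mps, v_leader_mps):
    """TTC in seconds; infinite when the follower is not closing the gap."""
    closing = v_follower_mps - v_leader_mps
    return gap_m / closing if closing > 0 else float("inf")

# The lane-changing vehicle merges between a new leader and a new follower;
# the gaps and speeds below are placeholder numbers.
ttc_to_leader = ttc(gap_m=18.0, v_follower_mps=30.0, v_leader_mps=27.0)
ttc_from_follower = ttc(gap_m=12.0, v_follower_mps=31.0, v_leader_mps=30.0)
print(f"TTC to leader: {ttc_to_leader:.1f} s, "
      f"TTC from follower: {ttc_from_follower:.1f} s")
```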
The continuous advance of the automotive industry is producing ever more capable driver assistance systems that automate certain tasks, with the clear aim of achieving vehicles to which the driving task can be delegated completely. These advances will change the paradigm of the automotive market, as in the case of insurance. For this reason, CESVIMAP and the Universidad Carlos III de Madrid are working on an Autonomous Testing pLatform for insurAnce reSearch (ATLAS) to study this technology and gain first-hand knowledge of the responsibilities of each agent involved in developing the vehicles of the future. This work gathers part of the advancements made in ATLAS, which have made it possible to run tests and demonstrations with an autonomous vehicle in real environments, bringing the vehicle closer to future users. Building on this work, and in collaboration with the Johannes Kepler University Linz, the impact, degree of acceptance, and confidence of users in autonomous vehicles have been studied after the users took a trip on board a fully autonomous vehicle such as ATLAS. This study found that, while most users would be willing to use an autonomous vehicle, the same users remain concerned about this type of technology. Understanding the reasons for this concern can therefore help define the future of autonomous cars.
In this paper, we present our brand-new platform for Automated Driving research. The chosen vehicle is a TOYOTA RAV4 hybrid SUV equipped with exteroceptive sensors (a multilayer LIDAR, a monocular camera, radar, and GPS) and proprioceptive sensors (wheel encoders and a 9-DOF IMU). These sensors are integrated in the vehicle via a main computer running ROS1 under Ubuntu 20.04. Additionally, we installed a Comma Two device running the open-source ADAS openpilot to control the vehicle. The platform is currently used for research on autonomous vehicles, interaction between humans and autonomous vehicles, human factors, and energy consumption.
Understanding human driving behavior is crucial to developing autonomous vehicle algorithms. However, most low-level automation, such as that in advanced driver assistance systems (ADAS), is based on objective safety measures, which are not always aligned with what drivers perceive as safe or with their corresponding driving behavior. This paper analyzes the bridge between subjective perception and objective safety measures, focusing specifically on lane-change scenarios. Results show statistically significant differences between what drivers perceive as safe and the objective metrics, depending on the specific maneuver and the drivers' location.
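A minimal sketch of the kind of paired statistical comparison such an analysis might use, here a Wilcoxon signed-rank test on synthetic subjective ratings versus an objective metric; the test choice and the data are assumptions, not the paper's exact analysis:

```python
# Paired comparison of perceived vs. objective safety over lane-change events.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
n_events = 40
objective = rng.uniform(0, 1, n_events)   # e.g., a normalized SSM-based risk score
# Synthetic perceived risk, systematically offset from the objective score.
subjective = np.clip(objective + rng.normal(0.1, 0.2, n_events), 0, 1)

stat, p = wilcoxon(subjective, objective)
print(f"Wilcoxon W = {stat:.1f}, p = {p:.4f}")
if p < 0.05:
    print("Perceived and objective safety differ significantly in this sample.")
```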