
Yaobin Chen


An Efficient Probabilistic Solution to Mapping Errors in LiDAR-Camera Fusion for Autonomous Vehicles

Nov 08, 2023
Dan Shen, Zhengming Zhang, Renran Tian, Yaobin Chen, Rini Sherony

LiDAR-camera fusion is one of the core processes in the perception systems of current automated driving systems. The typical sensor fusion process consists of a series of coordinate transformation operations following system calibration. Although a significant amount of research has been done to improve fusion accuracy, inherent data mapping errors remain in practice, related to system synchronization offsets, vehicle vibrations, small target sizes, and fast relative speeds. Moreover, increasingly complicated algorithms for improving fusion accuracy can overwhelm onboard computational resources, limiting actual implementation. This study proposes a novel, low-cost probabilistic LiDAR-camera fusion method to alleviate these inherent mapping errors in scene reconstruction. By calculating shape similarity with KL-divergence and applying a RANSAC-regression-based trajectory smoother, the effects of LiDAR-camera mapping errors on object localization and distance estimation are minimized. Designed experiments demonstrate the robustness and effectiveness of the proposed strategy.
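The two core operations named in the abstract can be illustrated with a minimal sketch (not the authors' implementation): KL-divergence between two normalized shape histograms, and a plain RANSAC line fit as a stand-in for the regression-based trajectory smoother. Function names and thresholds here are illustrative assumptions.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-10):
    """KL divergence between two histograms used as shape descriptors.

    Both inputs are re-normalized; eps avoids log(0) for empty bins.
    """
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def ransac_line_smooth(t, d, n_iters=200, thresh=0.5, seed=0):
    """Robustly fit d ~ a*t + b and return the smoothed trajectory.

    Repeatedly fits a line through two random samples, keeps the model
    with the most inliers, then refits on those inliers. Outlier
    distance estimates (e.g. from mapping errors) are thereby ignored.
    """
    rng = np.random.default_rng(seed)
    t, d = np.asarray(t, float), np.asarray(d, float)
    best_inliers = None
    for _ in range(n_iters):
        i, j = rng.choice(len(t), size=2, replace=False)
        if t[i] == t[j]:
            continue
        a = (d[j] - d[i]) / (t[j] - t[i])
        b = d[i] - a * t[i]
        inliers = np.abs(d - (a * t + b)) < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    a, b = np.polyfit(t[best_inliers], d[best_inliers], 1)
    return a * t + b
```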


Risk assessment and mitigation of e-scooter crashes with naturalistic driving data

Dec 24, 2022
Avinash Prabu, Renran Tian, Stanley Chien, Lingxi Li, Yaobin Chen, Rini Sherony


Recently, e-scooter-involved crashes have increased significantly, but little information is available about the behaviors of on-road e-scooter riders. Most existing e-scooter crash research is based on retrospectively descriptive media reports, emergency room patient records, and crash reports. This paper presents a naturalistic driving study focused on e-scooter and vehicle encounters. The goal is to quantitatively measure the behaviors of e-scooter riders in different encounters to help facilitate crash scenario modeling, baseline behavior modeling, and the potential future development of in-vehicle mitigation algorithms. The data was collected using an instrumented vehicle and an e-scooter rider wearable system. A three-step data analysis process is developed. First, semi-automatic data labeling extracts e-scooter rider images and non-rider human images in similar environments to train an e-scooter-rider classifier. Then, a multi-step scene reconstruction pipeline generates vehicle and e-scooter trajectories in all encounters. The final step is to model e-scooter rider behaviors and e-scooter-vehicle encounter scenarios. A total of 500 vehicle-to-e-scooter interactions are analyzed, and the variables characterizing these encounters are also discussed.


A Wearable Data Collection System for Studying Micro-Level E-Scooter Behavior in Naturalistic Road Environment

Dec 22, 2022
Avinash Prabu, Dan Shen, Renran Tian, Stanley Chien, Lingxi Li, Yaobin Chen, Rini Sherony


As one of the most popular micro-mobility options, e-scooters are spreading across hundreds of big cities and college towns in the US and worldwide. At the same time, e-scooters are posing new challenges to traffic safety. In general, e-scooters are meant to be ridden in bike lanes or on sidewalks, or to share the road with cars at a maximum speed of about 15-20 mph, which makes them more flexible and much faster than pedestrians and bicyclists. These features make e-scooters challenging for human drivers, pedestrians, vehicle active safety modules, and self-driving modules to see and interact with. To study this new mobility option and address the safety concerns of e-scooter riders and other road users, this paper proposes a wearable data collection system for investigating micro-level e-scooter motion behavior in a naturalistic road environment. An e-scooter-based data acquisition system has been developed by integrating LiDAR, cameras, and GPS using the Robot Operating System (ROS). Software frameworks are developed to support hardware interfaces, sensor operation, sensor synchronization, and data saving. The integrated system can collect data continuously for hours, meeting all requirements, including calibration accuracy and the capability to capture vehicle and e-scooter encounter data.

* Conference: Fast-zero'21, Kanazawa, Japan. Date of publication: Sep 2021. Publisher: JSAE
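The sensor-synchronization step mentioned in the abstract is typically handled in ROS by `message_filters.ApproximateTimeSynchronizer`. Its core idea, pairing message streams by nearest timestamp within a tolerance, can be sketched without ROS as follows; this is a simplified, hypothetical version, not the paper's actual software framework.

```python
def sync_frames(lidar_stamps, cam_stamps, max_offset=0.05):
    """Pair each LiDAR timestamp with the nearest camera timestamp
    within max_offset seconds (an approximate-time policy).

    Both inputs are sorted; a single camera frame may be reused for
    adjacent LiDAR sweeps, which real synchronizers typically forbid.
    """
    pairs = []
    cam = sorted(cam_stamps)
    j = 0
    for t in sorted(lidar_stamps):
        # Advance j while the next camera stamp is at least as close.
        while j + 1 < len(cam) and abs(cam[j + 1] - t) <= abs(cam[j] - t):
            j += 1
        if abs(cam[j] - t) <= max_offset:
            pairs.append((t, cam[j]))
    return pairs
```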

SceNDD: A Scenario-based Naturalistic Driving Dataset

Dec 22, 2022
Avinash Prabu, Nitya Ranjan, Lingxi Li, Renran Tian, Stanley Chien, Yaobin Chen, Rini Sherony


In this paper, we propose SceNDD: a scenario-based naturalistic driving dataset built upon data collected from an instrumented vehicle in downtown Indianapolis. The data collection was completed in 68 driving sessions with different drivers, where each session lasted about 20-40 minutes. The main goal of creating this dataset is to provide the research community with real driving scenarios that have diverse trajectories and driving behaviors. The dataset contains the ego vehicle's waypoints, velocity, and yaw angle, as well as each non-ego actor's waypoints, velocity, yaw angle, entry time, and exit time. Certain flexibility is provided so that users can add actors, sensors, lanes, roads, and obstacles to the existing scenarios. We used a Joint Probabilistic Data Association (JPDA) tracker to detect non-ego vehicles on the road. We present some preliminary results of the proposed dataset and a few applications associated with it. The complete dataset is expected to be released by early 2023.

* Conference: 2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC). Link: https://ieeexplore.ieee.org/document/9921953 
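The record structure described in the abstract (per-actor waypoints, velocity, yaw, and entry/exit times) might be represented as below; the field names and query method are assumptions inferred from the abstract, not the dataset's actual schema.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ActorTrack:
    """One actor's trajectory within a scenario (hypothetical schema)."""
    waypoints: List[Tuple[float, float]]  # (x, y) positions
    velocity: List[float]                 # speed per waypoint, m/s
    yaw: List[float]                      # heading per waypoint, rad
    entry_time: float = 0.0               # s, when the actor enters
    exit_time: float = float("inf")       # s, when the actor leaves

@dataclass
class Scenario:
    """Ego trajectory plus the non-ego actors it encounters."""
    ego: ActorTrack
    actors: List[ActorTrack] = field(default_factory=list)

    def actors_at(self, t: float) -> List[ActorTrack]:
        """Non-ego actors present in the scene at time t."""
        return [a for a in self.actors
                if a.entry_time <= t <= a.exit_time]
```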

PSI: A Pedestrian Behavior Dataset for Socially Intelligent Autonomous Car

Dec 05, 2021
Tina Chen, Renran Tian, Yaobin Chen, Joshua Domeyer, Heishiro Toyoda, Rini Sherony, Taotao Jing, Zhengming Ding


Prediction of pedestrian behavior is critical for fully autonomous vehicles to drive safely and efficiently on busy city streets. Future autonomous cars need to fit into mixed traffic with not only technical but also social capabilities. Although more and more algorithms and datasets have been developed to predict pedestrian behaviors, these efforts lack benchmark labels and the capability to estimate the temporal-dynamic intent changes of pedestrians, provide explanations of the interaction scenes, and support algorithms with social intelligence. This paper proposes and shares a new benchmark dataset, the IUPUI-CSRC Pedestrian Situated Intent (PSI) dataset, with two innovative labels in addition to comprehensive computer vision labels. The first novel label is the dynamic intent change of pedestrians crossing in front of the ego-vehicle, collected from 24 drivers with diverse backgrounds. The second is a text-based explanation of the driver's reasoning process when estimating pedestrian intents and predicting their behaviors during the interaction period. These innovative labels enable several computer vision tasks, including pedestrian intent/behavior prediction, vehicle-pedestrian interaction segmentation, and video-to-language mapping for explainable algorithms. The released dataset can fundamentally improve the development of pedestrian behavior prediction models and support the development of socially intelligent autonomous cars that interact with pedestrians efficiently. The dataset has been evaluated on different tasks and is publicly released.
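As an illustration of how per-frame intent judgments from the 24 drivers could be consumed, the sketch below aggregates binary "will cross" annotations into soft labels and a per-frame agreement score; this aggregation scheme is an assumption for illustration, not necessarily how PSI packages its labels.

```python
import numpy as np

def aggregate_intent(votes):
    """Aggregate per-annotator intent judgments into soft labels.

    votes: (n_annotators, n_frames) array of 0/1 'pedestrian will
    cross' judgments. Returns the per-frame soft label P(cross) and
    the per-frame agreement (fraction siding with the majority).
    """
    v = np.asarray(votes, dtype=float)
    soft = v.mean(axis=0)                    # P(cross) per frame
    agreement = np.maximum(soft, 1.0 - soft) # majority fraction
    return soft, agreement
```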
