Fang-Chieh Chou

Reachability Analysis for FollowerStopper: Safety Analysis and Experimental Results

Dec 29, 2021
Fang-Chieh Chou, Marsalis Gibson, Rahul Bhadani, Alexandre M. Bayen, Jonathan Sprinkle


Motivated by earlier work and by the developer of a new algorithm, the FollowerStopper, this article uses reachability analysis to verify the safety of the FollowerStopper algorithm, a controller designed to dampen stop-and-go traffic waves. With more than 1100 miles of driving data collected by our physical platform, we validate our analysis results by comparing them to human driving behavior. The FollowerStopper controller has been demonstrated to dampen stop-and-go traffic waves at low speed, but previous analysis of its relative safety has been limited to upper and lower bounds of acceleration. To expand upon that analysis, reachability analysis is used to investigate safety at the speeds at which the controller was originally tested as well as at higher speeds. Two formulations of the safety analysis, with different criteria, are presented: distance-based and time headway-based. The FollowerStopper is considered safe under the distance-based criterion. However, simulation results demonstrate that the FollowerStopper is not representative of human drivers: it follows the lead vehicle too closely, specifically at distances a human would deem unsafe. Under the time headway-based safety analysis, on the other hand, the FollowerStopper is no longer considered safe. A modified FollowerStopper is proposed to satisfy the time headway-based safety criterion. Simulation results show that the response of the proposed FollowerStopper better represents human driver behavior.

* 6 pages; 10 figures; ICRA publication 
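
The two safety criteria can be illustrated with a minimal sketch. The thresholds below (a 5 m minimum gap, a 1.0 s minimum time headway) and the function names are illustrative assumptions, not the values used in the paper.

    # Hypothetical sketch of the two safety criteria discussed in the paper;
    # the exact thresholds and formulations in the article may differ.

    def distance_safe(gap_m: float, min_gap_m: float = 5.0) -> bool:
        """Distance-based criterion: the bumper-to-bumper gap must stay above a floor."""
        return gap_m >= min_gap_m

    def time_headway_safe(gap_m: float, ego_speed_mps: float,
                          min_headway_s: float = 1.0) -> bool:
        """Time headway criterion: gap divided by the follower's speed must exceed a threshold."""
        if ego_speed_mps <= 0.0:
            return True  # a stopped follower cannot close the gap
        return gap_m / ego_speed_mps >= min_headway_s

    if __name__ == "__main__":
        gap, speed = 8.0, 10.0                    # metres, metres per second
        print(distance_safe(gap))                 # True: the gap is above the distance floor
        print(time_headway_safe(gap, speed))      # False: 0.8 s headway is below the 1.0 s threshold

The example point (8 m gap at 10 m/s) shows how the two criteria can disagree: the distance-based check passes while the time headway check fails.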

Investigating the Effect of Sensor Modalities in Multi-Sensor Detection-Prediction Models

Jan 09, 2021
Abhishek Mohta, Fang-Chieh Chou, Brian C. Becker, Carlos Vallespi-Gonzalez, Nemanja Djuric


Detection of surrounding objects and prediction of their motion are critical components of a self-driving system. Recently proposed models that jointly address these tasks rely on a number of sensors to achieve state-of-the-art performance. However, this increases system complexity and may result in a brittle model that overfits to any single sensor modality while ignoring others, leading to reduced generalization. We focus on this important problem and analyze the contribution of each sensor modality to model performance. In addition, we investigate the use of sensor dropout to mitigate the above-mentioned issues, leading to a more robust, better-performing model on real-world driving data.
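
As a rough illustration of the sensor dropout idea, the sketch below randomly zeroes out one modality's feature map during training so the fused model cannot over-rely on a single sensor; the module name, tensor shapes, and drop probability are assumptions rather than the paper's implementation.

    import torch
    import torch.nn as nn

    class SensorDropout(nn.Module):
        """Randomly blanks one modality's features during training."""
        def __init__(self, drop_prob: float = 0.2):
            super().__init__()
            self.drop_prob = drop_prob

        def forward(self, lidar_feat: torch.Tensor, camera_feat: torch.Tensor):
            if self.training and torch.rand(1).item() < self.drop_prob:
                # Drop exactly one modality, chosen uniformly at random.
                if torch.rand(1).item() < 0.5:
                    lidar_feat = torch.zeros_like(lidar_feat)
                else:
                    camera_feat = torch.zeros_like(camera_feat)
            return lidar_feat, camera_feat

    # Example: batch of 2, 64-channel BEV feature maps of size 100x100.
    drop = SensorDropout(drop_prob=0.5).train()
    lidar, camera = torch.randn(2, 64, 100, 100), torch.randn(2, 64, 100, 100)
    lidar, camera = drop(lidar, camera)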


Uncertainty-Aware Vehicle Orientation Estimation for Joint Detection-Prediction Models

Nov 05, 2020
Henggang Cui, Fang-Chieh Chou, Jake Charland, Carlos Vallespi-Gonzalez, Nemanja Djuric


Object detection is a critical component of a self-driving system, tasked with inferring the current states of the surrounding traffic actors. While there exist a number of studies on the problem of inferring the position and shape of vehicle actors, understanding actors' orientation remains a challenge for existing state-of-the-art detectors. Orientation is an important property for downstream modules of an autonomous system, particularly relevant for motion prediction of stationary or reversing actors where current approaches struggle. We focus on this task and present a method that extends the existing models that perform joint object detection and motion prediction, allowing us to more accurately infer vehicle orientations. In addition, the approach is able to quantify prediction uncertainty, outputting the probability that the inferred orientation is flipped, which allows for improved motion prediction and safer autonomous operations. Empirical results show the benefits of the approach, obtaining state-of-the-art performance on the open-sourced nuScenes data set.
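
A minimal sketch of such an output head, assuming a (sin, cos) heading parameterization plus one logit for the probability that the estimate is flipped by 180 degrees; this illustrates the idea rather than the paper's exact architecture.

    import math
    import torch
    import torch.nn as nn

    class OrientationHead(nn.Module):
        """Predicts a heading angle and the probability that it is flipped by pi."""
        def __init__(self, in_channels: int = 128):
            super().__init__()
            # Two outputs parameterize (sin, cos) of the heading, one is the flip logit.
            self.out = nn.Linear(in_channels, 3)

        def forward(self, feat: torch.Tensor):
            out = self.out(feat)
            heading = torch.atan2(out[..., 0], out[..., 1])   # radians in (-pi, pi]
            flip_prob = torch.sigmoid(out[..., 2])            # P(true heading = heading + pi)
            return heading, flip_prob

    head = OrientationHead()
    heading, flip_prob = head(torch.randn(4, 128))            # features for 4 detected vehicles
    # The flipped hypothesis, wrapped back into (-pi, pi], for downstream consumers:
    flipped = torch.atan2(torch.sin(heading + math.pi), torch.cos(heading + math.pi))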


Multi-View Fusion of Sensor Data for Improved Perception and Prediction in Autonomous Driving

Aug 27, 2020
Sudeep Fadadu, Shreyash Pandey, Darshan Hegde, Yi Shi, Fang-Chieh Chou, Nemanja Djuric, Carlos Vallespi-Gonzalez


We present an end-to-end method for object detection and trajectory prediction utilizing multi-view representations of LiDAR returns. Our method builds on a state-of-the-art Bird's-Eye View (BEV) network that fuses voxelized features from a sequence of historical LiDAR data as well as a rasterized high-definition map to perform detection and prediction tasks. We extend the BEV network with additional LiDAR Range-View (RV) features that use the raw LiDAR information in its native, non-quantized representation. The RV feature map is projected into BEV and fused with the BEV features computed from LiDAR and the high-definition map. The fused features are then further processed to output the final detections and trajectories within a single end-to-end trainable network. In addition, within this framework the RV fusion of LiDAR and camera data is performed in a straightforward and computationally efficient manner. The proposed approach improves the state-of-the-art on proprietary large-scale real-world data collected by a fleet of self-driving vehicles, as well as on the public nuScenes data set.
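
The projection-and-fusion step can be sketched as scattering per-point range-view features into BEV cells and concatenating them with the native BEV features; the grid size, cell resolution, and channel counts below are assumed for illustration only.

    import torch

    def rv_to_bev(points_xy, rv_feats, grid_size=200, cell_m=0.5):
        """points_xy: (N, 2) LiDAR x/y in metres; rv_feats: (N, C) per-point RV features."""
        n, c = rv_feats.shape
        bev = torch.zeros(c, grid_size, grid_size)
        # Shift so the ego vehicle sits at the grid centre, then quantize to cells.
        ij = (points_xy / cell_m + grid_size / 2).long().clamp(0, grid_size - 1)
        bev[:, ij[:, 1], ij[:, 0]] = rv_feats.t()  # the last point landing in a cell wins
        return bev

    points = torch.randn(1000, 2) * 30            # points within roughly +/-90 m of the ego
    rv_feats = torch.randn(1000, 32)              # features from a range-view backbone
    bev_from_rv = rv_to_bev(points, rv_feats)     # (32, 200, 200)
    bev_native = torch.randn(64, 200, 200)        # features from the BEV backbone
    fused = torch.cat([bev_native, bev_from_rv], dim=0)  # channel-wise fusion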


MultiXNet: Multiclass Multistage Multimodal Motion Prediction

Jun 10, 2020
Nemanja Djuric, Henggang Cui, Zhaoen Su, Shangxuan Wu, Huahua Wang, Fang-Chieh Chou, Luisa San Martin, Song Feng, Rui Hu, Yang Xu, Alyssa Dayan, Sidney Zhang, Brian C. Becker, Gregory P. Meyer, Carlos Vallespi-Gonzalez, Carl K. Wellington


One of the critical pieces of the self-driving puzzle is understanding the surroundings of the self-driving vehicle (SDV) and predicting how these surroundings will change in the near future. To address this task we propose MultiXNet, an end-to-end approach for detection and motion prediction based directly on lidar sensor data. This approach builds on prior work by handling multiple classes of traffic actors, adding a jointly trained second-stage trajectory refinement step, and producing a multimodal probability distribution over future actor motion that includes both multiple discrete traffic behaviors and calibrated continuous uncertainties. The method was evaluated on a large-scale, real-world data set collected by a fleet of SDVs in several cities, with the results indicating that it outperforms existing state-of-the-art approaches.
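
A hedged sketch of what a multimodal output head of this kind might look like: per actor, M candidate trajectories, a probability per mode, and per-waypoint uncertainties. The shapes, mode count, and prediction horizon are illustrative assumptions, not the paper's configuration.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MultimodalTrajectoryHead(nn.Module):
        def __init__(self, in_dim=256, num_modes=3, horizon=30):
            super().__init__()
            self.num_modes, self.horizon = num_modes, horizon
            # Per mode: horizon waypoints x (x, y, sigma_x, sigma_y), plus one mode logit.
            self.out = nn.Linear(in_dim, num_modes * (horizon * 4 + 1))

        def forward(self, actor_feat):
            b = actor_feat.shape[0]
            out = self.out(actor_feat).view(b, self.num_modes, -1)
            mode_logits = out[..., 0]
            traj = out[..., 1:].view(b, self.num_modes, self.horizon, 4)
            waypoints = traj[..., :2]
            sigmas = F.softplus(traj[..., 2:])        # keep continuous uncertainties positive
            return waypoints, sigmas, mode_logits.softmax(dim=-1)

    head = MultimodalTrajectoryHead()
    waypoints, sigmas, probs = head(torch.randn(8, 256))  # features for 8 detected actors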


MultiNet: Multiclass Multistage Multimodal Motion Prediction

Jun 03, 2020
Nemanja Djuric, Henggang Cui, Zhaoen Su, Shangxuan Wu, Huahua Wang, Fang-Chieh Chou, Luisa San Martin, Song Feng, Rui Hu, Yang Xu, Alyssa Dayan, Sidney Zhang, Brian C. Becker, Gregory P. Meyer, Carlos Vallespi-Gonzalez, Carl K. Wellington


One of the critical pieces of the self-driving puzzle is understanding the surroundings of the self-driving vehicle (SDV) and predicting how these surroundings will change in the near future. To address this task we propose MultiNet, an end-to-end approach for detection and motion prediction based directly on lidar sensor data. This approach builds on prior work by handling multiple classes of traffic actors, adding a jointly trained second-stage trajectory refinement step, and producing a multimodal probability distribution over future actor motion that includes both multiple discrete traffic behaviors and calibrated continuous uncertainties. The method was evaluated on a large-scale, real-world data set collected by a fleet of SDVs in several cities, with the results indicating that it outperforms existing state-of-the-art approaches.


Improving Movement Predictions of Traffic Actors in Bird's-Eye View Models using GANs and Differentiable Trajectory Rasterization

Apr 14, 2020
Eason Wang, Henggang Cui, Sai Yalamanchi, Mohana Moorthy, Fang-Chieh Chou, Nemanja Djuric


One of the most critical pieces of the self-driving puzzle is the task of predicting the future movement of surrounding traffic actors, which allows the autonomous vehicle to safely and effectively plan its future route in a complex world. Recently, a number of algorithms have been proposed to address this important problem, spurred by a growing interest of researchers from both industry and academia. Methods based on top-down scene rasterization on one side and Generative Adversarial Networks (GANs) on the other have been shown to be particularly successful, obtaining state-of-the-art accuracies on the task of traffic movement prediction. In this paper we build upon these two directions and propose a raster-based conditional GAN architecture, powered by a novel differentiable rasterizer module at the input of the conditional discriminator that maps generated trajectories into the raster space in a differentiable manner. This simplifies the task for the discriminator, as trajectories that are not scene-compliant are easier to discern, and allows gradients to flow back, forcing the generator to output better, more realistic trajectories. We evaluated the proposed method on a large-scale, real-world data set, showing that it outperforms state-of-the-art GAN-based baselines.
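
The core idea of a differentiable rasterizer can be sketched by rendering each waypoint as a small Gaussian blob, so the raster is a smooth function of the trajectory coordinates and gradients can flow back to the generator; the kernel width and grid parameters below are assumptions for illustration.

    import torch

    def rasterize_trajectory(xy, grid_size=100, cell_m=0.5, sigma_m=1.0):
        """xy: (T, 2) waypoints in metres, ego-centred. Returns a (grid, grid) raster."""
        coords = torch.arange(grid_size, dtype=torch.float32) * cell_m \
                 - grid_size * cell_m / 2                 # cell centres along one axis
        gx, gy = torch.meshgrid(coords, coords, indexing="xy")
        raster = torch.zeros(grid_size, grid_size)
        for px, py in xy:                                 # one Gaussian blob per waypoint
            raster = raster + torch.exp(-((gx - px) ** 2 + (gy - py) ** 2)
                                        / (2 * sigma_m ** 2))
        return raster

    traj = (torch.randn(30, 2) * 10).requires_grad_()     # a generated trajectory
    raster = rasterize_trajectory(traj)
    raster.sum().backward()                               # gradients flow back to the waypoints
    print(traj.grad.shape)                                # torch.Size([30, 2])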


Deep Kinematic Models for Physically Realistic Prediction of Vehicle Trajectories

Aug 01, 2019
Henggang Cui, Thi Nguyen, Fang-Chieh Chou, Tsung-Han Lin, Jeff Schneider, David Bradley, Nemanja Djuric


Self-driving vehicles (SDVs) hold great potential for improving traffic safety and are poised to positively affect the quality of life of millions of people. One of the critical aspects of autonomous technology is understanding and predicting the future movement of vehicles surrounding the SDV. This work presents a deep-learning-based method for physically realistic motion prediction of such traffic actors. Previous work did not explicitly encode physical realism and instead relied on the models to learn the laws of physics directly from the data, potentially resulting in implausible trajectory predictions. To account for this issue, we propose a method that seamlessly combines ideas from AI with physically grounded vehicle motion models. In this way we get the best of both worlds, coupling powerful learning models with strong physical guarantees for their outputs. The proposed approach is general and applicable to any type of learning method. Extensive experiments using deep convnets on large-scale, real-world data strongly indicate its benefits, outperforming the existing state-of-the-art.
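
One simple way to realize this coupling, sketched below under assumed parameters, is to have the network predict controls (acceleration and steering) and unroll a kinematic bicycle model, so every output trajectory is dynamically feasible; the paper's exact vehicle model and interface may differ.

    import math

    def unroll_bicycle(x, y, heading, speed, controls, wheelbase=2.8, dt=0.1):
        """controls: list of (acceleration m/s^2, steering angle rad), one pair per step."""
        traj = []
        for accel, steer in controls:
            x += speed * math.cos(heading) * dt
            y += speed * math.sin(heading) * dt
            heading += speed / wheelbase * math.tan(steer) * dt
            speed = max(0.0, speed + accel * dt)          # vehicles do not drive backwards here
            traj.append((x, y, heading, speed))
        return traj

    # Example: a constant gentle left turn at constant speed for 3 seconds at 10 Hz.
    controls = [(0.0, 0.05)] * 30
    trajectory = unroll_bicycle(0.0, 0.0, 0.0, 10.0, controls)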


Predicting Motion of Vulnerable Road Users using High-Definition Maps and Efficient ConvNets

Jun 20, 2019
Fang-Chieh Chou, Tsung-Han Lin, Henggang Cui, Vladan Radosavljevic, Thi Nguyen, Tzu-Kuo Huang, Matthew Niedoba, Jeff Schneider, Nemanja Djuric


Following detection and tracking of traffic actors, prediction of their future motion is the next critical component of self-driving vehicle (SDV) technology, allowing the SDV to operate safely and efficiently in its environment. This is particularly important when it comes to vulnerable road users (VRUs), such as pedestrians and bicyclists. These actors need to be handled with special care due to an increased risk of injury, as well as the fact that their behavior is less predictable than that of motorized actors. To address this issue, in this paper we present a deep learning-based method for predicting VRU movement, where we rasterize high-definition maps and the actor's surroundings into a bird's-eye view image used as input to deep convolutional networks. In addition, we propose a fast architecture suitable for real-time inference, and present a detailed ablation study of various rasterization choices. The results strongly indicate the benefits of the proposed approach for motion prediction of VRUs, both in terms of accuracy and latency.

* Shortened version accepted at the workshop on 'Machine Learning for Intelligent Transportation Systems' at Conference on Neural Information Processing Systems (MLITS), Montreal, Canada, 2018 
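
The rasterization step can be sketched as painting map geometry and the actor's recent positions into colour channels of a BEV image; the resolution, channel assignment, and the single map layer below are illustrative assumptions, not the paper's rasterization scheme.

    import numpy as np

    def rasterize(actor_history_xy, crosswalk_polylines, size=300, cell_m=0.2):
        """Returns an HxWx3 uint8 image centred on the actor's latest position."""
        img = np.zeros((size, size, 3), dtype=np.uint8)
        cx, cy = actor_history_xy[-1]

        def to_px(x, y):
            col = int((x - cx) / cell_m + size / 2)
            row = int(size / 2 - (y - cy) / cell_m)
            return row, col

        for poly in crosswalk_polylines:               # map layer in the green channel
            for x, y in poly:
                r, c = to_px(x, y)
                if 0 <= r < size and 0 <= c < size:
                    img[r, c, 1] = 255
        for i, (x, y) in enumerate(actor_history_xy):  # fading actor history in the red channel
            r, c = to_px(x, y)
            if 0 <= r < size and 0 <= c < size:
                img[r, c, 0] = 120 + 135 * i // max(1, len(actor_history_xy) - 1)
        return img

    img = rasterize([(0, 0), (1, 0), (2, 0)], [[(5, -2), (5, 2)]])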

Multimodal Trajectory Predictions for Autonomous Driving using Deep Convolutional Networks

Mar 01, 2019
Henggang Cui, Vladan Radosavljevic, Fang-Chieh Chou, Tsung-Han Lin, Thi Nguyen, Tzu-Kuo Huang, Jeff Schneider, Nemanja Djuric


Autonomous driving presents one of the largest problems that the robotics and artificial intelligence communities are facing at the moment, both in terms of difficulty and potential societal impact. Self-driving vehicles (SDVs) are expected to prevent road accidents and save millions of lives while improving the livelihood and life quality of many more. However, despite large interest and a number of industry players working in the autonomous domain, there still remains more to be done in order to develop a system capable of operating at a level comparable to the best human drivers. One reason for this is the high uncertainty of traffic behavior and the large number of situations that an SDV may encounter on the roads, making it very difficult to create a fully generalizable system. To ensure safe and efficient operations, an autonomous vehicle is required to account for this uncertainty and to anticipate a multitude of possible behaviors of traffic actors in its surroundings. We address this critical problem and present a method to predict multiple possible trajectories of actors while also estimating their probabilities. The method encodes each actor's surrounding context into a raster image, used as input by deep convolutional networks to automatically derive relevant features for the task. Following extensive offline evaluation and comparison to state-of-the-art baselines, the method was successfully tested on SDVs in closed-course tests.

* Accepted for publication at IEEE International Conference on Robotics and Automation (ICRA) 2019 
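
One common way to train such a multimodal head, sketched here as an assumption rather than the paper's exact loss, is a winner-takes-all scheme: regress only the predicted mode closest to the ground truth and push the mode probabilities toward that winning mode.

    import torch
    import torch.nn.functional as F

    def multimodal_loss(pred_trajs, mode_logits, gt_traj):
        """pred_trajs: (M, T, 2); mode_logits: (M,); gt_traj: (T, 2)."""
        # Average displacement of every mode from the ground truth trajectory.
        dists = (pred_trajs - gt_traj.unsqueeze(0)).norm(dim=-1).mean(dim=-1)  # (M,)
        best = dists.argmin()
        reg_loss = F.smooth_l1_loss(pred_trajs[best], gt_traj)                 # regress the winner
        cls_loss = F.cross_entropy(mode_logits.unsqueeze(0), best.unsqueeze(0))  # classify the winner
        return reg_loss + cls_loss

    pred = torch.randn(3, 30, 2, requires_grad=True)   # 3 modes, 30 waypoints each
    logits = torch.randn(3, requires_grad=True)
    gt = torch.randn(30, 2)
    loss = multimodal_loss(pred, logits, gt)
    loss.backward()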