
Weihao Xuan


3D Semantic Segmentation in the Wild: Learning Generalized Models for Adverse-Condition Point Clouds

Apr 03, 2023
Aoran Xiao, Jiaxing Huang, Weihao Xuan, Ruijie Ren, Kangcheng Liu, Dayan Guan, Abdulmotaleb El Saddik, Shijian Lu, Eric Xing


Robust point cloud parsing under all-weather conditions is crucial to level-5 autonomy in autonomous driving. However, learning a universal 3D semantic segmentation (3DSS) model has been largely neglected, as most existing benchmarks are dominated by point clouds captured under normal weather. We introduce SemanticSTF, an adverse-weather point cloud dataset that provides dense point-level annotations and enables the study of 3DSS under various adverse weather conditions. We study all-weather 3DSS modeling under two setups: 1) domain adaptive 3DSS, which adapts from normal-weather data to adverse-weather data; 2) domain generalizable 3DSS, which learns all-weather 3DSS models from normal-weather data alone. Our studies reveal the challenges existing 3DSS methods face on adverse-weather data, highlighting the great value of SemanticSTF in steering future work along this meaningful research direction. In addition, we design a domain randomization technique that alternately randomizes the geometry styles of point clouds and aggregates their embeddings, ultimately leading to a generalizable model that effectively improves 3DSS under various adverse weather conditions. The SemanticSTF dataset and related code are available at \url{https://github.com/xiaoaoran/SemanticSTF}.

* CVPR2023 
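Below is a minimal, hypothetical sketch of the alternating geometry-style randomization idea described in the abstract: point coordinates are jittered and randomly dropped to mimic adverse-weather "styles", and the embeddings of two randomized views are aggregated by averaging. The toy encoder, function names, and hyperparameters are illustrative assumptions, not the authors' released code.

import torch

def randomize_geometry(points, sigma=0.02, drop_ratio=0.1):
    # Perturb geometry to mimic an adverse-weather "style": jitter
    # coordinates and randomly drop points (e.g., fog/snow clutter).
    # sigma and drop_ratio are illustrative values.
    jittered = points + sigma * torch.randn_like(points)
    keep = torch.rand(points.shape[0]) > drop_ratio
    return jittered[keep]

class TinyPointEncoder(torch.nn.Module):
    # PointNet-style toy encoder: per-point MLP followed by max pooling.
    def __init__(self, dim=64):
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(3, dim), torch.nn.ReLU(),
            torch.nn.Linear(dim, dim))

    def forward(self, pts):                       # pts: (N, 3)
        return self.mlp(pts).max(dim=0).values    # pooled feature: (dim,)

encoder = TinyPointEncoder()
cloud = torch.randn(1024, 3)                      # stand-in LiDAR scan
views = [randomize_geometry(cloud) for _ in range(2)]
# Aggregate the embeddings of the randomized views (here: simple mean).
embedding = torch.stack([encoder(v) for v in views]).mean(dim=0)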

Multi-agent Interactive Prediction under Challenging Driving Scenarios

Sep 24, 2019
Weihao Xuan, Ruijie Ren, Yeping Hu


In order to drive safely on the road, an autonomous vehicle is expected to predict future outcomes of its surrounding environment and react properly. Many researchers have focused on solving behavioral prediction problems for autonomous vehicles, but very few consider multi-agent prediction under challenging driving scenarios such as urban environments. In this paper, we propose a prediction method that handles various complicated driving scenarios, taking heterogeneous road entities, signal lights, and static map information into account. Moreover, the proposed multi-agent interactive prediction (MAIP) system is capable of simultaneously predicting any number of road entities while considering their mutual interactions. A case study of a simulated challenging urban intersection scenario is provided to demonstrate the performance and capability of the proposed prediction system.

* submitted to ICRA 2020 
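As a rough illustration of interaction-aware prediction over a variable number of agents (a generic sketch, not the paper's MAIP architecture): each agent's past trajectory is encoded with a GRU, agents exchange information through self-attention over the agent set, and a linear head decodes future waypoints for every agent at once. All layer sizes and names are assumptions.

import torch

class InteractivePredictor(torch.nn.Module):
    # Encode each agent's history, mix information across the
    # (variable-size) agent set via self-attention, decode future (x, y).
    def __init__(self, horizon=12, dim=64):
        super().__init__()
        self.enc = torch.nn.GRU(input_size=2, hidden_size=dim, batch_first=True)
        self.attn = torch.nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.dec = torch.nn.Linear(dim, horizon * 2)
        self.horizon = horizon

    def forward(self, histories):                     # (A, T_past, 2)
        _, h = self.enc(histories)                    # h: (1, A, dim)
        tokens = h[-1].unsqueeze(0)                   # (1, A, dim)
        mixed, _ = self.attn(tokens, tokens, tokens)  # agents attend to agents
        out = self.dec(mixed.squeeze(0))              # (A, horizon * 2)
        return out.view(-1, self.horizon, 2)          # (A, horizon, 2)

model = InteractivePredictor()
past = torch.randn(5, 10, 2)       # 5 agents, 10 past (x, y) steps each
future = model(past)               # (5, 12, 2) predicted waypoints

Because the attention operates over however many agent tokens are present, the same forward pass handles any number of road entities, which is the property the abstract emphasizes.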