Dengxin Dai

LiDAR Meta Depth Completion

Aug 16, 2023
Wolfgang Boettcher, Lukas Hoyer, Ozan Unal, Ke Li, Dengxin Dai

Depth estimation is one of the essential tasks to be addressed when creating mobile autonomous systems. While monocular depth estimation methods have improved in recent times, depth completion provides more accurate and reliable depth maps by additionally using sparse depth information from other sensors such as LiDAR. However, current methods are specifically trained for a single LiDAR sensor. As the scanning pattern differs between sensors, every new sensor would require re-training a specialized depth completion model, which is computationally inefficient and inflexible. Therefore, we propose to dynamically adapt the depth completion model to the sensor in use, enabling LiDAR-adaptive depth completion. Specifically, we propose a meta depth completion network that uses patterns derived from the input data to learn a task network, which alters the weights of the main depth completion network to solve a given depth completion task effectively. The method demonstrates a strong capability to work on multiple LiDAR scanning patterns and can also generalize to scanning patterns that are unseen during training. While using a single model, our method yields significantly better results than a non-adaptive baseline trained on different LiDAR patterns, and it outperforms LiDAR-specific expert models in very sparse cases. These advantages allow flexible deployment of a single depth completion model on different sensors, which could also prove valuable for processing the input of nascent LiDAR technologies with adaptive instead of fixed scanning patterns.
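The core mechanism, a task network that adapts the main completion network to the sensor's scanning pattern, can be illustrated with a minimal sketch. The FiLM-style per-channel modulation, the pattern descriptor, and all module names below are illustrative assumptions, not the paper's implementation:

```python
# Minimal sketch (assumption): a small "task network" summarizes the sparse
# depth pattern and predicts per-channel scale/shift parameters that modulate
# the main completion network's features, mimicking sensor-adaptive weights.
import torch
import torch.nn as nn

class TaskNetwork(nn.Module):
    """Maps a sparsity/pattern descriptor to per-channel modulation params."""
    def __init__(self, pattern_dim: int, feat_channels: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(pattern_dim, 64), nn.ReLU(),
            nn.Linear(64, 2 * feat_channels),  # scale and shift
        )

    def forward(self, pattern_descriptor: torch.Tensor):
        scale, shift = self.mlp(pattern_descriptor).chunk(2, dim=-1)
        return scale, shift

class AdaptiveBlock(nn.Module):
    """One block of the main network whose features are modulated per sensor."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x, scale, shift):
        x = self.conv(x)
        # Broadcast the per-channel modulation over the spatial dimensions.
        return x * (1 + scale[:, :, None, None]) + shift[:, :, None, None]

# Toy usage: the descriptor could be, e.g., per-row density statistics of the scan.
task_net = TaskNetwork(pattern_dim=16, feat_channels=32)
block = AdaptiveBlock(channels=32)
descriptor = torch.randn(2, 16)          # hypothetical pattern descriptor
features = torch.randn(2, 32, 64, 208)   # intermediate depth features
scale, shift = task_net(descriptor)
out = block(features, scale, shift)
print(out.shape)  # torch.Size([2, 32, 64, 208])
```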

* Accepted at IROS 2023; v2 has an updated author list and a fixed figure caption 

MTR++: Multi-Agent Motion Prediction with Symmetric Scene Modeling and Guided Intention Querying

Jun 30, 2023
Shaoshuai Shi, Li Jiang, Dengxin Dai, Bernt Schiele

Motion prediction is crucial for autonomous driving systems to understand complex driving scenarios and make informed decisions. However, this task is challenging due to the diverse behaviors of traffic participants and complex environmental contexts. In this paper, we propose Motion TRansformer (MTR) frameworks to address these challenges. The initial MTR framework utilizes a transformer encoder-decoder structure with learnable intention queries, enabling efficient and accurate prediction of future trajectories. By customizing intention queries for distinct motion modalities, MTR improves multimodal motion prediction while reducing reliance on dense goal candidates. The framework comprises two essential processes: global intention localization, which identifies the agent's intent to enhance overall efficiency, and local movement refinement, which adaptively refines predicted trajectories for improved accuracy. Moreover, we introduce an advanced MTR++ framework, extending the capability of MTR to simultaneously predict multimodal motion for multiple agents. MTR++ incorporates symmetric context modeling and mutually guided intention querying modules to facilitate future behavior interaction among multiple agents, resulting in scene-compliant future trajectories. Extensive experimental results demonstrate that the MTR framework achieves state-of-the-art performance on highly competitive motion prediction benchmarks, while the MTR++ framework surpasses its precursor, exhibiting enhanced performance and efficiency in predicting accurate multimodal future trajectories for multiple agents.
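A minimal sketch of the learnable intention-query idea, with each query attending to encoded scene context and producing one trajectory mode; the dimensions, the plain PyTorch TransformerDecoder, and the simple output heads are placeholders rather than the actual MTR/MTR++ architecture:

```python
# Minimal sketch (assumption): learnable "intention queries", one per motion
# mode, attend to encoded scene context and each produce one future trajectory.
import torch
import torch.nn as nn

class IntentionQueryDecoder(nn.Module):
    def __init__(self, d_model=128, num_modes=6, horizon=80):
        super().__init__()
        self.horizon = horizon
        self.intention_queries = nn.Parameter(torch.randn(num_modes, d_model))
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.traj_head = nn.Linear(d_model, horizon * 2)  # (x, y) per step
        self.score_head = nn.Linear(d_model, 1)

    def forward(self, scene_context):          # (B, N_tokens, d_model)
        B = scene_context.size(0)
        q = self.intention_queries.unsqueeze(0).expand(B, -1, -1)
        h = self.decoder(q, scene_context)     # queries attend to the context
        trajs = self.traj_head(h).view(B, -1, self.horizon, 2)  # (B, modes, T, 2)
        scores = self.score_head(h).squeeze(-1)                 # (B, modes)
        return trajs, scores

ctx = torch.randn(4, 256, 128)   # hypothetical encoded agent + map tokens
trajs, scores = IntentionQueryDecoder()(ctx)
print(trajs.shape, scores.shape)  # torch.Size([4, 6, 80, 2]) torch.Size([4, 6])
```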

* The winning approaches for the Waymo Motion Prediction Challenge in 2022 and 2023 

HGFormer: Hierarchical Grouping Transformer for Domain Generalized Semantic Segmentation

May 22, 2023
Jian Ding, Nan Xue, Gui-Song Xia, Bernt Schiele, Dengxin Dai

Current semantic segmentation models have achieved great success under the independent and identically distributed (i.i.d.) condition. However, in real-world applications, test data might come from a different domain than the training data. It is therefore important to improve model robustness against domain differences. This work studies semantic segmentation under the domain generalization setting, where a model is trained only on the source domain and tested on unseen target domains. Existing works show that Vision Transformers are more robust than CNNs and attribute this to the visual grouping property of self-attention. In this work, we propose a novel hierarchical grouping transformer (HGFormer) that explicitly groups pixels to form part-level masks and then whole-level masks. The masks at the two scales aim to segment out both the parts and the whole of each class. HGFormer combines the mask classification results at both scales for class label prediction. We assemble multiple interesting cross-domain settings from seven public semantic segmentation datasets. Experiments show that HGFormer yields more robust semantic segmentation results than per-pixel classification methods and flat grouping transformers, and outperforms previous methods significantly. Code will be available at https://github.com/dingjiansw101/HGFormer.
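A minimal sketch of how mask-classification outputs at a part level and a whole level could be fused into per-pixel class scores; the sigmoid/softmax fusion and the equal-weight averaging are assumptions, not HGFormer's exact combination rule:

```python
# Minimal sketch (assumption): each grouping level predicts masks plus a class
# distribution per mask; both levels are converted to per-pixel class scores
# and averaged before the final label prediction.
import torch

def masks_to_pixel_logits(mask_logits, class_logits):
    """mask_logits: (B, Q, H, W); class_logits: (B, Q, C) -> (B, C, H, W)."""
    mask_prob = mask_logits.sigmoid()
    class_prob = class_logits.softmax(dim=-1)
    return torch.einsum("bqhw,bqc->bchw", mask_prob, class_prob)

B, H, W, C = 2, 64, 64, 19
part = masks_to_pixel_logits(torch.randn(B, 100, H, W), torch.randn(B, 100, C))
whole = masks_to_pixel_logits(torch.randn(B, 20, H, W), torch.randn(B, 20, C))
fused = 0.5 * part + 0.5 * whole        # combine part- and whole-level scores
prediction = fused.argmax(dim=1)        # (B, H, W) semantic labels
print(prediction.shape)
```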

* Accepted by CVPR 2023 

FreePoint: Unsupervised Point Cloud Instance Segmentation

May 11, 2023
Zhikai Zhang, Jian Ding, Li Jiang, Dengxin Dai, Gui-Song Xia

Instance segmentation of point clouds is a crucial task in the 3D field, with numerous applications that involve localizing and segmenting objects in a scene. However, achieving satisfactory results requires a large number of manual annotations, which is a time-consuming and expensive process. To alleviate the dependency on annotations, we propose a method, called FreePoint, for the underexplored problem of unsupervised class-agnostic instance segmentation on point clouds. In detail, we represent point features by combining coordinates, colors, normals, and self-supervised deep features. Based on these point features, we apply a multicut algorithm to segment point clouds into coarse instance masks, which serve as pseudo labels for training a point cloud instance segmentation model. To alleviate the inaccuracy of the coarse masks during training, we propose a weakly-supervised training strategy and a corresponding loss. Our work can also serve as an unsupervised pre-training pretext for supervised semantic instance segmentation with limited annotations. For class-agnostic instance segmentation on point clouds, FreePoint largely closes the gap to its fully-supervised counterpart based on the state-of-the-art instance segmentation model Mask3D and even surpasses some previous fully-supervised methods. When serving as a pretext task and fine-tuning on S3DIS, FreePoint outperforms training from scratch by 5.8% AP with only 10% mask annotations.
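A minimal sketch of the pseudo-label generation step, combining coordinates, colors, normals, and deep features into per-point descriptors and partitioning them; DBSCAN is used here only as a hypothetical stand-in for the multicut algorithm the paper relies on:

```python
# Minimal sketch (assumption): per-point features are built by concatenating
# coordinates, colors, normals, and self-supervised deep features; a generic
# clustering step then produces coarse instance pseudo-labels.
import numpy as np
from sklearn.cluster import DBSCAN

def coarse_instance_pseudo_labels(coords, colors, normals, deep_feats,
                                  weights=(1.0, 0.5, 0.5, 1.0)):
    """Each input is (N, d_i); returns an integer pseudo-label per point."""
    parts = [coords * weights[0], colors * weights[1],
             normals * weights[2], deep_feats * weights[3]]
    feats = np.concatenate(parts, axis=1)
    # DBSCAN is only a placeholder for the multicut partitioning in the paper.
    labels = DBSCAN(eps=0.8, min_samples=10).fit_predict(feats)
    return labels  # -1 marks points left unassigned (noise)

N = 5000
labels = coarse_instance_pseudo_labels(
    coords=np.random.rand(N, 3),
    colors=np.random.rand(N, 3),
    normals=np.random.rand(N, 3),
    deep_feats=np.random.rand(N, 16),
)
print(np.unique(labels)[:10])
```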

Self-supervised Pre-training with Masked Shape Prediction for 3D Scene Understanding

May 08, 2023
Li Jiang, Zetong Yang, Shaoshuai Shi, Vladislav Golyanik, Dengxin Dai, Bernt Schiele

Masked signal modeling has greatly advanced self-supervised pre-training for language and 2D images. However, it is still not fully explored in 3D scene understanding. Thus, this paper introduces Masked Shape Prediction (MSP), a new framework for masked signal modeling in 3D scenes. MSP uses the essential 3D semantic cue, i.e., geometric shape, as the prediction target for masked points. A context-enhanced shape target, consisting of an explicit shape context and an implicit deep shape feature, is proposed to facilitate the exploitation of contextual cues in shape prediction. Meanwhile, the pre-training architecture in MSP is carefully designed to alleviate leakage of the masked shape from the point coordinates. Experiments on multiple 3D understanding tasks on both indoor and outdoor datasets demonstrate the effectiveness of MSP in learning good feature representations that consistently boost downstream performance.

* CVPR 2023 
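A minimal sketch of the masked-shape-prediction idea: mask a subset of points, replace their features with a learnable mask token, and regress a geometric target at the masked positions. The k-nearest-neighbor offset target and the tiny MLP encoder are simplifications, not the paper's context-enhanced shape target or architecture:

```python
# Minimal sketch (assumption): masked point features are replaced by a mask
# token and the model predicts local neighbour offsets as a crude geometric
# shape target for the masked points.
import torch
import torch.nn as nn

def knn_offsets(coords, k=8):
    """coords: (N, 3) -> (N, k, 3) offsets to the k nearest neighbours."""
    dist = torch.cdist(coords, coords)                     # (N, N)
    idx = dist.topk(k + 1, largest=False).indices[:, 1:]   # drop self
    return coords[idx] - coords[:, None, :]

class MaskedShapePretrainer(nn.Module):
    def __init__(self, feat_dim=32, k=8, mask_ratio=0.6):
        super().__init__()
        self.mask_token = nn.Parameter(torch.zeros(feat_dim))
        self.encoder = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                                     nn.Linear(64, 64))
        self.shape_head = nn.Linear(64, k * 3)
        self.k, self.mask_ratio = k, mask_ratio

    def forward(self, coords, feats):
        N = feats.size(0)
        masked = torch.rand(N) < self.mask_ratio
        # Replace masked point features with the learnable mask token.
        feats = torch.where(masked[:, None], self.mask_token.expand_as(feats), feats)
        pred = self.shape_head(self.encoder(feats)).view(N, self.k, 3)
        target = knn_offsets(coords, self.k)
        return (pred[masked] - target[masked]).abs().mean()

coords, feats = torch.rand(1024, 3), torch.randn(1024, 32)
print(MaskedShapePretrainer()(coords, feats).item())
```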

EDAPS: Enhanced Domain-Adaptive Panoptic Segmentation

Apr 27, 2023
Suman Saha, Lukas Hoyer, Anton Obukhov, Dengxin Dai, Luc Van Gool

With autonomous industries on the rise, domain adaptation of the visual perception stack is an important research direction due to its promise of cost savings. Much prior work has been dedicated to domain-adaptive semantic segmentation in the synthetic-to-real context. Despite being a crucial output of the perception stack, panoptic segmentation has been largely overlooked by the domain adaptation community. Therefore, we revisit well-performing domain adaptation strategies from other fields, adapt them to panoptic segmentation, and show that they can effectively enhance panoptic domain adaptation. Further, we study panoptic network design and propose a novel architecture (EDAPS) designed explicitly for domain-adaptive panoptic segmentation. It uses a shared, domain-robust transformer encoder to facilitate the joint adaptation of semantic and instance features, but task-specific decoders tailored to the requirements of domain-adaptive semantic and instance segmentation. As a result, the performance gap seen on challenging panoptic benchmarks is substantially narrowed. EDAPS significantly improves the state of the art for panoptic segmentation UDA by a large margin of 25% on SYNTHIA-to-Cityscapes and even 72% on the more challenging SYNTHIA-to-Mapillary Vistas. The implementation is available at https://github.com/susaha/edaps.
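A minimal sketch of the shared-encoder, task-specific-decoder layout; the convolutional stand-ins below only illustrate the wiring, not EDAPS' transformer encoder or its actual decoder designs:

```python
# Minimal sketch (assumption): one shared, domain-robust encoder feeds two
# task-specific decoders, one for semantic and one for instance segmentation.
import torch
import torch.nn as nn

class SharedEncoderPanopticNet(nn.Module):
    def __init__(self, num_classes=19, num_things=8):
        super().__init__()
        self.encoder = nn.Sequential(              # stand-in for a transformer
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.semantic_decoder = nn.Conv2d(128, num_classes, 1)
        # The instance branch here predicts a centre heatmap and (dx, dy) offsets.
        self.instance_decoder = nn.Conv2d(128, num_things + 2, 1)

    def forward(self, image):
        feats = self.encoder(image)                # shared, jointly adapted features
        return self.semantic_decoder(feats), self.instance_decoder(feats)

sem, inst = SharedEncoderPanopticNet()(torch.randn(1, 3, 256, 512))
print(sem.shape, inst.shape)  # (1, 19, 64, 128) (1, 10, 64, 128)
```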

Domain Adaptive and Generalizable Network Architectures and Training Strategies for Semantic Image Segmentation

Apr 26, 2023
Lukas Hoyer, Dengxin Dai, Luc Van Gool

Unsupervised domain adaptation (UDA) and domain generalization (DG) enable machine learning models trained on a source domain to perform well on unlabeled or even unseen target domains. As previous UDA&DG semantic segmentation methods are mostly based on outdated networks, we benchmark more recent architectures, reveal the potential of Transformers, and design the DAFormer network tailored for UDA&DG. It is enabled by three training strategies to avoid overfitting to the source domain: while (1) Rare Class Sampling mitigates the bias toward common source-domain classes, (2) a Thing-Class ImageNet Feature Distance and (3) a learning rate warmup promote feature transfer from ImageNet pretraining. As UDA&DG are usually GPU-memory intensive, most previous methods downscale or crop images. However, low-resolution predictions often fail to preserve fine details, while models trained with cropped images fall short in capturing long-range, domain-robust context information. Therefore, we propose HRDA, a multi-resolution framework for UDA&DG, which combines the strengths of small high-resolution crops to preserve fine segmentation details and large low-resolution crops to capture long-range context dependencies with a learned scale attention. DAFormer and HRDA significantly improve the state of the art in UDA&DG by more than 10 mIoU on 5 different benchmarks. The implementation is available at https://github.com/lhoyer/HRDA.
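Rare Class Sampling can be sketched as sampling source images with a probability that grows as the pixel frequency of a class shrinks; the exponential re-weighting follows the spirit of the paper, but the temperature value and the per-class image lookup below are illustrative assumptions:

```python
# Minimal sketch (assumption): sample a class with probability proportional to
# exp((1 - frequency) / T), then sample an image containing that class, so
# rare source classes are seen more often during training.
import numpy as np

def rare_class_sampling_probs(class_freq, temperature=0.01):
    """class_freq: (C,) pixel frequency per class -> (C,) sampling probs."""
    p = np.exp((1.0 - class_freq) / temperature)
    return p / p.sum()

def sample_image(images_per_class, class_freq, rng=np.random.default_rng()):
    """First sample a class (rare classes preferred), then an image containing it."""
    probs = rare_class_sampling_probs(class_freq)
    c = rng.choice(len(class_freq), p=probs)
    return c, rng.choice(images_per_class[c])

# Toy example: class 2 is very rare and is therefore sampled most often.
class_freq = np.array([0.5, 0.3, 0.01, 0.19])
images_per_class = {0: [0, 1, 2], 1: [3, 4], 2: [5], 3: [6, 7]}
counts = np.zeros(4)
for _ in range(1000):
    c, _img = sample_image(images_per_class, class_freq)
    counts[c] += 1
print(counts)  # class 2 dominates the draws
```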

Federated Incremental Semantic Segmentation

Apr 10, 2023
Jiahua Dong, Duzhen Zhang, Yang Cong, Wei Cong, Henghui Ding, Dengxin Dai

Federated learning-based semantic segmentation (FSS) has drawn widespread attention via decentralized training on local clients. However, most FSS models assume that categories are fixed in advance and thus suffer heavily from forgetting of old categories in practical applications, where local clients receive new categories incrementally while having no memory storage to access old classes. Moreover, new clients collecting novel classes may join the global training of FSS, which further exacerbates catastrophic forgetting. To surmount the above challenges, we propose a Forgetting-Balanced Learning (FBL) model to address heterogeneous forgetting of old classes from both intra-client and inter-client aspects. Specifically, under the guidance of pseudo labels generated via adaptive class-balanced pseudo labeling, we develop a forgetting-balanced semantic compensation loss and a forgetting-balanced relation consistency loss to rectify intra-client heterogeneous forgetting of old categories with background shift. These losses perform balanced gradient propagation and relation consistency distillation within local clients. Moreover, to tackle heterogeneous forgetting from the inter-client aspect, we propose a task transition monitor. It can identify new classes under privacy protection and store the latest old global model for relation distillation. Qualitative experiments reveal a large improvement of our model over comparison methods. The code is available at https://github.com/JiahuaDong/FISS.
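A minimal sketch of class-balanced pseudo labeling: keep a pseudo label only where the confidence exceeds a per-class threshold so that rarer, lower-confidence classes are not suppressed. The per-class quantile rule is an illustrative stand-in for the paper's adaptive scheme:

```python
# Minimal sketch (assumption): per-class confidence thresholds, derived from
# the confidence distribution of each predicted class, decide which pixels
# keep their pseudo label and which are ignored.
import torch

def class_balanced_pseudo_labels(logits, quantile=0.5, ignore_index=255):
    """logits: (B, C, H, W) -> pseudo labels (B, H, W); low-confidence pixels
    are set to ignore_index."""
    probs = logits.softmax(dim=1)
    conf, labels = probs.max(dim=1)                     # (B, H, W)
    pseudo = labels.clone()
    for c in labels.unique():
        mask = labels == c
        thresh = conf[mask].quantile(quantile)          # per-class threshold
        pseudo[mask & (conf < thresh)] = ignore_index
    return pseudo

pseudo = class_balanced_pseudo_labels(torch.randn(2, 19, 32, 32))
print((pseudo == 255).float().mean())  # roughly half the pixels are ignored
```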

* Accepted to CVPR 2023 

TrafficBots: Towards World Models for Autonomous Driving Simulation and Motion Prediction

Mar 07, 2023
Zhejun Zhang, Alexander Liniger, Dengxin Dai, Fisher Yu, Luc Van Gool

Data-driven simulation has become a favorable way to train and test autonomous driving algorithms. The idea of replacing the actual environment with a learned simulator has also been explored in model-based reinforcement learning in the context of world models. In this work, we show that data-driven traffic simulation can be formulated as a world model. We present TrafficBots, a multi-agent policy built upon motion prediction and end-to-end driving, and based on TrafficBots we obtain a world model tailored for the planning module of autonomous vehicles. Existing data-driven traffic simulators lack configurability and scalability. To generate configurable behaviors, we introduce for each agent a destination as navigational information and a time-invariant latent personality that specifies its behavioral style. To improve scalability, we present a new scheme of positional encoding for angles, allowing all agents to share the same vectorized context, and we use an architecture based on dot-product attention. As a result, we can simulate all traffic participants seen in dense urban scenarios. Experiments on the Waymo open motion dataset show that TrafficBots can simulate realistic multi-agent behaviors and achieve good performance on the motion prediction task.

* Accepted at ICRA 2023. The repository is available at https://github.com/SysCV/TrafficBots 
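The angle positional encoding can be sketched as embedding each heading with sines and cosines of integer multiples of the angle, which is periodic by construction and avoids the discontinuity at +/- pi; the number of frequencies is an arbitrary choice here, not taken from the paper:

```python
# Minimal sketch (assumption): encode a heading angle with sin/cos of integer
# multiples of the angle so that angles differing by 2*pi map to the same
# embedding, unlike the raw angle value.
import torch

def angle_positional_encoding(theta: torch.Tensor, num_freqs: int = 4):
    """theta: (...,) angles in radians -> (..., 2 * num_freqs) embedding."""
    k = torch.arange(1, num_freqs + 1, dtype=theta.dtype, device=theta.device)
    angles = theta.unsqueeze(-1) * k            # (..., num_freqs)
    return torch.cat([angles.sin(), angles.cos()], dim=-1)

headings = torch.tensor([0.0, torch.pi / 2, torch.pi, -torch.pi])
enc = angle_positional_encoding(headings)
print(enc.shape)                                 # torch.Size([4, 8])
# pi and -pi map to (nearly) the same embedding, unlike the raw angle.
print(torch.allclose(enc[2], enc[3], atol=1e-6))
```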