Zizhang Wu

ADU-Depth: Attention-based Distillation with Uncertainty Modeling for Depth Estimation

Sep 26, 2023
Zizhang Wu, Zhuozheng Li, Zhi-Gang Fan, Yunzhe Wu, Xiaoquan Wang, Rui Tang, Jian Pu

Monocular depth estimation is challenging due to its inherent ambiguity and ill-posed nature, yet it is important to many applications. While recent works achieve limited accuracy by designing increasingly complicated networks to extract features with limited spatial geometric cues from a single RGB image, we instead introduce spatial cues by training a teacher network that takes left-right image pairs as inputs and transferring the learned 3D geometry-aware knowledge to a monocular student network. Specifically, we present a novel knowledge distillation framework, named ADU-Depth, in which the well-trained teacher guides the learning of the student, improving depth estimation with the help of extra spatial scene information. To enable domain adaptation and ensure effective and smooth knowledge transfer from teacher to student, we apply both attention-adapted feature distillation and focal-depth-adapted response distillation during training. In addition, we explicitly model the uncertainty of depth estimation to guide distillation in both the feature space and the result space, which better conveys 3D-aware knowledge to the monocular branch and strengthens learning on hard-to-predict image regions. Our extensive experiments on the real-world depth estimation datasets KITTI and DrivingStereo demonstrate the effectiveness of the proposed method, which ranked 1st on the challenging KITTI online benchmark.
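
The training objective sketched in the abstract combines feature-level and response-level distillation, with the student's depth uncertainty steering how strongly each region is supervised. The snippet below is a minimal PyTorch sketch of such an uncertainty-weighted distillation loss; the tensor names, the aleatoric-style weighting, and the omission of the attention and focal-depth adaptation steps are simplifying assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_feat, teacher_feat, student_depth, teacher_depth, log_var):
    """Minimal uncertainty-weighted distillation loss (illustrative sketch).

    student_feat / teacher_feat: (B, C, H, W) intermediate features.
    student_depth / teacher_depth: (B, 1, H, W) predicted depth maps.
    log_var: (B, 1, H, W) student-predicted log variance (depth uncertainty).
    """
    # Feature-space distillation: align student features with the stereo teacher.
    feat_loss = F.mse_loss(student_feat, teacher_feat)

    # Response-space distillation, down-weighted where the student is uncertain
    # (aleatoric-style weighting: exp(-log_var) * error + log_var).
    depth_err = torch.abs(student_depth - teacher_depth)
    resp_loss = (torch.exp(-log_var) * depth_err + log_var).mean()

    return feat_loss + resp_loss

# Toy usage with random tensors standing in for real network outputs.
B, C, H, W = 2, 64, 24, 80
loss = distillation_loss(torch.randn(B, C, H, W), torch.randn(B, C, H, W),
                         torch.rand(B, 1, H, W), torch.rand(B, 1, H, W),
                         torch.zeros(B, 1, H, W))
print(loss.item())
```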

* accepted by CoRL 2023 

LineMarkNet: Line Landmark Detection for Valet Parking

Sep 25, 2023
Zizhang Wu, Yuanzhu Gan, Tianhao Xu, Rui Tang, Jian Pu

We aim for accurate and efficient line landmark detection for valet parking, a long-standing yet unsolved problem in autonomous driving. To this end, we present a deep line landmark detection system whose modules are carefully designed to be lightweight. Specifically, we first empirically define four general line landmarks, three physical lines and one novel mental line, which prove effective for valet parking. We then develop a deep network (LineMarkNet) to detect these line landmarks from surround-view cameras: using pre-calibrated homographies, we fuse context from the four separate cameras into a unified bird's-eye-view (BEV) space, combining surround-view features with BEV features. A multi-task decoder then detects the line landmarks, using a center-based strategy for the object detection task and our graph transformer, which augments the vision transformer with hierarchical graph reasoning, for the semantic segmentation task. Finally, we parameterize the detected line landmarks (e.g., in intercept-slope form), and a novel filtering backend incorporates temporal and multi-view consistency to achieve smooth and stable detection. We also annotate a large-scale dataset to validate our method. Experimental results show that our framework outperforms several line detection methods and that the multi-task network runs in real time for line landmark detection on the Qualcomm 820A platform while maintaining superior accuracy.
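
Before any fusion can happen, each of the four surround-view cameras has to be warped into the shared bird's-eye-view grid using its pre-calibrated homography. The sketch below illustrates that warping-and-blending step in isolation, assuming the 3x3 image-to-BEV homographies are already known; the actual LineMarkNet fuses learned features rather than raw pixels.

```python
import numpy as np
import cv2

def fuse_to_bev(images, homographies, bev_size=(400, 400)):
    """Warp each surround-view image into a shared BEV grid and average overlaps.

    images: list of HxWx3 uint8 arrays (front/rear/left/right cameras).
    homographies: list of 3x3 image-to-BEV homography matrices (pre-calibrated).
    """
    bev_acc = np.zeros((*bev_size, 3), dtype=np.float32)
    weight = np.zeros(bev_size, dtype=np.float32)
    for img, H in zip(images, homographies):
        warped = cv2.warpPerspective(img, H, bev_size[::-1])  # dsize is (width, height)
        mask = cv2.warpPerspective(np.ones(img.shape[:2], np.uint8), H, bev_size[::-1])
        bev_acc += warped.astype(np.float32)
        weight += mask.astype(np.float32)
    return (bev_acc / np.maximum(weight, 1)[..., None]).astype(np.uint8)

# Toy usage: identity homographies on dummy camera frames.
imgs = [np.full((300, 400, 3), 80 * i, np.uint8) for i in range(1, 5)]
Hs = [np.eye(3, dtype=np.float64) for _ in range(4)]
print(fuse_to_bev(imgs, Hs).shape)
```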

* 29 pages, 12 figures 

PPD: A New Valet Parking Pedestrian Fisheye Dataset for Autonomous Driving

Sep 25, 2023
Zizhang Wu, Xinyuan Chen, Fan Song, Yuanzhu Gan, Tianhao Xu, Jian Pu, Rui Tang

Pedestrian detection in valet parking scenarios is fundamental for autonomous driving. However, pedestrians appear in a variety of ways and postures under imperfect ambient conditions, which can adversely affect detection performance. Furthermore, models trained on public datasets that include pedestrians generally provide suboptimal results in these valet parking scenarios. In this paper, we present the Parking Pedestrian Dataset (PPD), a large-scale fisheye dataset to support research on real-world pedestrians, especially those with occlusions and diverse postures. PPD consists of several distinctive types of pedestrians captured with fisheye cameras. Additionally, we present a pedestrian detection baseline on the PPD dataset and introduce two data augmentation techniques that improve the baseline by enhancing the diversity of the original dataset. Extensive experiments validate the effectiveness of our novel data augmentation approaches over the baselines and the dataset's strong generalizability.

* 9 pages, 6 figures 

Graph-Segmenter: Graph Transformer with Boundary-aware Attention for Semantic Segmentation

Aug 15, 2023
Zizhang Wu, Yuanzhu Gan, Tianhao Xu, Fan Wang

Transformer-based semantic segmentation approaches, which divide the image into different regions by sliding windows and model the relations inside each window, have achieved outstanding success. However, relation modeling between windows was not the primary emphasis of previous work and has not been fully exploited. To address this issue, we propose Graph-Segmenter, consisting of a Graph Transformer and a Boundary-aware Attention module: an effective network that simultaneously models the deeper relations between windows at a global level and between pixels inside each window at a local level, and performs substantial low-cost boundary adjustment. Specifically, we treat every window and every pixel inside a window as nodes to construct graphs for both views and devise the Graph Transformer. The introduced Boundary-aware Attention module refines the edge information of target objects by modeling the relations among pixels along an object's edge. Extensive experiments on three widely used semantic segmentation datasets (Cityscapes, ADE-20k and PASCAL Context) demonstrate that our proposed network, a Graph Transformer with Boundary-aware Attention, achieves state-of-the-art segmentation performance.
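
One way to read the window-level half of this design: pool each sliding window into a single node embedding and let those nodes attend to one another, so that relations between windows are modeled globally. The snippet below is a minimal sketch of that idea; the window size, mean pooling, and plain multi-head attention are placeholders rather than the paper's Graph Transformer.

```python
import torch
import torch.nn as nn

def window_relation(feat, win=8, heads=4):
    """Model relations between windows: pool each window to a node, self-attend.

    feat: (B, C, H, W) feature map; H and W must be divisible by `win`.
    Returns per-window node embeddings of shape (B, num_windows, C).
    """
    B, C, H, W = feat.shape
    # Partition into non-overlapping win x win windows and mean-pool each one.
    nodes = feat.reshape(B, C, H // win, win, W // win, win).mean(dim=(3, 5))
    nodes = nodes.flatten(2).transpose(1, 2)          # (B, num_windows, C)
    attn = nn.MultiheadAttention(C, heads, batch_first=True)
    out, _ = attn(nodes, nodes, nodes)                # windows attend to windows
    return out + nodes                                # residual connection

print(window_relation(torch.randn(1, 64, 32, 32)).shape)  # (1, 16, 64)
```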

* Front. Comput. Sci. 2023  

ADD: An Automatic Desensitization Fisheye Dataset for Autonomous Driving

Aug 15, 2023
Zizhang Wu, Chenxin Yuan, Hongyang Wei, Fan Song, Tianhao Xu

Autonomous driving systems capture many images for analyzing the surrounding environment. However, private information in these images, such as pedestrian faces or vehicle license plates, receives little protection, which has become a significant issue. In this paper, in response to data security laws and regulations and building on the large field of view (FoV) of fisheye cameras, we construct the first Autopilot Desensitization Dataset, called ADD, and formulate the first deep-learning-based image desensitization framework, to promote the study of image desensitization in autonomous driving scenarios. The compiled dataset consists of 650K images containing face and vehicle license plate information captured by surround-view fisheye cameras, covering various autonomous driving scenarios with diverse facial characteristics and license plate colors. We then propose an efficient multi-task desensitization network called DesCenterNet as a benchmark on the ADD dataset, which performs face and license plate detection and desensitization. Based on ADD, we further provide an evaluation criterion for desensitization performance, and extensive comparison experiments verify the effectiveness and superiority of our method for image desensitization.
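
Whatever detector produces the boxes, the desensitization step itself amounts to masking or blurring the detected face and license-plate regions. A minimal post-processing sketch follows; the box format and Gaussian-blur strength are assumptions, and the detector itself (DesCenterNet in the paper) is not reproduced here.

```python
import numpy as np
import cv2

def desensitize(image, boxes, ksize=31):
    """Blur detected sensitive regions (faces, license plates).

    image: HxWx3 uint8 array; boxes: list of (x1, y1, x2, y2) pixel boxes
    produced by a detector.
    """
    out = image.copy()
    for x1, y1, x2, y2 in boxes:
        roi = out[y1:y2, x1:x2]
        if roi.size:
            out[y1:y2, x1:x2] = cv2.GaussianBlur(roi, (ksize, ksize), 0)
    return out

# Toy usage with a dummy frame and one fake "plate" box.
frame = np.random.randint(0, 255, (480, 640, 3), np.uint8)
print(desensitize(frame, [(100, 200, 220, 240)]).shape)
```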

* Engineering Applications of Artificial Intelligence 2023  

Learning Monocular Depth in Dynamic Environment via Context-aware Temporal Attention

May 12, 2023
Zizhang Wu, Zhuozheng Li, Zhi-Gang Fan, Yunzhe Wu, Yuanzhu Gan, Jian Pu, Xianzhi Li

Monocular depth estimation has recently shown encouraging prospects, especially for autonomous driving. To tackle the ill-posed problem of 3D geometric reasoning from 2D monocular images, multi-frame monocular methods leverage perspective correlation information from sequential temporal frames. However, moving objects such as cars and trains usually violate the static scene assumption, leading to feature inconsistency and misaligned cost values that mislead the optimization. In this work, we present CTA-Depth, a Context-aware Temporal Attention guided network for multi-frame monocular depth estimation. Specifically, we first apply a multi-level attention enhancement module to integrate multi-level image features and obtain an initial depth and pose estimation. The proposed CTA-Refiner then alternately optimizes the depth and pose. During refinement, context-aware temporal attention (CTA) captures global temporal-context correlations to maintain feature consistency and estimation integrity for moving objects. In particular, we propose a long-range geometry embedding (LGE) module to produce a long-range temporal geometry prior. Our approach achieves significant improvements over state-of-the-art approaches on three benchmark datasets.
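
The refinement loop rests on temporal attention: tokens from the current frame query tokens from previous frames so that moving objects remain consistent over time. The following is a generic temporal cross-attention sketch under that reading, with flattened per-frame feature tokens; it is not the exact CTA module.

```python
import torch
import torch.nn as nn

class TemporalCrossAttention(nn.Module):
    """Current-frame tokens attend to tokens from previous frames."""

    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, cur_tokens, prev_tokens):
        # cur_tokens: (B, N, C) current frame; prev_tokens: (B, T*N, C) past frames.
        ctx, _ = self.attn(query=cur_tokens, key=prev_tokens, value=prev_tokens)
        return self.norm(cur_tokens + ctx)  # residual keeps current-frame content

cur = torch.randn(2, 100, 64)        # e.g. a 10x10 feature map, flattened
prev = torch.randn(2, 300, 64)       # three past frames of the same resolution
print(TemporalCrossAttention()(cur, prev).shape)  # torch.Size([2, 100, 64])
```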

* accepted by IJCAI 2023; 9 pages, 5 figures 

MonoPGC: Monocular 3D Object Detection with Pixel Geometry Contexts

Feb 21, 2023
Zizhang Wu, Yuanzhu Gan, Lei Wang, Guilian Chen, Jian Pu

Monocular 3D object detection is an economical but challenging task in autonomous driving. Center-based monocular methods have recently developed rapidly with a good trade-off between speed and accuracy, and they usually estimate the object center's depth from 2D features. However, visual semantic features lacking sufficient pixel-level geometry information may limit such cues for spatial 3D detection. To alleviate this, we propose MonoPGC, a novel end-to-end monocular 3D object detection framework with rich pixel geometry contexts. We introduce pixel depth estimation as an auxiliary task and design a depth cross-attention pyramid module (DCPM) to inject local and global depth geometry knowledge into visual features. In addition, we present the depth-space-aware transformer (DSAT) to integrate 3D spatial position and depth-aware features efficiently. Finally, we design a novel depth-gradient positional encoding (DGPE) to bring more distinct pixel geometry contexts into the transformer for better object detection. Extensive experiments demonstrate that our method achieves state-of-the-art performance on the KITTI dataset.
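
A simple way to picture how depth geometry can be injected into visual features is a sinusoidal encoding of the predicted per-pixel depth that is added to the image features before the transformer. The sketch below shows that simplified variant only; the paper's DGPE additionally encodes depth gradients, which is not reproduced here.

```python
import torch

def depth_positional_encoding(depth, num_feats=64, max_depth=80.0):
    """Sinusoidal encoding of per-pixel depth, to be added to visual features.

    depth: (B, 1, H, W) predicted depth in meters.
    Returns: (B, num_feats, H, W) depth-aware positional encoding.
    """
    d = depth.clamp(0, max_depth) / max_depth                      # normalize to [0, 1]
    i = torch.arange(num_feats // 2, device=depth.device).float()
    freqs = 1.0 / (10000.0 ** (2 * i / num_feats))                 # transformer-style frequencies
    angles = d * freqs.view(1, -1, 1, 1)                           # (B, num_feats//2, H, W)
    return torch.cat([angles.sin(), angles.cos()], dim=1)

enc = depth_positional_encoding(torch.rand(2, 1, 24, 80) * 80.0)
print(enc.shape)  # torch.Size([2, 64, 24, 80])
```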

* Accepted by ICRA 2023 

MVFusion: Multi-View 3D Object Detection with Semantic-aligned Radar and Camera Fusion

Feb 21, 2023
Zizhang Wu, Guilian Chen, Yuanzhu Gan, Lei Wang, Jian Pu

Multi-view radar-camera fused 3D object detection provides a longer detection range and more useful features for autonomous driving, especially under adverse weather. Current radar-camera fusion methods offer various designs for combining radar information with camera data. However, these approaches usually rely on straightforward concatenation of multi-modal features, which ignores semantic alignment with the radar features and sufficient correlation across modalities. In this paper, we present MVFusion, a novel multi-view radar-camera fusion method that produces semantically aligned radar features and enhances cross-modal information interaction. To this end, we inject semantic alignment into the radar features via a semantic-aligned radar encoder (SARE) to produce image-guided radar features. We then propose a radar-guided fusion transformer (RGFT) that fuses the radar and image features to strengthen their correlation at a global scope via cross-attention. Extensive experiments show that MVFusion achieves state-of-the-art performance (51.7% NDS and 45.3% mAP) on the nuScenes dataset. We shall release our code and trained networks upon publication.
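
The RGFT is described as fusing radar and image features at a global scope via cross-attention. Below is a minimal cross-attention fusion sketch under that description, with image tokens querying radar tokens; the token shapes and the single attention layer are assumptions, not the released architecture.

```python
import torch
import torch.nn as nn

class RadarCameraFusion(nn.Module):
    """Fuse camera and radar tokens with cross-attention (illustrative)."""

    def __init__(self, dim=128, heads=8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, img_tokens, radar_tokens):
        # img_tokens: (B, N_img, C); radar_tokens: (B, N_radar, C).
        fused, _ = self.cross_attn(query=img_tokens, key=radar_tokens, value=radar_tokens)
        return self.norm(img_tokens + fused)   # residual keeps image semantics

img = torch.randn(2, 1200, 128)    # e.g. multi-view image feature tokens
radar = torch.randn(2, 256, 128)   # projected radar point tokens
print(RadarCameraFusion()(img, radar).shape)  # torch.Size([2, 1200, 128])
```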

* Accepted by ICRA 2023 