Hongbin Xu

CostFormer: Cost Transformer for Cost Aggregation in Multi-view Stereo

May 17, 2023
Weitao Chen, Hongbin Xu, Zhipeng Zhou, Yang Liu, Baigui Sun, Wenxiong Kang, Xuansong Xie

The core of Multi-view Stereo (MVS) is the matching process between reference and source pixels. Cost aggregation plays a significant role in this process, and previous methods handle it mainly with CNNs. This may inherit the natural limitation of CNNs, which fail to discriminate repetitive or incorrect matches due to their limited local receptive fields. To handle this issue, we aim to involve Transformers in cost aggregation. However, another problem then arises: the quadratically growing computational complexity of Transformers can cause memory overflow and inference latency. In this paper, we overcome these limits with an efficient Transformer-based cost aggregation network, namely CostFormer. The Residual Depth-Aware Cost Transformer (RDACT) is proposed to aggregate long-range features on the cost volume via self-attention mechanisms along the depth and spatial dimensions. Furthermore, the Residual Regression Transformer (RRT) is proposed to enhance spatial attention. The proposed method is a universal plug-in that improves learning-based MVS methods.
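
As a point of reference for the depth-and-spatial attention idea, here is a minimal PyTorch sketch of self-attention applied along the depth dimension of a cost volume. It is an illustrative toy, not the RDACT module itself; the class name, head count, and shapes are assumptions.

```python
# Illustrative sketch only: depth-wise self-attention over a cost volume
# of shape [B, C, D, H, W]; not the authors' RDACT implementation.
import torch
import torch.nn as nn

class DepthSelfAttention(nn.Module):
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, cost: torch.Tensor) -> torch.Tensor:
        # cost: [B, C, D, H, W] -> one sequence of length D per pixel
        b, c, d, h, w = cost.shape
        tokens = cost.permute(0, 3, 4, 2, 1).reshape(b * h * w, d, c)
        attended, _ = self.attn(tokens, tokens, tokens)
        tokens = self.norm(tokens + attended)               # residual connection
        return tokens.reshape(b, h, w, d, c).permute(0, 4, 3, 1, 2)

cost = torch.randn(1, 8, 32, 16, 20)                        # toy cost volume
out = DepthSelfAttention(8)(cost)                           # same shape as input
```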

* Accepted by IJCAI-23 

Unsupervised Semantic Segmentation of 3D Point Clouds via Cross-modal Distillation and Super-Voxel Clustering

Apr 18, 2023
Zisheng Chen, Hongbin Xu

Semantic segmentation of point clouds usually requires exhaustive human annotation effort, so learning from unlabeled or weaker forms of annotation has attracted wide attention. In this paper, we make the first attempt at fully unsupervised semantic segmentation of point clouds, which aims to delineate semantically meaningful objects without any form of annotation. Previous unsupervised pipelines for 2D images fail on point clouds due to: 1) Clustering Ambiguity caused by the limited magnitude of data and imbalanced class distribution; 2) Irregularity Ambiguity caused by the irregular sparsity of point clouds. Therefore, we propose a novel framework, PointDC, comprised of two steps that handle these problems respectively: Cross-Modal Distillation (CMD) and Super-Voxel Clustering (SVC). In the first stage, CMD back-projects multi-view visual features into 3D space and aggregates them into a unified point feature to distill the training of the point representation. In the second stage, SVC aggregates the point features into super-voxels, which are then fed to an iterative clustering process to excavate semantic classes. PointDC yields a significant improvement over prior state-of-the-art unsupervised methods on both the ScanNet-v2 (+18.4 mIoU) and S3DIS (+11.5 mIoU) semantic segmentation benchmarks.
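
To make the super-voxel step concrete, below is a small PyTorch sketch of pooling point features to super-voxels and broadcasting the means back to points. It is an assumption-laden illustration of the general idea, not the SVC implementation; names and shapes are hypothetical.

```python
# Rough sketch of super-voxel mean pooling (not the paper's code).
import torch

def supervoxel_pool(point_feats: torch.Tensor, sv_ids: torch.Tensor) -> torch.Tensor:
    # point_feats: [N, C] per-point features; sv_ids: [N] super-voxel index per point
    num_sv = int(sv_ids.max().item()) + 1
    sums = torch.zeros(num_sv, point_feats.shape[1])
    counts = torch.zeros(num_sv, 1)
    sums.index_add_(0, sv_ids, point_feats)
    counts.index_add_(0, sv_ids, torch.ones(len(sv_ids), 1))
    sv_feats = sums / counts.clamp(min=1)      # mean feature per super-voxel
    return sv_feats[sv_ids]                    # broadcast back to member points

feats = torch.randn(1000, 64)
ids = torch.randint(0, 50, (1000,))
pooled = supervoxel_pool(feats, ids)           # [1000, 64], constant within each super-voxel
```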

A Simple Attempt for 3D Occupancy Estimation in Autonomous Driving

Apr 04, 2023
Wanshui Gan, Ningkai Mo, Hongbin Xu, Naoto Yokoya

The task of estimating 3D occupancy from surrounding-view images is an exciting development in the field of autonomous driving, following the success of Bird's Eye View (BEV) perception. This task provides crucial 3D attributes of the driving environment, enhancing the overall understanding and perception of the surrounding space. However, there is still a lack of a baseline that defines the task in terms of network design, optimization, and evaluation. In this work, we present a simple attempt at 3D occupancy estimation: a CNN-based framework designed to reveal several key factors for the task. In addition, we explore the relationship between 3D occupancy estimation and related tasks such as monocular depth estimation, stereo matching, and BEV perception (3D object detection and map segmentation), which could advance the study of 3D occupancy estimation. For evaluation, we propose a simple sampling strategy to define the occupancy metric, which is flexible for current public datasets. Moreover, we establish a new benchmark in terms of the depth estimation metric, comparing our proposed method with monocular depth estimation methods on the DDAD and nuScenes datasets. The relevant code will be available at https://github.com/GANWANSHUI/SimpleOccupancy
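
As a hedged illustration of how a depth-style metric could be read out of an occupancy grid, the sketch below ray-marches a predicted grid and returns the distance of the first occupied sample along each ray. The sampling scheme, threshold, and argument names are assumptions, not the metric defined in the paper.

```python
# Illustrative only: first-hit depth from a predicted occupancy grid.
import torch

def occupancy_to_depth(occ, origins, dirs, voxel_size, grid_origin,
                       near=0.5, far=50.0, steps=256, thr=0.5):
    # occ: [D, H, W] occupancy probabilities
    # origins, dirs: [R, 3] ray origins and unit directions in the grid's frame
    # grid_origin: [3] world coordinate of voxel (0, 0, 0); voxel_size: scalar
    ts = torch.linspace(near, far, steps)                              # [S] sample distances
    pts = origins[:, None, :] + dirs[:, None, :] * ts[None, :, None]   # [R, S, 3] sample points
    idx = ((pts - grid_origin) / voxel_size).long()                    # voxel index per sample
    D, H, W = occ.shape
    inside = ((idx >= 0) & (idx < torch.tensor([D, H, W]))).all(-1)    # samples inside the grid
    idx[..., 0].clamp_(0, D - 1)
    idx[..., 1].clamp_(0, H - 1)
    idx[..., 2].clamp_(0, W - 1)
    prob = occ[idx[..., 0], idx[..., 1], idx[..., 2]] * inside.float() # [R, S] occupancy along rays
    hit = prob > thr
    first = torch.where(hit.any(1), hit.float().argmax(1), torch.tensor(steps - 1))
    return ts[first]                                                   # [R] depth of first occupied sample
```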

Artificial Intelligence System for Detection and Screening of Cardiac Abnormalities using Electrocardiogram Images

Feb 10, 2023
Deyun Zhang, Shijia Geng, Yang Zhou, Weilun Xu, Guodong Wei, Kai Wang, Jie Yu, Qiang Zhu, Yongkui Li, Yonghong Zhao, Xingyue Chen, Rui Zhang, Zhaoji Fu, Rongbo Zhou, Yanqi E, Sumei Fan, Qinghao Zhao, Chuandong Cheng, Nan Peng, Liang Zhang, Linlin Zheng, Jianjun Chu, Hongbin Xu, Chen Tan, Jian Liu, Huayue Tao, Tong Liu, Kangyin Chen, Chenyang Jiang, Xingpeng Liu, Shenda Hong

Artificial intelligence (AI) systems have achieved expert-level performance in electrocardiogram (ECG) signal analysis. However, in underdeveloped countries or regions where the healthcare information system is imperfect, only paper ECGs can be provided. Analysis of real-world ECG images (photos or scans of paper ECGs) remains challenging due to complex environments and interference. In this study, we present an AI system developed to detect and screen cardiac abnormalities (CAs) from real-world ECG images. The system was evaluated on a large dataset of 52,357 patients from multiple regions and populations across the world. On the detection task, the AI system obtained areas under the receiver operating characteristic curve (AUC) of 0.996 (hold-out test), 0.994 (external test 1), 0.984 (external test 2), and 0.979 (external test 3). Meanwhile, the detection results of the AI system showed a strong correlation with the diagnoses of cardiologists (cardiologist 1: R=0.794, p<1e-3; cardiologist 2: R=0.812, p<1e-3). On the screening task, the AI system achieved AUCs of 0.894 (hold-out test) and 0.850 (external test). The screening performance of the AI system was better than that of the cardiologists (AI system 0.846 vs. cardiologist 1 0.520 vs. cardiologist 2 0.480). Our study demonstrates the feasibility of an accurate, objective, easy-to-use, fast, and low-cost AI system for CA detection and screening. The system has the potential to be used by healthcare professionals, caregivers, and general users to assess CAs based on real-world ECG images.

* 47 pages, 29 figures 

Semi-supervised Deep Multi-view Stereo

Jul 24, 2022
Hongbin Xu, Zhipeng Zhou, Weitao Cheng, Baigui Sun, Hao Li, Wenxiong Kang

Significant progress has been witnessed in learning-based Multi-view Stereo (MVS) under both supervised and unsupervised settings. To combine their respective merits in accuracy and completeness while reducing the demand for expensive labeled data, this paper explores a novel semi-supervised setting of learning-based MVS in which only a tiny part of the MVS data is attached with dense depth ground truth. However, due to the huge variation of scenarios and flexible settings of views, the semi-supervised MVS problem (Semi-MVS) may break the basic assumption of classic semi-supervised learning, namely that unlabeled data and labeled data share the same label space and data distribution. To handle these issues, we propose a novel semi-supervised MVS framework, namely SE-MVS. For the simple case in which the basic assumption holds for MVS data, consistency regularization encourages the model predictions to be consistent between the original sample and a randomly augmented sample via constraints on KL divergence. For the more troublesome case in which the basic assumption is violated in MVS data, we propose a novel style consistency loss to alleviate the negative effect caused by the distribution gap. The visual style of an unlabeled sample is transferred to a labeled sample to shrink the gap, and the model prediction of the generated sample is further supervised with the label of the original labeled sample. Experimental results on the DTU, BlendedMVS, GTA-SFM, and Tanks & Temples datasets show the superior performance of the proposed method. With the same backbone network settings, our proposed SE-MVS outperforms its fully supervised and unsupervised baselines.
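
A minimal sketch of the kind of KL-based consistency term described above, assuming the network outputs per-pixel logits over depth hypotheses; this is an assumed form for illustration, not the official SE-MVS loss.

```python
# Assumed form of a consistency-regularization term between original and
# augmented predictions; not the SE-MVS implementation.
import torch
import torch.nn.functional as F

def consistency_loss(logits_orig: torch.Tensor, logits_aug: torch.Tensor) -> torch.Tensor:
    # logits_*: [B, D, H, W] unnormalized scores over D depth hypotheses per pixel
    p = F.softmax(logits_orig, dim=1).detach()        # treat the original prediction as the target
    log_q = F.log_softmax(logits_aug, dim=1)          # prediction on the augmented sample
    return F.kl_div(log_q, p, reduction="batchmean")  # KL(p || q), averaged over the batch

loss = consistency_loss(torch.randn(2, 48, 32, 40), torch.randn(2, 48, 32, 40))
```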

* Draft version. Still in submission 

V4D: Voxel for 4D Novel View Synthesis

May 28, 2022
Wanshui Gan, Hongbin Xu, Yi Huang, Shifeng Chen, Naoto Yokoya

Neural radiance fields have made a remarkable breakthrough in novel view synthesis for static 3D scenes. However, for the 4D case (e.g., dynamic scenes), the performance of existing methods is still limited by the capacity of the neural network, typically a multilayer perceptron (MLP). In this paper, we present a method, V4D for short, that models the 4D neural radiance field with 3D voxels, where the 3D voxel takes two formats. The first regularly models the bounded 3D space and then uses the sampled local 3D features together with the time index to model the density and texture fields. The second is a look-up table (LUT) format for pixel-level refinement, where the pseudo-surface produced by volume rendering is used as guidance to learn a 2D pixel-level refinement mapping. The proposed LUT-based refinement module achieves a performance gain at little computational cost and can serve as a plug-and-play module in the novel view synthesis task. Moreover, we propose a more effective conditional positional encoding for 4D data that achieves a performance gain with negligible computational burden. Extensive experiments demonstrate that the proposed method achieves state-of-the-art performance by a large margin. Finally, the proposed V4D is also computationally friendly in both training and testing, being about 2 times faster in training and 10 times faster in inference than the state-of-the-art method.
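
To illustrate the first voxel format, here is a small PyTorch sketch that samples local features from a learnable 3D grid and combines them with the time index before an MLP predicts density. Grid resolution, feature width, and names are assumptions; this is not the V4D code.

```python
# Hedged sketch: voxel-feature sampling plus time index for a density field.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VoxelDensityField(nn.Module):
    def __init__(self, feat_dim: int = 16, res: int = 64):
        super().__init__()
        self.grid = nn.Parameter(torch.zeros(1, feat_dim, res, res, res))  # learnable 3D voxel features
        self.mlp = nn.Sequential(nn.Linear(feat_dim + 1, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, xyz: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # xyz: [N, 3] points in [-1, 1]^3, t: [N, 1] normalized time index
        grid_coords = xyz.view(1, -1, 1, 1, 3)                              # grid_sample layout
        feats = F.grid_sample(self.grid, grid_coords, align_corners=True)   # [1, C, N, 1, 1]
        feats = feats.view(self.grid.shape[1], -1).t()                      # [N, C]
        return self.mlp(torch.cat([feats, t], dim=-1))                      # per-point density

field = VoxelDensityField()
sigma = field(torch.rand(512, 3) * 2 - 1, torch.rand(512, 1))
```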

CP-Net: Contour-Perturbed Reconstruction Network for Self-Supervised Point Cloud Learning

Jan 20, 2022
Mingye Xu, Zhipeng Zhou, Hongbin Xu, Yali Wang, Yu Qiao

Self-supervised learning has not been fully explored for point cloud analysis. Current frameworks are mainly based on point cloud reconstruction. Given only 3D coordinates, such approaches tend to learn local geometric structures and contours while failing to understand high-level semantic content. Consequently, they achieve unsatisfactory performance in downstream tasks such as classification and segmentation. To fill this gap, we propose a generic Contour-Perturbed Reconstruction Network (CP-Net), which effectively guides self-supervised reconstruction to learn semantic content in the point cloud and thus promotes the discriminative power of point cloud representations. First, we introduce a concise contour-perturbed augmentation module for point cloud reconstruction. Guided by geometry disentangling, we divide the point cloud into contour and content components; we then perturb the contour components and preserve the content components. As a result, the self-supervisor can effectively focus on semantic content by reconstructing the original point cloud from the perturbed one. Second, we use this perturbed reconstruction as an assistant branch to guide the learning of the basic reconstruction branch via a distinct dual-branch consistency loss. In this way, our CP-Net not only captures structural contours but also learns semantic content for discriminative downstream tasks. Finally, we perform extensive experiments on a number of point cloud benchmarks. Part segmentation results demonstrate that our CP-Net (81.5% mIoU) outperforms previous self-supervised models and narrows the gap with fully supervised methods. For classification, we obtain results competitive with fully supervised methods on ModelNet40 (92.5% accuracy) and ScanObjectNN (87.9% accuracy). The code and models will be released afterwards.
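
Below is a rough sketch of a contour-perturbed augmentation in the spirit described above. The contour score used here (offset from the local k-NN centroid) is only an illustrative proxy for the paper's geometry disentangling, and all parameters are assumptions.

```python
# Illustrative proxy for contour perturbation; not the CP-Net module.
import torch

def contour_perturb(points: torch.Tensor, k: int = 16, ratio: float = 0.3, sigma: float = 0.02) -> torch.Tensor:
    # points: [N, 3]
    dists = torch.cdist(points, points)                       # [N, N] pairwise distances
    knn_idx = dists.topk(k, largest=False).indices            # k nearest neighbors per point
    centroids = points[knn_idx].mean(dim=1)                   # [N, 3] local centroids
    score = (points - centroids).norm(dim=-1)                 # large offset ~ contour-like point
    n_contour = int(ratio * len(points))
    contour_idx = score.topk(n_contour).indices
    out = points.clone()
    out[contour_idx] += sigma * torch.randn(n_contour, 3)     # perturb contour, preserve content
    return out

perturbed = contour_perturb(torch.randn(1024, 3))
```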

Digging into Uncertainty in Self-supervised Multi-view Stereo

Sep 08, 2021
Hongbin Xu, Zhipeng Zhou, Yali Wang, Wenxiong Kang, Baigui Sun, Hao Li, Yu Qiao

Self-supervised Multi-view Stereo (MVS) with a pretext task of image reconstruction has achieved significant progress recently. However, previous methods are built upon intuitions, lacking comprehensive explanations of why the pretext task is effective in self-supervised MVS. To this end, we propose to estimate epistemic uncertainty in self-supervised MVS, accounting for what the model ignores. Specifically, the limitations can be categorized into two types: ambiguous supervision in the foreground and invalid supervision in the background. To address these issues, we propose a novel Uncertainty-reduction Multi-view Stereo (U-MVS) framework for self-supervised learning. To alleviate ambiguous supervision in the foreground, we involve an extra correspondence prior with a flow-depth consistency loss: the dense 2D correspondence of optical flow is used to regularize the 3D stereo correspondence in MVS. To handle invalid supervision in the background, we use Monte-Carlo Dropout to acquire an uncertainty map and filter out unreliable supervision signals in invalid regions. Extensive experiments on the DTU and Tanks & Temples benchmarks show that our U-MVS framework achieves the best performance among unsupervised MVS methods, with competitive performance compared with its supervised counterparts.
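
A minimal sketch of the Monte-Carlo Dropout step, assuming a hypothetical depth network whose dropout layers stay active at inference: multiple stochastic passes give a per-pixel variance that can mask unreliable self-supervision. This is an assumed setup, not the official U-MVS code.

```python
# Assumed setup for MC-Dropout uncertainty; depth_net is hypothetical.
import torch
import torch.nn as nn

def mc_dropout_uncertainty(net: nn.Module, image: torch.Tensor, passes: int = 8):
    net.train()                                    # keep dropout layers active at inference
    with torch.no_grad():
        preds = torch.stack([net(image) for _ in range(passes)])   # [T, B, 1, H, W] depth maps
    return preds.mean(0), preds.var(0)             # mean depth and per-pixel uncertainty

# Hypothetical usage: down-weight the self-supervised loss where variance is high.
# mean_d, unc = mc_dropout_uncertainty(depth_net, ref_image)
# mask = (unc < unc.flatten(1).quantile(0.8, dim=1).view(-1, 1, 1, 1)).float()
# loss = (mask * photometric_error).sum() / mask.sum().clamp(min=1)
```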

* Accepted by ICCV-21 as a poster presentation 