Qianni Zhang

Hierarchical Point-based Active Learning for Semi-supervised Point Cloud Semantic Segmentation

Aug 22, 2023
Zongyi Xu, Bo Yuan, Shanshan Zhao, Qianni Zhang, Xinbo Gao

Impressive performance on point cloud semantic segmentation has been achieved by fully-supervised methods with large amounts of labelled data. As acquiring large-scale point cloud data with point-wise labels is labour-intensive, many attempts have been made to explore learning 3D point cloud segmentation with limited annotations. Active learning is one of the effective strategies to achieve this purpose but remains under-explored. The most recent methods of this kind measure the uncertainty of each pre-divided region for manual labelling, but they suffer from redundant information and require additional effort for region division. This paper addresses this issue by developing a hierarchical point-based active learning strategy. Specifically, we measure the uncertainty for each point with a hierarchical minimum margin uncertainty module that considers contextual information at multiple levels. Then, a feature-distance suppression strategy is designed to select important and representative points for manual labelling. Besides, to better exploit the unlabelled data, we build a semi-supervised segmentation framework based on our active strategy. Extensive experiments on the S3DIS and ScanNetV2 datasets demonstrate that the proposed framework achieves 96.5% and 100% of the fully-supervised baseline's performance with only 0.07% and 0.1% of the training data, respectively, outperforming the state-of-the-art weakly-supervised and active learning methods. The code will be available at https://github.com/SmiletoE/HPAL.

* International Conference on Computer Vision (ICCV) 2023 
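
As a rough illustration of the idea, the snippet below sketches point-wise minimum margin uncertainty, its aggregation over coarser context levels, and a greedy feature-distance suppression step. The function names, weighting scheme, and tensor shapes are illustrative assumptions, not the paper's exact implementation.

import torch

def minimum_margin_uncertainty(logits):
    # Per-point margin uncertainty: 1 - (p_top1 - p_top2); logits: (N, C) -> (N,)
    probs = torch.softmax(logits, dim=-1)
    top2 = probs.topk(2, dim=-1).values
    return 1.0 - (top2[:, 0] - top2[:, 1])

def hierarchical_uncertainty(point_logits, level_logits, level_assign, weights):
    # Combine point-level uncertainty with coarser-level uncertainties mapped back
    # to points via per-point assignment indices (all shapes are illustrative).
    u = weights[0] * minimum_margin_uncertainty(point_logits)
    for w, logits, assign in zip(weights[1:], level_logits, level_assign):
        u = u + w * minimum_margin_uncertainty(logits)[assign]
    return u

def select_with_feature_distance_suppression(uncertainty, features, budget, radius):
    # Greedily pick the most uncertain points, skipping any point that lies within
    # `radius` of an already-selected point in feature space (redundancy suppression).
    picked = []
    for idx in torch.argsort(uncertainty, descending=True):
        if len(picked) == budget:
            break
        if picked:
            dists = torch.norm(features[idx] - features[torch.tensor(picked)], dim=-1)
            if dists.min() < radius:
                continue
        picked.append(int(idx))
    return picked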

Joint Dense-Point Representation for Contour-Aware Graph Segmentation

Jun 21, 2023
Kit Mills Bransby, Greg Slabaugh, Christos Bourantas, Qianni Zhang

We present a novel methodology that combines graph and dense segmentation techniques by jointly learning both point and pixel contour representations, thereby leveraging the benefits of each approach. This addresses deficiencies in typical graph segmentation methods where misaligned objectives restrict the network from learning discriminative vertex and contour features. Our joint learning strategy allows for rich and diverse semantic features to be encoded, while alleviating common contour stability issues in dense-based approaches, where pixel-level objectives can lead to anatomically implausible topologies. In addition, we identify scenarios where correct predictions that fall on the contour boundary are penalised and address this with a novel hybrid contour distance loss. Our approach is validated on several Chest X-ray datasets, demonstrating clear improvements in segmentation stability and accuracy against a variety of dense- and point-based methods. Our source code is freely available at: www.github.com/kitbransby/Joint_Graph_Segmentation

* MICCAI 2023 pre-print 
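
The hybrid contour distance idea could look roughly like the sketch below, which samples a ground-truth contour distance map at predicted vertex locations and only penalises vertices beyond a tolerance band, so correct on-boundary predictions are not punished. The tensor layout, the tolerance parameter, and the use of grid_sample are assumptions for illustration, not the paper's formulation.

import torch.nn.functional as F

def hybrid_contour_distance_loss(pred_vertices, gt_distance_map, tolerance=1.0):
    # pred_vertices: (B, N, 2) in normalised [-1, 1] image coordinates.
    # gt_distance_map: (B, 1, H, W) distance transform of the ground-truth contour.
    grid = pred_vertices.unsqueeze(2)                              # (B, N, 1, 2)
    d = F.grid_sample(gt_distance_map, grid, align_corners=True)   # (B, 1, N, 1)
    d = d.squeeze(1).squeeze(-1)                                   # (B, N)
    # Vertices within `tolerance` pixels of the true contour incur no penalty.
    return F.relu(d - tolerance).mean()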

3D Coronary Vessel Reconstruction from Bi-Plane Angiography using Graph Convolutional Networks

Feb 28, 2023
Kit Mills Bransby, Vincenzo Tufaro, Murat Cap, Greg Slabaugh, Christos Bourantas, Qianni Zhang

X-ray coronary angiography (XCA) is used to assess coronary artery disease and provides valuable information on lesion morphology and severity. However, XCA images are 2D and therefore limit visualisation of the vessel. 3D reconstruction of coronary vessels is possible using multiple views; however, lumen border detection in current software is performed manually, resulting in limited reproducibility and slow processing times. In this study, we propose 3DAngioNet, a novel deep learning (DL) system that enables rapid 3D vessel mesh reconstruction using 2D XCA images from two views. Our approach learns a coarse mesh template using an EfficientB3-UNet segmentation network and projection geometries, and deforms it using a graph convolutional network. 3DAngioNet outperforms similar automated reconstruction methods, offers improved efficiency, and enables modelling of bifurcated vessels. The approach was validated using state-of-the-art software verified by skilled cardiologists.

* Pre-print for IEEE International Symposium on Biomedical Imaging 2023 (ISBI) 
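
For intuition, here is a minimal, hypothetical sketch of GCN-based mesh deformation in the spirit of the described approach: a simple graph convolution over the template's vertex graph predicts per-vertex 3D offsets from image-derived vertex features. The layer sizes, adjacency normalisation, and two-layer design are assumptions, not the 3DAngioNet architecture.

import torch
import torch.nn as nn

class GraphConv(nn.Module):
    # Each vertex mixes its own features with the mean of its neighbours'
    # features (adjacency assumed row-normalised).
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.self_fc = nn.Linear(in_dim, out_dim)
        self.neigh_fc = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (V, in_dim) vertex features, adj: (V, V) row-normalised adjacency
        return torch.relu(self.self_fc(x) + self.neigh_fc(adj @ x))

class MeshDeformer(nn.Module):
    # Illustrative deformation head: predicts a per-vertex 3D offset applied
    # to a coarse template mesh, as in GCN-based mesh refinement.
    def __init__(self, feat_dim, hidden=128):
        super().__init__()
        self.gc1 = GraphConv(feat_dim + 3, hidden)
        self.gc2 = GraphConv(hidden, hidden)
        self.offset = nn.Linear(hidden, 3)

    def forward(self, template_xyz, vertex_feats, adj):
        h = torch.cat([template_xyz, vertex_feats], dim=-1)
        h = self.gc2(self.gc1(h, adj), adj)
        return template_xyz + self.offset(h)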

CTooth+: A Large-scale Dental Cone Beam Computed Tomography Dataset and Benchmark for Tooth Volume Segmentation

Aug 02, 2022
Weiwei Cui, Yaqi Wang, Yilong Li, Dan Song, Xingyong Zuo, Jiaojiao Wang, Yifan Zhang, Huiyu Zhou, Bung san Chong, Liaoyuan Zeng, Qianni Zhang

Accurate tooth volume segmentation is a prerequisite for computer-aided dental analysis. Deep learning-based tooth segmentation methods have achieved satisfying performance but require a large quantity of tooth data with ground truth. The publicly available dental data is limited, meaning that existing methods cannot be reproduced, evaluated, or applied in clinical practice. In this paper, we establish a 3D dental CBCT dataset, CTooth+, with 22 fully annotated volumes and 146 unlabeled volumes. We further evaluate several state-of-the-art tooth volume segmentation strategies based on fully-supervised learning, semi-supervised learning and active learning, and define the performance principles. This work provides a new benchmark for the tooth volume segmentation task, and the experiments can serve as a baseline for future AI-based dental imaging research and clinical application development.

DU-Net based Unsupervised Contrastive Learning for Cancer Segmentation in Histology Images

Jun 17, 2022
Yilong Li, Yaqi Wang, Huiyu Zhou, Huaqiong Wang, Gangyong Jia, Qianni Zhang

In this paper, we introduce an unsupervised cancer segmentation framework for histology images. The framework involves an effective contrastive learning scheme for extracting distinctive visual representations for segmentation. The encoder is a Deep U-Net (DU-Net) structure that contains an extra fully convolutional layer compared to the standard U-Net. A contrastive learning scheme is developed to address the lack of training sets with high-quality annotations of tumour boundaries. A specific set of data augmentation techniques is employed to improve the discriminability of the colour features learned through contrastive learning. Smoothing and noise elimination are conducted using convolutional Conditional Random Fields. The experiments demonstrate competitive segmentation performance, even better than some popular supervised networks.

* arXiv admin note: text overlap with arXiv:2002.05709 by other authors 
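
Given the noted overlap with arXiv:2002.05709 (SimCLR), the contrastive scheme is presumably close to an NT-Xent objective. The sketch below shows that standard loss for two augmented views of the same batch; the temperature and batch handling are illustrative defaults rather than the paper's settings.

import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    # z1, z2: (B, D) projection-head outputs for two augmented views of the same images.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                 # (2B, D)
    sim = z @ z.t() / temperature                  # (2B, 2B) scaled cosine similarities
    sim.fill_diagonal_(float('-inf'))              # exclude self-pairs
    B = z1.size(0)
    # The positive for sample i is its other view, i.e. i + B (and vice versa).
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(0, B)]).to(z.device)
    return F.cross_entropy(sim, targets)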

CTooth: A Fully Annotated 3D Dataset and Benchmark for Tooth Volume Segmentation on Cone Beam Computed Tomography Images

Jun 17, 2022
Weiwei Cui, Yaqi Wang, Qianni Zhang, Huiyu Zhou, Dan Song, Xingyong Zuo, Gangyong Jia, Liaoyuan Zeng

3D tooth segmentation is a prerequisite for computer-aided dental diagnosis and treatment. However, segmenting all tooth regions manually is subjective and time-consuming. Recently, deep learning-based segmentation methods have produced convincing results and reduced manual annotation efforts, but they require a large quantity of ground truth for training. To our knowledge, little tooth data is available for 3D segmentation studies. In this paper, we establish a fully annotated cone beam computed tomography dataset, CTooth, with a tooth gold standard. This dataset contains 22 volumes (7363 slices) with fine tooth labels annotated by experienced radiographic interpreters. To ensure a relatively even sampling distribution, data variance is included in CTooth, covering missing teeth and dental restorations. Several state-of-the-art segmentation methods are evaluated on this dataset. Afterwards, we further summarise and apply a series of 3D attention-based U-Net variants for segmenting tooth volumes. This work provides a new benchmark for the tooth volume segmentation task. Experimental evidence shows that attention modules in the 3D U-Net structure boost responses in tooth areas and inhibit the influence of background and noise. The best performance, 88.04% Dice and 78.71% IoU, is achieved by the 3D U-Net with the SKNet attention module. The attention-based U-Net framework outperforms other state-of-the-art methods on the CTooth dataset. The codebase and dataset are released.
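
As a hedged illustration of how attention in a 3D U-Net can boost tooth-area responses while suppressing background, the block below is a generic additive attention gate on a skip connection; it is not the SKNet module used in the paper, and the channel sizes are placeholders.

import torch
import torch.nn as nn

class AttentionGate3D(nn.Module):
    # The decoder (gating) signal re-weights encoder features so that responses in
    # relevant regions are emphasised and background/noise is suppressed.
    def __init__(self, enc_ch, gate_ch, inter_ch):
        super().__init__()
        self.theta = nn.Conv3d(enc_ch, inter_ch, kernel_size=1)
        self.phi = nn.Conv3d(gate_ch, inter_ch, kernel_size=1)
        self.psi = nn.Conv3d(inter_ch, 1, kernel_size=1)

    def forward(self, enc_feat, gate_feat):
        # enc_feat: (B, enc_ch, D, H, W); gate_feat: (B, gate_ch, D, H, W), same spatial size
        attn = torch.sigmoid(self.psi(torch.relu(self.theta(enc_feat) + self.phi(gate_feat))))
        return enc_feat * attn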

Complexity Reduction of Learned In-Loop Filtering in Video Coding

Mar 17, 2022
Woody Bayliss, Luka Murn, Ebroul Izquierdo, Qianni Zhang, Marta Mrak

In video coding, in-loop filters are applied to reconstructed video frames to enhance their perceptual quality before the frames are stored for output. Conventional in-loop filters are obtained by hand-crafted methods. Recently, learned filters based on convolutional neural networks that utilize attention mechanisms have been shown to improve upon traditional techniques. However, these solutions are typically significantly more computationally expensive, limiting their potential for practical applications. The proposed method uses a novel combination of sparsity and structured pruning to reduce the complexity of learned in-loop filters. This is done through a three-step training process of magnitude-guided weight pruning, insignificant neuron identification and removal, and fine-tuning. Through initial tests, we find that network parameters can be significantly reduced with minimal impact on network performance.

* 5 pages, 3 figures, 2 tables 
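
A rough sketch of the first two steps, magnitude-guided weight pruning and insignificant-neuron identification, is given below using PyTorch's pruning utilities. The sparsity amount, the per-filter threshold, and restricting the sweep to Conv2d layers are illustrative assumptions, and the fine-tuning step is omitted.

import torch
import torch.nn.utils.prune as prune

def magnitude_prune(model, amount=0.5):
    # Step 1: unstructured magnitude pruning of conv weights (smallest |w| zeroed).
    for module in model.modules():
        if isinstance(module, torch.nn.Conv2d):
            prune.l1_unstructured(module, name="weight", amount=amount)
    return model

def find_insignificant_filters(conv, threshold=1e-3):
    # Step 2: flag output filters whose mean absolute weight falls below a threshold;
    # these are candidates for structured removal before fine-tuning (step 3).
    w = conv.weight.detach().abs().mean(dim=(1, 2, 3))   # per-output-filter magnitude
    return (w < threshold).nonzero(as_tuple=True)[0].tolist()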

Dispensed Transformer Network for Unsupervised Domain Adaptation

Oct 28, 2021
Yunxiang Li, Jingxiong Li, Ruilong Dan, Shuai Wang, Kai Jin, Guodong Zeng, Jun Wang, Xiangji Pan, Qianni Zhang, Huiyu Zhou, Qun Jin, Li Wang, Yaqi Wang

Accurate segmentation is a crucial step in medical image analysis, and applying supervised machine learning to segment organs or lesions has been shown to be effective. However, it is costly to perform the data annotation that provides ground truth labels for training supervised algorithms, and the high variance of data coming from different domains tends to severely degrade system performance on cross-site or cross-modality datasets. To mitigate this problem, a novel unsupervised domain adaptation (UDA) method named dispensed Transformer network (DTNet) is introduced in this paper. Our DTNet contains three modules. First, a dispensed residual transformer block is designed, which realizes global attention via a dispensed interleaving operation and addresses the excessive computational cost and GPU memory usage of the Transformer. Second, a multi-scale consistency regularization is proposed to alleviate the loss of details in the low-resolution output for better feature alignment. Finally, a feature ranking discriminator is introduced to automatically assign different weights to domain-gap features, lessening the feature distribution distance and reducing the performance shift between the two domains. The proposed method is evaluated on a large fluorescein angiography (FA) retinal nonperfusion (RNP) cross-site dataset with 676 images and a widely used cross-modality dataset from the MM-WHS challenge. Extensive results demonstrate that our proposed network achieves the best performance in comparison with several state-of-the-art techniques.

* 10 pages 
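
The multi-scale consistency regularization could plausibly be expressed as below: the full-resolution prediction, downsampled to the auxiliary output's resolution, is encouraged to agree with the low-resolution prediction. The use of softmax probabilities, bilinear downsampling, and an MSE penalty are assumptions for illustration, not the paper's exact term.

import torch.nn.functional as F

def multiscale_consistency_loss(pred_full, pred_low):
    # pred_full: (B, C, H, W) full-resolution logits; pred_low: (B, C, h, w) auxiliary logits.
    p_full = F.softmax(pred_full, dim=1)
    p_full_ds = F.interpolate(p_full, size=pred_low.shape[2:], mode="bilinear",
                              align_corners=False)
    p_low = F.softmax(pred_low, dim=1)
    # Penalise disagreement between the two scales.
    return F.mse_loss(p_full_ds, p_low)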

GT U-Net: A U-Net Like Group Transformer Network for Tooth Root Segmentation

Sep 30, 2021
Yunxiang Li, Shuai Wang, Jun Wang, Guodong Zeng, Wenjun Liu, Qianni Zhang, Qun Jin, Yaqi Wang

To achieve an accurate assessment of root canal therapy, a fundamental step is to perform tooth root segmentation on oral X-ray images, since the position of the tooth root boundary is significant anatomical information in root canal therapy evaluation. However, the fuzzy boundary makes tooth root segmentation very challenging. In this paper, we propose a novel end-to-end U-Net-like Group Transformer Network (GT U-Net) for tooth root segmentation. The proposed network retains the essential structure of U-Net, but each encoder and decoder is replaced by a group Transformer, which significantly reduces the computational cost of traditional Transformer architectures through its grouping and bottleneck structures. In addition, the proposed GT U-Net is composed of a hybrid structure of convolution and Transformer, which makes it independent of pre-trained weights. For optimization, we also propose a shape-sensitive Fourier Descriptor (FD) loss function to make use of shape prior knowledge. Experimental results show that our proposed network achieves state-of-the-art performance on our collected tooth root segmentation dataset and the public retina dataset DRIVE. Code has been released at https://github.com/Kent0n-Li/GT-U-Net.

* 12th International Workshop, MLMI 2021, Held in Conjunction with MICCAI 2021  
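
One plausible reading of the shape-sensitive Fourier Descriptor loss is sketched below: ordered boundary points are treated as a complex sequence, transformed with the FFT, and compared on normalised low-frequency magnitudes that capture global shape. The number of coefficients, the normalisation, and the L1 comparison are assumptions rather than the paper's exact formulation.

import torch
import torch.nn.functional as F

def fourier_descriptor_loss(pred_contour, gt_contour, num_coeffs=16):
    # pred_contour, gt_contour: (N, 2) ordered boundary points (same N assumed).
    def descriptors(c):
        z = torch.complex(c[:, 0], c[:, 1])       # contour as a complex sequence
        mag = torch.fft.fft(z).abs()
        # Normalise by the first non-DC coefficient for rough scale invariance.
        return mag[1:num_coeffs + 1] / (mag[1] + 1e-8)
    return F.l1_loss(descriptors(pred_contour), descriptors(gt_contour))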

Structure-aware scale-adaptive networks for cancer segmentation in whole-slide images

Sep 26, 2021
Yibao Sun, Giussepi Lopez, Yaqi Wang, Xingru Huang, Huiyu Zhou, Qianni Zhang

Cancer segmentation in whole-slide images is a fundamental step for viable tumour burden estimation, which is of great value for cancer assessment. However, factors such as vague boundaries and small regions dissociated from viable tumour areas make it a challenging task. Considering the usefulness of multi-scale features in various vision-related tasks, we present a structure-aware scale-adaptive feature selection method for efficient and accurate cancer segmentation. Based on a segmentation network with a popular encoder-decoder architecture, a scale-adaptive module is proposed for selecting more robust features to represent the vague, non-rigid boundaries. Furthermore, a structural similarity metric is proposed for better tissue structure awareness to deal with small-region segmentation. In addition, advanced designs including several attention mechanisms and selective-kernel convolutions are applied to the baseline network for comparative study purposes. Extensive experimental results show that the proposed structure-aware scale-adaptive networks achieve outstanding performance on liver cancer segmentation when compared to the top ten submitted results in the PAIP 2019 challenge. Further evaluation on colorectal cancer segmentation shows that the scale-adaptive module improves the baseline network and outperforms the other excellent attention-mechanism designs when considering the tradeoff between efficiency and accuracy.
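
To make the scale-adaptive idea concrete, the sketch below fuses two dilated-convolution branches with selective-kernel-style per-channel attention, letting the network adapt its effective receptive field per channel. The channel counts, two-branch design, and reduction ratio are illustrative assumptions, not the paper's exact module.

import torch
import torch.nn as nn

class ScaleAdaptiveSelect(nn.Module):
    # Two branches with different receptive fields (dilations 1 and 2) are fused
    # with soft per-channel attention derived from global context.
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.branch_a = nn.Conv2d(channels, channels, 3, padding=1, dilation=1)
        self.branch_b = nn.Conv2d(channels, channels, 3, padding=2, dilation=2)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels * 2),
        )

    def forward(self, x):
        a, b = self.branch_a(x), self.branch_b(x)
        s = (a + b).mean(dim=(2, 3))                              # (B, C) global context
        attn = self.fc(s).view(x.size(0), 2, -1).softmax(dim=1)   # (B, 2, C) branch weights
        wa = attn[:, 0].unsqueeze(-1).unsqueeze(-1)
        wb = attn[:, 1].unsqueeze(-1).unsqueeze(-1)
        return wa * a + wb * b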
