Pingping Zhang

TransY-Net: Learning Fully Transformer Networks for Change Detection of Remote Sensing Images

Oct 22, 2023
Tianyu Yan, Zifu Wan, Pingping Zhang, Gong Cheng, Huchuan Lu

In the remote sensing field, Change Detection (CD) aims to identify and localize the changed regions between dual-phase images of the same area. Recently, it has achieved great progress with the advances of deep learning. However, current methods generally deliver incomplete CD regions and irregular CD boundaries due to the limited representation ability of the extracted visual features. To relieve these issues, in this work we propose a novel Transformer-based learning framework named TransY-Net for remote sensing image CD, which improves feature extraction from a global view and combines multi-level visual features in a pyramid manner. More specifically, the proposed framework first utilizes the advantages of Transformers in long-range dependency modeling, which helps to learn more discriminative global-level features and obtain complete CD regions. Then, we introduce a novel pyramid structure to aggregate multi-level visual features from Transformers for feature enhancement. The pyramid structure, grafted with a Progressive Attention Module (PAM), improves the feature representation ability with additional inter-dependencies through spatial and channel attentions. Finally, to better train the whole framework, we utilize deeply-supervised learning with multiple boundary-aware loss functions. Extensive experiments demonstrate that our proposed method achieves a new state-of-the-art performance on four optical and two SAR image CD benchmarks. The source code is released at https://github.com/Drchip61/TransYNet.

* This work is accepted by TGRS2023. It is an extension of our ACCV2022 paper and arXiv:2210.00757 
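
The abstract does not spell out the PAM internals, so the following is only a minimal sketch of how one pyramid level could be enhanced with channel and spatial attention; the class name, reduction ratio and layer choices are assumptions for illustration, not the authors' exact design.

```python
import torch
import torch.nn as nn

class ProgressiveAttentionSketch(nn.Module):
    """Hypothetical sketch: channel attention followed by spatial attention,
    applied to one pyramid level of Transformer features."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        # Channel attention (squeeze-and-excitation style gate)
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial attention over pooled channel statistics
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (B, C, H, W)
        x = x * self.channel_gate(x)           # re-weight channels globally
        avg = x.mean(dim=1, keepdim=True)      # (B, 1, H, W)
        mx, _ = x.max(dim=1, keepdim=True)     # (B, 1, H, W)
        x = x * self.spatial_gate(torch.cat([avg, mx], dim=1))
        return x
```

In such a design the channel gate models inter-channel dependencies while the spatial gate emphasizes changed regions, matching the attention idea described above.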

Recurrent Multi-scale Transformer for High-Resolution Salient Object Detection

Aug 07, 2023
Xinhao Deng, Pingping Zhang, Wei Liu, Huchuan Lu

Salient Object Detection (SOD) aims to identify and segment the most conspicuous objects in an image or video. As an important pre-processing step, it has many potential applications in multimedia and vision tasks. With the advance of imaging devices, SOD on high-resolution images has recently been in great demand. However, traditional SOD methods are largely limited to low-resolution images, making it difficult for them to adapt to the development of High-Resolution SOD (HRSOD). Although some HRSOD methods have emerged, there are no sufficiently large datasets for training and evaluation. Besides, current HRSOD methods generally produce incomplete object regions and irregular object boundaries. To address the above issues, in this work we first propose a new HRS10K dataset, which contains 10,500 high-quality annotated images at 2K-8K resolution. As far as we know, it is the largest dataset for the HRSOD task, which will significantly help future works in training and evaluating models. Furthermore, to improve the HRSOD performance, we propose a novel Recurrent Multi-scale Transformer (RMFormer), which recurrently utilizes shared Transformers and multi-scale refinement architectures. Thus, high-resolution saliency maps can be generated with the guidance of lower-resolution predictions. Extensive experiments on both high-resolution and low-resolution benchmarks show the effectiveness and superiority of the proposed framework. The source code and dataset are released at: https://github.com/DrowsyMon/RMFormer.

* This work is accepted by ACM MM2023. More modifications may be performed for further improvements 
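
The recurrent coarse-to-fine idea, generating a low-resolution prediction first and using it to guide refinement at higher resolutions, can be sketched as below. The `saliency_net(image, prior)` interface and the scale schedule are assumptions for illustration, not the paper's API.

```python
import torch
import torch.nn.functional as F

def recurrent_refine(image, saliency_net, scales=(0.25, 0.5, 1.0)):
    """Hypothetical sketch: predict saliency at a coarse scale first, then
    upsample the prediction and feed it as guidance at the next scale."""
    prior = None
    for s in scales:
        img_s = image if s == 1.0 else F.interpolate(
            image, scale_factor=s, mode='bilinear', align_corners=False)
        if prior is None:
            prior = torch.zeros_like(img_s[:, :1])        # empty guidance map
        else:
            prior = F.interpolate(prior, size=img_s.shape[-2:],
                                  mode='bilinear', align_corners=False)
        prior = saliency_net(img_s, prior)                # refined prediction
    return prior                                          # full-resolution map
```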

Video-based Person Re-identification with Long Short-Term Representation Learning

Aug 07, 2023
Xuehu Liu, Pingping Zhang, Huchuan Lu

Video-based person Re-Identification (V-ReID) aims to retrieve specific persons from raw videos captured by non-overlapped cameras. As a fundamental task, it underpins many multimedia and computer vision applications. However, due to the variations of persons and scenes, there are still many obstacles that must be overcome for high performance. In this work, we notice that both the long-term and short-term information of persons is important for robust video representations. Thus, we propose a novel deep learning framework named Long Short-Term Representation Learning (LSTRL) for effective V-ReID. More specifically, to extract long-term representations, we propose a Multi-granularity Appearance Extractor (MAE), in which appearances at four granularities are effectively captured across multiple frames. Meanwhile, to extract short-term representations, we propose a Bi-direction Motion Estimator (BME), in which reciprocal motion information is efficiently extracted from consecutive frames. The MAE and BME are plug-and-play and can be easily inserted into existing networks for efficient feature learning. As a result, they significantly improve the feature representation ability for V-ReID. Extensive experiments on three widely used benchmarks show that our proposed approach delivers better performance than most state-of-the-art methods.

* This work is accepted by ICIG2023, including 13 pages, 5 figures and 5 tables. Modifications may be performed for further improvements 
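
The bi-directional short-term cue, reciprocal motion between consecutive frames, can be illustrated with a minimal sketch; the class name, 1x1 convolutions and pooling are assumptions, not the BME as published.

```python
import torch
import torch.nn as nn

class BiDirectionMotionSketch(nn.Module):
    """Hypothetical sketch: forward and backward feature differences between
    consecutive frames, fused into a short-term motion representation."""
    def __init__(self, channels):
        super().__init__()
        self.fwd = nn.Conv2d(channels, channels, 1)
        self.bwd = nn.Conv2d(channels, channels, 1)

    def forward(self, feats):                          # feats: (B, T, C, H, W)
        forward_diff = feats[:, 1:] - feats[:, :-1]    # frame t+1 minus frame t
        backward_diff = feats[:, :-1] - feats[:, 1:]   # frame t minus frame t+1
        B, Tm1, C, H, W = forward_diff.shape
        f = self.fwd(forward_diff.reshape(B * Tm1, C, H, W))
        b = self.bwd(backward_diff.reshape(B * Tm1, C, H, W))
        motion = (f + b).reshape(B, Tm1, C, H, W)
        return motion.mean(dim=1)                      # pooled short-term feature
```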

Geometric Prior Based Deep Human Point Cloud Geometry Compression

May 02, 2023
Xinju Wu, Pingping Zhang, Meng Wang, Peilin Chen, Shiqi Wang, Sam Kwong

The emergence of digital avatars has brought an exponential increase in the demand for human point clouds with realistic and intricate details. Compressing such data becomes challenging, with overwhelming data amounts comprising millions of points. Herein, we leverage the human geometric prior to remove geometry redundancy in point clouds, greatly improving the compression performance. More specifically, the prior provides topological constraints as geometry initialization, allowing adaptive adjustments with a compact parameter set that can be represented with only a few bits. Therefore, we can envisage high-resolution human point clouds as a combination of geometric priors and structural deviations. The priors are first derived from an aligned point cloud, and subsequently the difference of features is compressed into a compact latent code. The proposed framework can operate in a plug-and-play fashion with existing learning-based point cloud compression methods. Extensive experimental results show that our approach significantly improves the compression performance without deteriorating the quality, demonstrating its promise in a variety of applications.
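
The prior-plus-deviation idea, coding only the difference between the input cloud and an aligned geometric prior, can be sketched roughly as follows. All module names, dimensions and the per-point latent are assumptions for illustration; a real codec would additionally quantize and entropy-code the latent.

```python
import torch
import torch.nn as nn

class PriorResidualSketch(nn.Module):
    """Hypothetical sketch: features of the input cloud and of an aligned prior
    are compared, and only their difference is mapped to a compact latent."""
    def __init__(self, feat_dim=64, latent_dim=16):
        super().__init__()
        self.point_encoder = nn.Sequential(nn.Linear(3, feat_dim), nn.ReLU(),
                                           nn.Linear(feat_dim, feat_dim))
        self.to_latent = nn.Linear(feat_dim, latent_dim)   # compact code
        self.from_latent = nn.Linear(latent_dim, feat_dim)
        self.decoder = nn.Linear(feat_dim, 3)              # xyz offsets

    def forward(self, points, prior_points):               # both: (B, N, 3)
        f_in = self.point_encoder(points)
        f_prior = self.point_encoder(prior_points)
        latent = self.to_latent(f_in - f_prior)            # deviation only
        offsets = self.decoder(self.from_latent(latent))
        return prior_points + offsets, latent              # reconstruction, code
```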


Deeply-Coupled Convolution-Transformer with Spatial-temporal Complementary Learning for Video-based Person Re-identification

Apr 27, 2023
Xuehu Liu, Chenyang Yu, Pingping Zhang, Huchuan Lu

Advanced deep Convolutional Neural Networks (CNNs) have shown great success in video-based person Re-Identification (Re-ID). However, they usually focus on the most obvious regions of persons and have limited global representation ability. Recently, Transformers have been shown to explore inter-patch relations with global observations for performance improvements. In this work, we take advantage of both and propose a novel spatial-temporal complementary learning framework named Deeply-Coupled Convolution-Transformer (DCCT) for high-performance video-based person Re-ID. Firstly, we couple CNNs and Transformers to extract two kinds of visual features and experimentally verify their complementarity. Further, in the spatial dimension, we propose a Complementary Content Attention (CCA) to take advantage of the coupled structure and guide independent features for spatial complementary learning. In the temporal dimension, a Hierarchical Temporal Aggregation (HTA) is proposed to progressively capture the inter-frame dependencies and encode temporal information. Besides, a gated attention is utilized to deliver aggregated temporal information into the CNN and Transformer branches for temporal complementary learning. Finally, we introduce a self-distillation training strategy to transfer the superior spatial-temporal knowledge to backbone networks for higher accuracy and efficiency. In this way, two kinds of typical features from the same videos are integrated for more informative representations. Extensive experiments on four public Re-ID benchmarks demonstrate that our framework attains better performance than most state-of-the-art methods.

* Accepted by TNNLS, including 11 pages, 8 figures and 8 tables. Modifications may be performed 
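
The gated delivery of aggregated temporal information back into the CNN and Transformer branches can be sketched as a simple learned gate; the module name and layer shapes are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class GatedBranchFusionSketch(nn.Module):
    """Hypothetical sketch: a learned gate decides, per channel, how much
    aggregated temporal information is injected back into a branch feature."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * channels, channels), nn.Sigmoid())

    def forward(self, branch_feat, temporal_feat):     # both: (B, C)
        g = self.gate(torch.cat([branch_feat, temporal_feat], dim=1))
        return branch_feat + g * temporal_feat         # gated injection
```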

Learning Progressive Modality-shared Transformers for Effective Visible-Infrared Person Re-identification

Dec 01, 2022
Hu Lu, Xuezhang Zou, Pingping Zhang

Visible-Infrared Person Re-Identification (VI-ReID) is a challenging retrieval task under complex modality changes. Existing methods usually focus on extracting discriminative visual features while ignoring the reliability and commonality of visual features between different modalities. In this paper, we propose a novel deep learning framework named Progressive Modality-shared Transformer (PMT) for effective VI-ReID. To reduce the negative effect of modality gaps, we first take the gray-scale images as an auxiliary modality and propose a progressive learning strategy. Then, we propose a Modality-Shared Enhancement Loss (MSEL) to guide the model to explore more reliable identity information from modality-shared features. Finally, to cope with the problem of large intra-class differences and small inter-class differences, we propose a Discriminative Center Loss (DCL) combined with the MSEL to further improve the discrimination of reliable features. Extensive experiments on SYSU-MM01 and RegDB datasets show that our proposed framework performs better than most state-of-the-art methods. For model reproduction, we release the source code at https://github.com/hulu88/PMT.

* This work will appear in AAAI2023, including 8 figures and 6 tables. More improvements may be added 
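
The abstract only names the Discriminative Center Loss, so the following is a sketch of a center-based loss in that spirit: pull samples toward their identity center and push different centers apart. The margin, weighting and formulation are assumptions, not the published DCL.

```python
import torch
import torch.nn.functional as F

def center_loss_sketch(features, labels, margin=0.3):
    """Hypothetical sketch of a discriminative center-style loss.
    Assumes the batch contains at least two identities."""
    pids = labels.unique()
    centers = torch.stack([features[labels == p].mean(dim=0) for p in pids])  # (P, D)
    # Intra-class term: distance of each sample to its own identity center.
    intra = torch.stack([
        (features[labels == p] - c).pow(2).sum(dim=1).mean()
        for p, c in zip(pids, centers)
    ]).mean()
    # Inter-class term: keep different centers at least `margin` apart.
    dist = torch.cdist(centers, centers)                                       # (P, P)
    mask = ~torch.eye(len(pids), dtype=torch.bool, device=dist.device)
    inter = F.relu(margin - dist[mask]).mean()
    return intra + inter
```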

Fully Transformer Network for Change Detection of Remote Sensing Images

Oct 03, 2022
Tianyu Yan, Zifu Wan, Pingping Zhang

Recently, change detection (CD) of remote sensing images has achieved great progress with the advances of deep learning. However, current methods generally deliver incomplete CD regions and irregular CD boundaries due to the limited representation ability of the extracted visual features. To relieve these issues, in this work we propose a novel learning framework named Fully Transformer Network (FTN) for remote sensing image CD, which improves the feature extraction from a global view and combines multi-level visual features in a pyramid manner. More specifically, the proposed framework first utilizes the advantages of Transformers in long-range dependency modeling. It can help to learn more discriminative global-level features and obtain complete CD regions. Then, we introduce a pyramid structure to aggregate multi-level visual features from Transformers for feature enhancement. The pyramid structure grafted with a Progressive Attention Module (PAM) can improve the feature representation ability with additional inter-dependencies through channel attentions. Finally, to better train the framework, we utilize deeply-supervised learning with multiple boundary-aware loss functions. Extensive experiments demonstrate that our proposed method achieves a new state-of-the-art performance on four public CD benchmarks. For model reproduction, the source code is released at https://github.com/AI-Zhpp/FTN.

* 18 pages, 6 figures and 5 tables. This work will appear in ACCV2022 as a poster paper 
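
Deeply-supervised training with boundary-aware losses, as mentioned above, typically supervises every side output against the label and adds an edge-sensitive term. The sketch below illustrates that pattern; the 0.1 weight and the gradient-based boundary proxy are assumptions, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def deeply_supervised_loss_sketch(side_outputs, target):
    """Hypothetical sketch: BCE on every resized side output plus a crude
    boundary term. `target` is the float change mask of shape (B, 1, H, W)."""
    def boundary(m):                                  # edge magnitude via spatial gradients
        dx = (m[..., :, 1:] - m[..., :, :-1]).abs()
        dy = (m[..., 1:, :] - m[..., :-1, :]).abs()
        return dx.mean() + dy.mean()                  # scalar proxy, illustration only

    total = 0.0
    for logit in side_outputs:                        # list of (B, 1, h, w) logits
        logit = F.interpolate(logit, size=target.shape[-2:],
                              mode='bilinear', align_corners=False)
        total = total + F.binary_cross_entropy_with_logits(logit, target)
        total = total + 0.1 * (boundary(torch.sigmoid(logit)) - boundary(target)).abs()
    return total
```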

Distortion-Aware Loop Filtering of Intra 360° Video Coding with Equirectangular Projection

Feb 20, 2022
Pingping Zhang, Xu Wang, Linwei Zhu, Yun Zhang, Shiqi Wang, Sam Kwong

In this paper, we propose a distortion-aware loop filtering model to improve the performance of intra coding for 360° videos projected via the equirectangular projection (ERP) format. To enable the awareness of distortion, our proposed module analyzes content characteristics based on a coding unit (CU) partition mask and processes them through partial convolution to activate the specified area. The feature recalibration module, which leverages cascaded residual channel-wise attention blocks (RCABs) to adjust the inter-channel and intra-channel features automatically, is capable of adapting to different quality levels. The perceptual geometry optimization, combining the weighted mean squared error (WMSE) and the perceptual loss, guarantees both the local field of view (FoV) and global image reconstruction with high quality. Extensive experimental results show that our proposed scheme achieves significant bitrate savings compared with the anchor (HM + 360Lib), leading to average bit-rate reductions of 8.9%, 9.0%, 7.1% and 7.4% in terms of PSNR, WPSNR, and the PSNR of two viewports for the luminance component of 360° videos, respectively.
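
Because ERP oversamples content near the poles, a weighted MSE for 360° video typically down-weights each row by the cosine of its latitude, as in WS-PSNR-style weighting. The sketch below shows such a row-weighted MSE; the exact weights used for training in the paper may differ.

```python
import torch

def erp_wmse_sketch(pred, target):
    """Sketch of a latitude-weighted MSE for ERP frames of shape (B, C, H, W)."""
    B, C, H, W = pred.shape
    rows = torch.arange(H, dtype=pred.dtype, device=pred.device)
    w = torch.cos((rows + 0.5 - H / 2) * torch.pi / H)   # per-row cosine weight
    w = w.view(1, 1, H, 1)
    return ((pred - target).pow(2) * w).sum() / (w.sum() * B * C * W)
```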


Mind Your Clever Neighbours: Unsupervised Person Re-identification via Adaptive Clustering Relationship Modeling

Dec 08, 2021
Lianjie Jia, Chenyang Yu, Xiehao Ye, Tianyu Yan, Yinjie Lei, Pingping Zhang

Unsupervised person re-identification (Re-ID) attracts increasing attention due to its potential to resolve the scalability problem of supervised Re-ID models. Most existing unsupervised methods adopt an iterative clustering mechanism, where the network is trained based on pseudo-labels generated by unsupervised clustering. However, clustering errors are inevitable. To generate high-quality pseudo-labels and mitigate the impact of clustering errors, we propose a novel clustering relationship modeling framework for unsupervised person Re-ID. Specifically, before clustering, the relation between unlabeled images is explored based on a graph correlation learning (GCL) module, and the refined features are then used for clustering to generate high-quality pseudo-labels. Thus, GCL adaptively mines the relationship between samples in a mini-batch to reduce the impact of abnormal clustering when training. To train the network more effectively, we further propose a selective contrastive learning (SCL) method with a selective memory bank update policy. Extensive experiments demonstrate that our method shows much better results than most state-of-the-art unsupervised methods on the Market1501, DukeMTMC-reID and MSMT17 datasets. We will release the code for model reproduction.

* The experimental results are not sufficient 
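
Contrastive learning against a memory bank of cluster centroids, as used in this line of work, can be sketched as below. The `memory` tensor of centroids and the temperature are assumptions; the selective update policy of SCL is not reproduced here.

```python
import torch
import torch.nn.functional as F

def cluster_contrastive_sketch(feature, pseudo_label, memory, temperature=0.05):
    """Hypothetical sketch: pull each feature toward the centroid of its
    pseudo-labeled cluster. `memory` is an assumed (K, D) centroid bank,
    `pseudo_label` holds cluster indices of shape (B,)."""
    feature = F.normalize(feature, dim=1)        # (B, D)
    memory = F.normalize(memory, dim=1)          # (K, D)
    logits = feature @ memory.t() / temperature  # similarity to every centroid
    return F.cross_entropy(logits, pseudo_label)
```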