Xianghao Zang

Mask to reconstruct: Cooperative Semantics Completion for Video-text Retrieval

May 13, 2023
Han Fang, Zhifei Yang, Xianghao Zang, Chao Ban, Hao Sun

Recently, masked video modeling has been widely explored and has significantly improved models' local-level understanding of visual regions. However, existing methods usually adopt random masking and follow the same reconstruction paradigm to complete the masked regions, which does not leverage the correlations between cross-modal content. In this paper, we present Mask for Semantics Completion (MASCOT), built on semantic-based masked modeling. Specifically, after applying attention-based video masking to generate high-informed and low-informed masks, we propose Informed Semantics Completion to recover the masked semantic information. The recovery mechanism aligns the masked content with the unmasked visual regions and the corresponding textual context, which drives the model to capture more text-related details at the patch level. Additionally, we shift the emphasis of reconstruction from irrelevant backgrounds to discriminative parts by ignoring regions under low-informed masks. Furthermore, we design dual-mask co-learning to incorporate video cues under different masks and learn a better-aligned video representation. MASCOT achieves state-of-the-art performance on four major text-video retrieval benchmarks, including MSR-VTT, LSMDC, ActivityNet, and DiDeMo. Extensive ablation studies demonstrate the effectiveness of the proposed schemes.
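
To make the masking step concrete, here is a minimal PyTorch sketch of how attention-based video masking could produce the high-informed and low-informed masks the abstract refers to. The function name `split_informed_masks`, the 0.5 mask ratio, and the use of [CLS]-to-patch attention as the informativeness score are illustrative assumptions, not the authors' released code.

```python
import torch

def split_informed_masks(cls_attn: torch.Tensor, mask_ratio: float = 0.5):
    """Rank patches by [CLS] attention and split them into a high-informed
    (most attended) and a low-informed (least attended) mask set.

    cls_attn: (batch, num_patches) attention weights from [CLS] to the patches.
    Returns two boolean masks of the same shape.
    """
    num_mask = int(cls_attn.size(1) * mask_ratio)
    order = cls_attn.argsort(dim=1, descending=True)      # most informative patches first
    high = torch.zeros_like(cls_attn, dtype=torch.bool)
    low = torch.zeros_like(cls_attn, dtype=torch.bool)
    high.scatter_(1, order[:, :num_mask], True)           # mask the discriminative patches
    low.scatter_(1, order[:, -num_mask:], True)           # mask the background-like patches
    return high, low


if __name__ == "__main__":
    attn = torch.rand(2, 196)                              # e.g. 14x14 patches per frame
    high_mask, low_mask = split_informed_masks(attn)
    print(high_mask.sum(dim=1), low_mask.sum(dim=1))       # 98 masked patches in each set
```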


Multi-direction and Multi-scale Pyramid in Transformer for Video-based Pedestrian Retrieval

Feb 12, 2022
Xianghao Zang, Ge Li, Wei Gao

In video surveillance, pedestrian retrieval (also called person re-identification) is a critical task that aims to retrieve the pedestrian of interest from non-overlapping cameras. Recently, transformer-based models have made significant progress on this task, but they still tend to ignore fine-grained, part-informed information. This paper proposes a multi-direction and multi-scale Pyramid in Transformer (PiT) to address this problem. In a transformer-based architecture, each pedestrian image is split into many patches, which are fed to transformer layers to obtain the feature representation of the image. To exploit fine-grained information, this paper applies vertical and horizontal division to these patches to generate human parts along different directions, which provide more fine-grained information. To fuse multi-scale feature representations, this paper presents a pyramid structure containing global-level information and several pieces of local-level information from different scales. The feature pyramids of all the pedestrian images from the same video are fused to form the final multi-direction and multi-scale feature representation. Experimental results on two challenging video-based benchmarks, MARS and iLIDS-VID, show that the proposed PiT achieves state-of-the-art performance. Extensive ablation studies demonstrate the superiority of the proposed pyramid structure. The code is available at https://git.openi.org.cn/zangxh/PiT.git.
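
The multi-direction, multi-scale pyramid can be pictured with a short sketch. The code below is a hypothetical reconstruction rather than the released PiT implementation: it regroups ViT patch tokens into horizontal and vertical stripes at a few scales and average-pools each stripe, yielding one global feature plus several part-level features per image. The function name, the scale set (2, 4), and mean pooling are all assumptions.

```python
import torch

def pyramid_features(patch_tokens: torch.Tensor, grid: int = 14, scales=(2, 4)):
    """Build a multi-direction, multi-scale feature pyramid from ViT patch tokens.

    patch_tokens: (batch, grid*grid, dim) patch embeddings with the [CLS] token removed.
    For each scale s, the patch grid is cut into s horizontal stripes and s vertical
    stripes; each stripe is average-pooled into one part-level feature.
    """
    b, _, d = patch_tokens.shape
    grid_feat = patch_tokens.view(b, grid, grid, d)
    parts = [grid_feat.mean(dim=(1, 2))]                         # global-level feature
    for s in scales:
        bounds = torch.linspace(0, grid, s + 1).long().tolist()
        for lo, hi in zip(bounds[:-1], bounds[1:]):
            parts.append(grid_feat[:, lo:hi].mean(dim=(1, 2)))       # horizontal stripe
            parts.append(grid_feat[:, :, lo:hi].mean(dim=(1, 2)))    # vertical stripe
    return torch.stack(parts, dim=1)                             # (batch, num_parts, dim)


if __name__ == "__main__":
    tokens = torch.randn(2, 196, 768)                            # 14x14 patches, ViT-Base width
    print(pyramid_features(tokens).shape)                        # torch.Size([2, 13, 768])
```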

* 10 pages, 6 figures, Accepted for publication in IEEE Transactions on Industrial Informatics 

Exploiting Robust Unsupervised Video Person Re-identification

Nov 18, 2021
Xianghao Zang, Ge Li, Wei Gao, Xiujun Shu

Unsupervised video person re-identification (reID) methods usually depend on global-level features, whereas many supervised reID methods employ local-level features and achieve significant performance improvements. However, applying local-level features to unsupervised methods may lead to unstable performance. To improve performance stability for unsupervised video reID, this paper introduces a general scheme that fuses part models and unsupervised learning. In this scheme, the global-level feature is divided into equal local-level features. A local-aware module is employed to explore the potential of local-level features for unsupervised learning, and a global-aware module is proposed to overcome the disadvantages of local-level features. Features from these two modules are fused to form a robust feature representation for each input image, which retains the advantages of local-level features without suffering from their disadvantages. Comprehensive experiments on three benchmarks, PRID2011, iLIDS-VID, and DukeMTMC-VideoReID, demonstrate that the proposed approach achieves state-of-the-art performance. Extensive ablation studies demonstrate the effectiveness and robustness of the proposed scheme, the local-aware module, and the global-aware module.
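
A rough sketch of the fusion scheme follows, under the assumption that the local-level features are equal horizontal stripes of a CNN feature map and that fusion is simple concatenation; the abstract does not spell out these details, so `fuse_global_local` and the stripe count are illustrative names and values.

```python
import torch
import torch.nn.functional as F

def fuse_global_local(feat_map: torch.Tensor, num_parts: int = 4):
    """Fuse a global-aware feature with equally divided local-aware features.

    feat_map: (batch, channels, height, width) backbone feature map of one frame.
    The map is split into `num_parts` equal horizontal stripes (local-level
    features) and also pooled as a whole (global-level feature); the results
    are concatenated into one robust representation.
    """
    global_feat = F.adaptive_avg_pool2d(feat_map, 1).flatten(1)           # (B, C)
    stripes = feat_map.chunk(num_parts, dim=2)                            # split along height
    local_feats = [F.adaptive_avg_pool2d(s, 1).flatten(1) for s in stripes]
    return torch.cat([global_feat] + local_feats, dim=1)                  # (B, C * (1 + num_parts))


if __name__ == "__main__":
    fmap = torch.randn(8, 2048, 16, 8)      # e.g. ResNet-50 output for a 256x128 crop
    print(fuse_global_local(fmap).shape)    # torch.Size([8, 10240])
```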

* Preprint version; Accepted by IET Image Processing 

Learning to Disentangle Scenes for Person Re-identification

Nov 10, 2021
Xianghao Zang, Ge Li, Wei Gao, Xiujun Shu

The person re-identification (ReID) task involves many challenging problems, such as occlusion and scale variation. Existing works usually try to solve them with a single one-branch network, which must be robust to all of these challenges and therefore becomes overburdened. This paper proposes to divide and conquer the ReID task. To this end, we employ several self-supervision operations to simulate different challenging problems and handle each of them with a different network. Concretely, we use the random erasing operation and propose a novel random scaling operation to generate new images with controllable characteristics. A general multi-branch network, consisting of one master branch and two servant branches, is introduced to handle the different scenes. These branches learn collaboratively and develop different perceptive abilities. In this way, the complex scenes in the ReID task are effectively disentangled, and the burden on each branch is relieved. Extensive experiments demonstrate that the proposed method achieves state-of-the-art performance on three ReID benchmarks and two occluded ReID benchmarks. Ablation studies also show that the proposed scheme and operations significantly improve performance in various scenes.
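
Of the two self-supervision operations, random erasing is the standard torchvision transform, while random scaling is sketched below as one plausible reading of the abstract: shrink the pedestrian crop and pad it back to the original size, so scale variation is simulated with controllable strength. The function name and parameters are assumptions, and the branch routing in the usage example only indicates which branch each kind of image would feed.

```python
import random
import torch
import torch.nn.functional as F
from torchvision.transforms import RandomErasing

def random_scaling(img: torch.Tensor, min_scale: float = 0.5) -> torch.Tensor:
    """Shrink a pedestrian image by a random factor and zero-pad it back to its
    original size, simulating scale variation (a stand-in for the paper's operation)."""
    _, h, w = img.shape
    s = random.uniform(min_scale, 1.0)
    nh, nw = max(1, int(h * s)), max(1, int(w * s))
    small = F.interpolate(img.unsqueeze(0), size=(nh, nw), mode="bilinear",
                          align_corners=False).squeeze(0)
    out = torch.zeros_like(img)
    top, left = (h - nh) // 2, (w - nw) // 2
    out[:, top:top + nh, left:left + nw] = small       # re-embed the shrunken person
    return out


if __name__ == "__main__":
    img = torch.rand(3, 256, 128)                      # one pedestrian crop
    occluded = RandomErasing(p=1.0)(img)               # simulates occlusion -> servant branch 1
    rescaled = random_scaling(img)                     # simulates scale change -> servant branch 2
    print(occluded.shape, rescaled.shape)              # the original image feeds the master branch
```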

* Preprint Version; Accepted by Image and Vision Computing 

Large-Scale Spatio-Temporal Person Re-identification: Algorithm and Benchmark

Jun 24, 2021
Xiujun Shu, Xiao Wang, Xianghao Zang, Shiliang Zhang, Yuanqi Chen, Ge Li, Qi Tian

Person re-identification (re-ID) over large spatial and temporal spans has not been fully explored, partially because existing benchmark datasets were mainly collected within limited spatial and temporal ranges, e.g., from videos recorded over a few days by cameras in a specific region of a campus. Such limited ranges make it hard to simulate the difficulties of person re-ID in real scenarios. In this work, we contribute LaST, a novel Large-scale Spatio-Temporal person re-ID dataset, including 10,862 identities with more than 228k images. Compared with existing datasets, LaST presents more challenging and diverse re-ID settings, with significantly larger spatial and temporal ranges. For instance, each person can appear in different cities or countries, in various time slots from daytime to night, and in different seasons from spring to winter. To the best of our knowledge, LaST is the person re-ID dataset with the largest spatio-temporal ranges. Based on LaST, we verify its difficulty by conducting a comprehensive performance evaluation of 14 re-ID algorithms. We further propose an easy-to-implement baseline that works well in this challenging re-ID setting. We also verify that models pre-trained on LaST generalize well to existing datasets with short-term and cloth-changing scenarios. We expect LaST to inspire future work toward more realistic and challenging re-ID tasks. More information about the dataset is available at https://github.com/shuxjweb/last.git.
