
Wentao Tan

Style Interleaved Learning for Generalizable Person Re-identification

Jul 07, 2022
Wentao Tan, Pengfei Wang, Changxing Ding, Mingming Gong, Kui Jia

Figures 1–4 for Style Interleaved Learning for Generalizable Person Re-identification

Domain generalization (DG) for person re-identification (ReID) is a challenging problem, as no access to target domain data is permitted during training. Most existing DG ReID methods employ the same features to update the feature extractor and classifier parameters. This common practice causes the model to overfit to the feature styles present in the source domains, resulting in sub-optimal generalization on target domains even when meta-learning is used. To solve this problem, we propose a novel style interleaved learning framework. Unlike conventional learning strategies, interleaved learning incorporates two forward propagations and one backward propagation for each iteration. We employ features of interleaved styles to update the feature extractor and the classifiers in different forward propagations, which helps the model avoid overfitting to particular domain styles. To fully exploit the advantages of style interleaved learning, we further propose a novel feature stylization approach to diversify feature styles. This approach not only mixes the feature styles of multiple training samples, but also samples new and meaningful feature styles from the batch-level style distribution. Extensive experiments show that our model consistently outperforms state-of-the-art methods on large-scale DG ReID benchmarks, with clear advantages in computational efficiency. Code is available at https://github.com/WentaoTan/Interleaved-Learning.
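The feature stylization idea described above (mixing per-sample feature statistics and sampling new styles from the batch-level distribution) can be sketched in a few lines. This is a minimal numpy illustration, not the paper's implementation: the function name, the Beta-distributed mixing weight, and the Gaussian perturbation of the mixed statistics are all assumptions for illustration.

```python
import numpy as np

def stylize_features(feats, rng, alpha=0.1):
    """Hypothetical sketch of batch-level feature stylization.

    feats: (B, C) array; each row holds one sample's features.
    Treats per-sample mean/std as "style", mixes it with a random
    partner's style, then perturbs the result with noise drawn from
    the batch-level spread of those statistics.
    """
    mu = feats.mean(axis=1, keepdims=True)           # (B, 1) per-sample mean
    sigma = feats.std(axis=1, keepdims=True) + 1e-6  # (B, 1) per-sample std
    content = (feats - mu) / sigma                   # style-normalized content

    # Mix styles between random pairs of samples in the batch.
    perm = rng.permutation(feats.shape[0])
    lam = rng.beta(alpha, alpha, size=(feats.shape[0], 1))
    mix_mu = lam * mu + (1 - lam) * mu[perm]
    mix_sigma = lam * sigma + (1 - lam) * sigma[perm]

    # Sample new styles around the batch-level style statistics
    # (assumed Gaussian here, purely for illustration).
    mix_mu = mix_mu + rng.normal(0.0, mu.std(), size=mix_mu.shape)
    mix_sigma = np.abs(mix_sigma + rng.normal(0.0, sigma.std(), size=mix_sigma.shape))

    # Re-stylize the content with the new statistics.
    return content * mix_sigma + mix_mu
```

In an interleaved training loop, the stylized features would drive one forward propagation (e.g. the classifier update) while the original features drive the other, so the two updates never see the same styles.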


Uncertainty-aware Clustering for Unsupervised Domain Adaptive Object Re-identification

Aug 22, 2021
Pengfei Wang, Changxing Ding, Wentao Tan, Mingming Gong, Kui Jia, Dacheng Tao

Figures 1–4 for Uncertainty-aware Clustering for Unsupervised Domain Adaptive Object Re-identification

Unsupervised Domain Adaptive (UDA) object re-identification (Re-ID) aims to adapt a model trained on a labeled source domain to an unlabeled target domain. State-of-the-art object Re-ID approaches adopt clustering algorithms to generate pseudo-labels for the unlabeled target domain. However, the inevitable label noise introduced by the clustering procedure significantly degrades the discriminative power of the Re-ID model. To address this problem, we propose an uncertainty-aware clustering framework (UCF) for UDA tasks. First, a novel hierarchical clustering scheme is proposed to improve clustering quality. Second, an uncertainty-aware collaborative instance selection method is introduced to select images with reliable labels for model training. Combining both techniques effectively reduces the impact of noisy labels. In addition, we introduce a strong baseline that features a compact contrastive loss. Our UCF method consistently achieves state-of-the-art performance across multiple UDA tasks for object Re-ID and significantly narrows the gap between unsupervised and supervised Re-ID performance. In particular, our unsupervised UCF method on the MSMT17$\to$Market1501 task outperforms the fully supervised setting on Market1501. The code of UCF is available at https://github.com/Wang-pengfei/UCF.
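The collaborative instance selection step can be sketched as follows. This is a hypothetical illustration, not UCF's actual criterion: it assumes two collaborating models each produce soft cluster-assignment probabilities, and defines an instance's uncertainty from their joint confidence in its pseudo-label; the function name and threshold are invented for the sketch.

```python
import numpy as np

def select_reliable(probs_a, probs_b, labels, thresh=0.5):
    """Hypothetical uncertainty-aware instance selection.

    probs_a, probs_b: (N, K) soft cluster assignments from two
    collaborating models; labels: (N,) pseudo-labels from clustering.
    An instance is kept only when both models are jointly confident
    in its assigned pseudo-label.
    """
    idx = np.arange(len(labels))
    # Geometric mean of the two models' confidence in the pseudo-label.
    conf = np.sqrt(probs_a[idx, labels] * probs_b[idx, labels])
    uncertainty = 1.0 - conf
    # Keep indices of instances whose labels look reliable.
    return np.where(uncertainty < thresh)[0]
```

Training would then use only the returned indices for the supervised losses, shielding the model from the noisiest pseudo-labels.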
