Xuanhan Wang

CIParsing: Unifying Causality Properties into Multiple Human Parsing

Aug 23, 2023
Xiaojia Chen, Xuanhan Wang, Lianli Gao, Beitao Chen, Jingkuan Song, Heng Tao Shen

Existing methods of multiple human parsing (MHP) apply statistical models to acquire underlying associations between images and labeled body parts. However, the acquired associations often contain many spurious correlations that degrade model generalization, leaving statistical models vulnerable to visual context variations in images (e.g., unseen image styles/external interventions). To tackle this, we present a causality-inspired parsing paradigm termed CIParsing, which follows fundamental causal principles involving two causal properties for human parsing (i.e., causal diversity and causal invariance). Specifically, we assume that an input image is constructed by a mix of causal factors (the characteristics of body parts) and non-causal factors (external contexts), where only the former cause the generation process of human parsing. Since causal/non-causal factors are unobservable, a human parser in the proposed CIParsing is required to construct latent representations of causal factors and learns to enforce these representations to satisfy the two causal properties. In this way, the human parser relies on causal factors w.r.t. relevant evidence rather than non-causal factors w.r.t. spurious correlations, thus alleviating model degradation and yielding improved parsing ability. Notably, CIParsing is designed in a plug-and-play fashion and can be integrated into any existing MHP models. Extensive experiments conducted on two widely used benchmarks demonstrate the effectiveness and generalizability of our method.
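
To make the two causal properties concrete, here is a minimal Python/PyTorch sketch (my illustration, not the authors' implementation) of how they could be imposed as auxiliary losses on an existing MHP parser; `parser` and `augment_context` are hypothetical stand-ins for a backbone that outputs per-part latent factors and a context-only (e.g., style) augmentation.

import torch
import torch.nn.functional as F

def causal_losses(parser, images, augment_context):
    # parser(images) is assumed to return per-part latent causal factors (B, P, D).
    z = parser(images)
    z_aug = parser(augment_context(images))  # same content, perturbed non-causal context

    # Causal invariance: causal factors should not move when only context changes.
    inv_loss = F.mse_loss(z, z_aug)

    # Causal diversity: factors of different parts should remain distinguishable,
    # so off-diagonal similarities between parts are pushed down.
    z_n = F.normalize(z, dim=-1)
    sim = torch.einsum('bpd,bqd->bpq', z_n, z_n)
    off_diag = sim - torch.diag_embed(torch.diagonal(sim, dim1=1, dim2=2))
    div_loss = off_diag.abs().mean()

    # Both terms would be added to the usual parsing loss of the host model.
    return inv_loss, div_loss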

RepParser: End-to-End Multiple Human Parsing with Representative Parts

Aug 27, 2022
Xiaojia Chen, Xuanhan Wang, Lianli Gao, Jingkuan Song

Existing methods of multiple human parsing usually adopt a two-stage strategy (typically top-down or bottom-up), which suffers from either a strong dependence on prior detection or highly redundant computation during post-grouping. In this work, we present an end-to-end multiple human parsing framework using representative parts, termed RepParser. Different from mainstream methods, RepParser solves multiple human parsing in a new single-stage manner without resorting to person detection or post-grouping. To this end, RepParser decouples the parsing pipeline into instance-aware kernel generation and part-aware human parsing, which are responsible for instance separation and instance-specific part segmentation, respectively. In particular, we empower the parsing pipeline with representative parts, since they are characterized by instance-aware keypoints and can be utilized to dynamically parse each person instance. Specifically, representative parts are obtained by jointly localizing instance centers and estimating keypoints of body-part regions. After that, we dynamically predict instance-aware convolution kernels from the representative parts, thus encoding person-part context into each kernel, which is responsible for casting an image feature into an instance-specific representation. Furthermore, a multi-branch structure is adopted to divide each instance-specific representation into several part-aware representations for separate part segmentation. In this way, RepParser focuses on person instances under the guidance of representative parts and directly outputs parsing results for each person instance, thus eliminating the need for prior detection or post-grouping. Extensive experiments on two challenging benchmarks demonstrate that our proposed RepParser is a simple yet effective framework and achieves very competitive performance.
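
The instance-aware kernel generation step can be pictured with a short Python/PyTorch sketch (an illustrative reduction under my own assumptions, not the released RepParser code): embeddings of the representative parts are turned into a per-instance 1x1 convolution kernel that casts the shared image feature into instance-specific maps.

import torch
import torch.nn as nn
import torch.nn.functional as F

class InstanceKernelHead(nn.Module):
    def __init__(self, feat_dim=256, num_parts=8):
        super().__init__()
        # Predict one dynamic kernel (feat_dim weights + 1 bias) per instance
        # from the concatenated features of its representative parts.
        self.kernel_gen = nn.Linear(num_parts * feat_dim, feat_dim + 1)

    def forward(self, feat_map, part_embeddings):
        # feat_map: (C, H, W) shared image feature
        # part_embeddings: (N, P, C) representative-part features for N instances
        n = part_embeddings.size(0)
        params = self.kernel_gen(part_embeddings.flatten(1))  # (N, C + 1)
        weight = params[:, :-1].view(n, -1, 1, 1)             # (N, C, 1, 1)
        bias = params[:, -1]                                  # (N,)
        # Dynamic 1x1 convolution: one instance-specific map per person.
        return F.conv2d(feat_map.unsqueeze(0), weight, bias)  # (1, N, H, W)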

Skeleton-based Action Recognition via Adaptive Cross-Form Learning

Jun 30, 2022
Xuanhan Wang, Yan Dai, Lianli Gao, Jingkuan Song

Figure 1 for Skeleton-based Action Recognition via Adaptive Cross-Form Learning
Figure 2 for Skeleton-based Action Recognition via Adaptive Cross-Form Learning
Figure 3 for Skeleton-based Action Recognition via Adaptive Cross-Form Learning
Figure 4 for Skeleton-based Action Recognition via Adaptive Cross-Form Learning

Skeleton-based action recognition aims to project skeleton sequences to action categories, where the skeleton sequences are derived from multiple forms of pre-detected points. Compared with earlier methods that focus on exploring single-form skeletons via Graph Convolutional Networks (GCNs), existing methods tend to improve GCNs by leveraging multi-form skeletons for their complementary cues. However, these methods (whether adapting the structure of GCNs or ensembling models) require all forms of skeletons to co-exist during both training and inference, whereas in real life typically only partial forms are available at inference. To tackle this issue, we present Adaptive Cross-Form Learning (ACFL), which empowers well-designed GCNs to generate complementary representations from single-form skeletons without changing model capacity. Specifically, each GCN model in ACFL not only learns an action representation from single-form skeletons, but also adaptively mimics useful representations derived from the other forms of skeletons. In this way, each GCN learns how to strengthen what has been learned, thus exploiting model potential and facilitating action recognition. Extensive experiments conducted on three challenging benchmarks, i.e., NTU-RGB+D 120, NTU-RGB+D 60 and UAV-Human, demonstrate the effectiveness and generalizability of the proposed method. In particular, ACFL significantly improves various GCN models (i.e., CTR-GCN, MS-G3D, and Shift-GCN), achieving a new record for skeleton-based action recognition.
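
As a rough Python/PyTorch sketch of the cross-form mimicking idea (my reading of the abstract, not the released ACFL code): a GCN fed with one skeleton form is trained on its own classification loss while also imitating the representation produced by a GCN fed with another form, so that only the single form is needed at inference; the gating rule below is an assumption used purely for illustration.

import torch
import torch.nn.functional as F

def acfl_style_loss(gcn_joint, gcn_bone, joints, bones, labels, alpha=1.0):
    # Each GCN is assumed to return (features, logits).
    feat_j, logits_j = gcn_joint(joints)
    with torch.no_grad():
        feat_b, _ = gcn_bone(bones)   # cross-form target representation

    cls_loss = F.cross_entropy(logits_j, labels)
    # "Adaptive" mimicking: imitate the other form only where the two
    # representations already roughly agree (illustrative gating choice).
    gate = F.cosine_similarity(feat_j, feat_b, dim=-1).clamp(min=0).detach()
    mimic_loss = (gate * (feat_j - feat_b).pow(2).mean(dim=-1)).mean()
    return cls_loss + alpha * mimic_loss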

KE-RCNN: Unifying Knowledge based Reasoning into Part-level Attribute Parsing

Jun 21, 2022
Xuanhan Wang, Jingkuan Song, Xiaojia Chen, Lechao Cheng, Lianli Gao, Heng Tao Shen

Part-level attribute parsing is a fundamental but challenging task, which requires region-level visual understanding to provide explainable details of body parts. Most existing approaches address this problem by adding a regional convolutional neural network (RCNN) with an attribute prediction head to a two-stage detector, in which attributes of body parts are identified from local part boxes. However, local part boxes with limited visual clues (i.e., part appearance only) lead to unsatisfactory parsing results, since attributes of body parts are highly dependent on comprehensive relations among them. In this article, we propose a Knowledge Embedded RCNN (KE-RCNN) to identify attributes by leveraging rich knowledge, including implicit knowledge (e.g., the attribute "above-the-hip" for a shirt requires visual/geometric relations between shirt and hip) and explicit knowledge (e.g., the part "shorts" cannot have the attribute "hoodie" or "lining"). Specifically, the KE-RCNN consists of two novel components, i.e., an Implicit Knowledge based Encoder (IK-En) and an Explicit Knowledge based Decoder (EK-De). The former is designed to enhance part-level representation by encoding part-part relational contexts into part boxes, and the latter is proposed to decode attributes under the guidance of prior knowledge about part-attribute relations. In this way, the KE-RCNN is plug-and-play and can be integrated into any two-stage detector, e.g., Attribute-RCNN, Cascade-RCNN, HRNet-based RCNN and Swin Transformer-based RCNN. Extensive experiments conducted on two challenging benchmarks, i.e., Fashionpedia and Kinetics-TPS, demonstrate the effectiveness and generalizability of the KE-RCNN. In particular, it achieves consistent improvements over all existing methods, gaining around 3% AP on Fashionpedia and around 4% Acc on Kinetics-TPS.
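
The explicit-knowledge side can be illustrated with a small Python/PyTorch snippet (a simplified stand-in, not the actual EK-De head): a prior part-attribute compatibility matrix suppresses attributes that a given part can never take, e.g., "shorts" never being "hoodie".

import torch

def apply_explicit_knowledge(attr_logits, part_ids, compat):
    # attr_logits: (N, A) raw attribute logits for N part boxes
    # part_ids:    (N,)   predicted part class of each box
    # compat:      (P, A) binary prior, 1 if the attribute is valid for the part
    mask = compat[part_ids]  # (N, A) row of the prior for each box
    return attr_logits.masked_fill(mask == 0, float('-inf'))

# Tiny hypothetical prior: 2 part classes, 3 attributes.
compat = torch.tensor([[1, 1, 0],   # part 0 may take attributes 0 and 1
                       [1, 0, 1]])  # part 1 may take attributes 0 and 2
logits = torch.randn(4, 3)
part_ids = torch.tensor([0, 1, 1, 0])
constrained = apply_explicit_knowledge(logits, part_ids, compat)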

KTN: Knowledge Transfer Network for Learning Multi-person 2D-3D Correspondences

Jun 21, 2022
Xuanhan Wang, Lianli Gao, Yixuan Zhou, Jingkuan Song, Meng Wang

Human densepose estimation, which aims at establishing dense correspondences between 2D pixels of the human body and a 3D human body template, is a key technique for enabling machines to understand people in images. It still poses several challenges in practical scenarios where real-world scenes are complex and only partial annotations are available, leading to incomplete or false estimations. In this work, we present a novel framework to detect the densepose of multiple people in an image. The proposed method, which we refer to as the Knowledge Transfer Network (KTN), tackles two main problems: 1) how to refine image representation to alleviate incomplete estimations, and 2) how to reduce false estimations caused by low-quality training labels (i.e., limited annotations and class-imbalanced labels). Unlike existing works that directly propagate the pyramidal features of regions for densepose estimation, the KTN refines the pyramidal representation by simultaneously maintaining feature resolution and suppressing background pixels, and this strategy results in a substantial increase in accuracy. Moreover, the KTN enhances the ability of 3D-based body parsing with external knowledge, casting 2D-based body parsers trained on sufficient annotations as a 3D-based body parser through a structural body knowledge graph. In this way, it significantly reduces the adverse effects caused by low-quality annotations. The effectiveness of the KTN is demonstrated by its superior performance over state-of-the-art methods on the DensePose-COCO dataset. Extensive ablation studies and experimental results on representative tasks (e.g., human body segmentation, human part segmentation and keypoint detection) and two popular densepose estimation pipelines (i.e., RCNN and fully-convolutional frameworks) further indicate the generalizability of the proposed method.

* IEEE Transactions on Circuits and Systems for Video Technology, 2022
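
A heavily simplified Python/PyTorch sketch of the knowledge-transfer idea in the KTN abstract above (my own reduction, not the released KTN code): the structural body knowledge graph is collapsed into a part-to-part relation matrix that re-projects the per-pixel evidence of a 2D body parser onto the 3D body-part categories.

import torch
import torch.nn.functional as F

def transfer_2d_to_3d(logits_2d, graph):
    # logits_2d: (B, P2d, H, W) per-pixel scores from a 2D body parser
    # graph:     (P3d, P2d) relation weights between the two part vocabularies
    probs_2d = logits_2d.softmax(dim=1)
    w = F.normalize(graph, p=1, dim=1)   # each 3D part mixes 2D parts with weights summing to 1
    # Mix 2D part evidence into 3D part scores at every pixel.
    return torch.einsum('qp,bphw->bqhw', w, probs_2d)  # (B, P3d, H, W)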

Technical Report: Disentangled Action Parsing Networks for Accurate Part-level Action Parsing

Nov 05, 2021
Xuanhan Wang, Xiaojia Chen, Lianli Gao, Lechao Cheng, Jingkuan Song

Part-level Action Parsing aims at part-state parsing for boosting action recognition in videos. Despite dramatic progress in video classification research, a severe problem faced by the community is that detailed understanding of human actions is largely ignored. Our motivation is that parsing human actions requires models that focus on this specific problem. We present a simple yet effective approach, named disentangled action parsing (DAP). Specifically, we divide part-level action parsing into three stages: 1) person detection, where a person detector is adopted to detect all persons in a video and to perform instance-level action recognition; 2) part parsing, where a part-parsing model is proposed to recognize human parts from detected person images; and 3) action parsing, where a multi-modal action parsing network is used to parse the action category conditioned on all detection results obtained from the previous stages. With these three models applied, our DAP approach records a global mean score of 0.605 in the 2021 Kinetics-TPS Challenge.
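
The three-stage pipeline maps directly onto a short orchestration sketch in Python (the three callables are hypothetical stand-ins for the detector, part parser, and multi-modal action parser used in the report):

def disentangled_action_parsing(video, person_detector, part_parser, action_parser):
    results = []
    for frame in video:
        # Stage 1: detect all persons and obtain instance-level action scores.
        for box, crop, action_scores in person_detector(frame):
            # Stage 2: recognize body parts from the detected person image.
            parts = part_parser(crop)
            # Stage 3: parse the part-level action category conditioned on
            # everything produced by the previous stages.
            action = action_parser(crop, parts, action_scores)
            results.append({'box': box, 'parts': parts, 'action': action})
    return results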

From General to Specific: Informative Scene Graph Generation via Balance Adjustment

Aug 30, 2021
Yuyu Guo, Lianli Gao, Xuanhan Wang, Yuxuan Hu, Xing Xu, Xu Lu, Heng Tao Shen, Jingkuan Song

The scene graph generation (SGG) task aims to detect visual relationship triplets, i.e., subject, predicate, object, in an image, providing a structural vision layout for scene understanding. However, current models get stuck on common predicates, e.g., "on" and "at", rather than informative ones, e.g., "standing on" and "looking at", resulting in the loss of precise information and overall performance. If a model only uses "stone on road" rather than "blocking" to describe an image, it is easy to misunderstand the scene. We argue that this phenomenon is caused by two key imbalances between informative predicates and common ones, i.e., semantic-space-level imbalance and training-sample-level imbalance. To tackle this problem, we propose BA-SGG, a simple yet effective SGG framework based on balance adjustment rather than conventional distribution fitting. It integrates two components, Semantic Adjustment (SA) and Balanced Predicate Learning (BPL), for adjusting these two imbalances respectively. Benefiting from this model-agnostic process, our method is easily applied to state-of-the-art SGG models and significantly improves SGG performance. Our method achieves 14.3%, 8.0%, and 6.1% higher Mean Recall (mR) than the Transformer model on three scene graph generation sub-tasks on Visual Genome, respectively. Code is publicly available.
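
To make the training-sample-level imbalance tangible, here is a minimal Python/PyTorch sketch of one standard countermeasure, a class-balanced predicate classification loss; this illustrates the problem the abstract describes and is not necessarily the exact Balanced Predicate Learning procedure of BA-SGG.

import torch
import torch.nn.functional as F

def balanced_predicate_loss(pred_logits, pred_labels, class_counts, beta=0.999):
    # pred_logits: (N, C) predicate logits, pred_labels: (N,) ground-truth predicates
    # class_counts: (C,) training-sample counts per predicate class
    # "Effective number of samples" re-weighting (Cui et al., 2019): rare,
    # informative predicates get larger weights than common ones like "on".
    eff_num = 1.0 - torch.pow(beta, class_counts.float())
    weights = (1.0 - beta) / eff_num.clamp(min=1e-8)
    weights = weights / weights.sum() * len(class_counts)
    return F.cross_entropy(pred_logits, pred_labels, weight=weights)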
