
Xuejing Kang


MT-ORL: Multi-Task Occlusion Relationship Learning

Aug 18, 2021
Panhe Feng, Qi She, Lei Zhu, Jiaxin Li, Lin Zhang, Zijian Feng, Changhu Wang, Chunpeng Li, Xuejing Kang, Anlong Ming

Figures 1-4 for MT-ORL: Multi-Task Occlusion Relationship Learning

Retrieving the occlusion relationship among objects in a single image is challenging due to the sparsity of boundaries in the image. We observe two key issues in existing works: first, the lack of an architecture that can exploit the limited amount of coupling in the decoder stage between the two subtasks, namely occlusion boundary extraction and occlusion orientation prediction; and second, the improper representation of occlusion orientation. In this paper, we propose a novel architecture called the Occlusion-shared and Path-separated Network (OPNet), which addresses the first issue by exploiting rich occlusion cues in shared high-level features and structured spatial information in task-specific low-level features. We then design a simple but effective orthogonal occlusion representation (OOR) to tackle the second issue. Our method surpasses the state-of-the-art methods by 6.1%/8.3% Boundary-AP and 6.5%/10% Orientation-AP on the standard PIOD/BSDS ownership datasets. Code is available at https://github.com/fengpanhe/MT-ORL.
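One plausible reading of an orthogonal occlusion representation is to encode the orientation angle as the pair of orthogonal components (cos θ, sin θ) and regress them directly. The sketch below illustrates that idea only; the encoding, the L1 loss, and the function names are assumptions for illustration, not the paper's exact formulation (see the released code for the real OOR).

```python
import numpy as np

def encode_orientation(theta):
    """Encode an occlusion orientation angle (radians) as the
    orthogonal pair (cos θ, sin θ). This is a hypothetical reading
    of the orthogonal occlusion representation, for illustration."""
    return np.stack([np.cos(theta), np.sin(theta)], axis=-1)

def oor_regression_loss(pred, theta_gt):
    """Plain L1 regression between predicted orthogonal components
    and the encoded ground-truth orientation (assumed loss form)."""
    target = encode_orientation(theta_gt)
    return np.abs(pred - target).mean()

def decode_orientation(pair):
    """Recover the angle from the two components with a single atan2."""
    return np.arctan2(pair[..., 1], pair[..., 0])
```

Compared with regressing the raw angle, such a two-component encoding avoids the wrap-around discontinuity at ±π, since nearby orientations always map to nearby points on the unit circle.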

* Accepted by ICCV 2021 

Unifying Nonlocal Blocks for Neural Networks

Aug 17, 2021
Lei Zhu, Qi She, Duo Li, Yanye Lu, Xuejing Kang, Jie Hu, Changhu Wang

Figures 1-4 for Unifying Nonlocal Blocks for Neural Networks

Nonlocal-based blocks are designed to capture long-range spatial-temporal dependencies in computer vision tasks. Although they have shown excellent performance, they still lack a mechanism for encoding the rich, structured information among elements in an image or video. In this paper, to theoretically analyze the properties of these nonlocal-based blocks, we provide a new perspective for interpreting them: we view them as a set of graph filters generated on a fully connected graph. Specifically, when choosing the Chebyshev graph filter, a unified formulation can be derived for explaining and analyzing the existing nonlocal-based blocks (e.g., the nonlocal block, nonlocal stage, and double attention block). Furthermore, by considering their spectral properties, we propose an efficient and robust spectral nonlocal block, which is more robust and flexible in capturing long-range dependencies when inserted into deep neural networks than existing nonlocal blocks. Experimental results demonstrate the clear-cut improvements and practical applicability of our method on image classification, action recognition, semantic segmentation, and person re-identification tasks.
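The graph-filter view can be sketched in a few lines: the pairwise affinity matrix of a nonlocal block acts as the (normalized) adjacency of a fully connected graph, and the block's output becomes a truncated polynomial in that adjacency applied to the value features. In this minimal NumPy sketch, the weight matrices and the unit polynomial coefficients are illustrative assumptions, not the paper's exact Chebyshev parameterization.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def nonlocal_as_graph_filter(x, w_theta, w_phi, w_g, k=2):
    """Interpret a nonlocal block as a graph filter on a fully
    connected graph over the N feature vectors in x (N x d):
    the row-normalized affinity A serves as the adjacency, and the
    output is the truncated polynomial sum_{i=0..k} A^i g(x).
    Coefficients are fixed to 1 here purely for illustration."""
    a = softmax((x @ w_theta) @ (x @ w_phi).T)  # N x N affinity (adjacency)
    g = x @ w_g                                 # value features
    out, a_pow = np.zeros_like(g), np.eye(a.shape[0])
    for _ in range(k + 1):
        out += a_pow @ g                        # add the i-th order term A^i g
        a_pow = a_pow @ a
    return out
```

With k = 0 the filter degenerates to g(x) alone, and with k = 1 and unit coefficients it resembles the residual form "g(x) + A g(x)" of the original nonlocal block, which is what makes the unified polynomial view natural.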

* Accepted by ICCV 2021 

DDNet: Dual-path Decoder Network for Occlusion Relationship Reasoning

Nov 26, 2019
Panhe Feng, Xuejing Kang, Lizhu Ye, Lei Zhu, Chunpeng Li, Anlong Ming

Figures 1-4 for DDNet: Dual-path Decoder Network for Occlusion Relationship Reasoning

Occlusion relationship reasoning based on convolutional neural networks consists of two subtasks: occlusion boundary extraction and occlusion orientation inference. Due to the essential differences between the two subtasks in feature expression at the higher and lower stages, it is challenging to perform them simultaneously in one network. To address this issue, we propose a novel Dual-path Decoder Network, which uniformly extracts occlusion information at the higher stages and then separates into two paths that recover the boundary and the occlusion orientation, respectively, at the lower stages. Besides, considering how the representation of occlusion orientation restricts its learning, we design a new orthogonal representation for occlusion orientation and propose an Orthogonal Orientation Regression loss, which removes the mismatch between the occlusion representation and its learning and further promotes occlusion orientation learning. Finally, we apply a multi-scale loss together with our proposed orientation regression loss to guide the boundary and orientation paths, respectively. Experiments demonstrate that our proposed method achieves state-of-the-art results on the PIOD and BSDS ownership datasets.
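The shared-then-split structure described above can be sketched abstractly: occlusion features are decoded jointly at the higher stage, then branched into a boundary path and an orientation path. The sketch below uses plain matrix layers as stand-ins; the real DDNet uses convolutional decoder stages, so every name and shape here is an illustrative assumption.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def dual_path_decoder(feat_high, w_shared, w_boundary, w_orient):
    """Structural sketch of the dual-path idea: one shared decoding
    step on high-stage features, then two separate task paths.
    All weights are placeholders standing in for conv decoder stages."""
    shared = relu(feat_high @ w_shared)   # shared high-stage occlusion decoding
    boundary = relu(shared @ w_boundary)  # path 1: occlusion boundary map
    orientation = shared @ w_orient       # path 2: occlusion orientation components
    return boundary, orientation
```

The split lets each path receive its own supervision (the multi-scale boundary loss and the orientation regression loss, respectively) while still sharing the higher-stage occlusion cues.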
