Haosong Zhang

Combined CNN Transformer Encoder for Enhanced Fine-grained Human Action Recognition

Aug 03, 2022
Mei Chee Leong, Haosong Zhang, Hui Li Tan, Liyuan Li, Joo Hwee Lim


Fine-grained action recognition is a challenging task in computer vision. Because fine-grained datasets exhibit small inter-class variations in both spatial and temporal space, a fine-grained action recognition model requires good temporal reasoning and discrimination of attribute action semantics. Leveraging the CNN's ability to capture high-level spatial-temporal feature representations and the Transformer's efficiency in modeling latent semantics and global dependencies, we investigate two frameworks that combine a CNN vision backbone with a Transformer encoder to enhance fine-grained action recognition: 1) a vision-based encoder that learns latent temporal semantics, and 2) a multi-modal video-text cross encoder that exploits additional text input and learns cross-modal associations between visual and textual semantics. Our experimental results show that both Transformer encoder frameworks effectively learn latent temporal semantics and cross-modality associations, improving recognition performance over the CNN-only vision model. Both proposed architectures achieve new state-of-the-art performance on the FineGym benchmark dataset.
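The vision-based variant described above (CNN features followed by a Transformer encoder over time) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the toy 2D CNN, feature dimension, layer counts and class count are all assumptions standing in for the actual backbone and FineGym label space.

```python
import torch
import torch.nn as nn

class CNNTransformerEncoder(nn.Module):
    """Sketch: a CNN extracts per-frame features, then a Transformer
    encoder models temporal dependencies across the frame sequence.
    All sizes here are illustrative, not the paper's configuration."""

    def __init__(self, feat_dim=128, n_heads=4, n_layers=2, n_classes=99):
        super().__init__()
        # Toy 2D CNN standing in for a pretrained spatial backbone.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        layer = nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, clip):                # clip: (B, T, 3, H, W)
        b, t = clip.shape[:2]
        x = self.cnn(clip.flatten(0, 1))    # fold time into batch
        x = x.flatten(1).view(b, t, -1)     # (B, T, feat_dim)
        x = self.encoder(x)                 # latent temporal semantics
        return self.head(x.mean(dim=1))     # clip-level logits

model = CNNTransformerEncoder()
logits = model(torch.randn(2, 8, 3, 64, 64))
print(logits.shape)  # torch.Size([2, 99])
```

The key design point the abstract highlights is the division of labor: the CNN handles spatial (and short-range temporal) features, while the encoder's self-attention captures long-range dependencies across the whole clip.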

* The Ninth Workshop on Fine-Grained Visual Categorization (FGVC9) @ CVPR2022 

Joint Learning On The Hierarchy Representation for Fine-Grained Human Action Recognition

Oct 12, 2021
Mei Chee Leong, Hui Li Tan, Haosong Zhang, Liyuan Li, Feng Lin, Joo Hwee Lim


Fine-grained human action recognition is a core research topic in computer vision. Inspired by the recently proposed hierarchy representation of fine-grained actions in FineGym and by the SlowFast network for action recognition, we propose a novel multi-task network that exploits the FineGym hierarchy representation for effective joint learning and prediction in fine-grained human action recognition. The multi-task network consists of three SlowOnly pathways with gradually increasing frame rates for the events, sets and elements of fine-grained actions, followed by our proposed integration layers for joint learning and prediction. It is a two-stage approach: the network first learns a deep feature representation at each hierarchical level, then performs feature encoding and fusion for multi-task learning. Our empirical results on the FineGym dataset set a new state of the art, with 91.80% Top-1 accuracy and 88.46% mean accuracy for element actions, which are 3.40% and 7.26% higher than the previous best results.
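The three-pathway idea above can be sketched as follows: one pathway per hierarchy level (event, set, element), fed frames at increasing rates, with concatenation as a stand-in for the paper's integration layers. Everything here is an illustrative assumption: the tiny 3D CNN replaces the actual SlowOnly backbones, and the class counts are placeholders, not FineGym's real label sizes.

```python
import torch
import torch.nn as nn

class ThreePathwayHierarchy(nn.Module):
    """Sketch: three pathways at increasing frame rates, one per
    hierarchy level, fused for joint multi-task prediction.
    Backbones and dimensions are illustrative assumptions."""

    def __init__(self, dim=64, n_event=10, n_set=15, n_element=99):
        super().__init__()
        def pathway():
            # Stand-in for a SlowOnly 3D-CNN pathway.
            return nn.Sequential(
                nn.Conv3d(3, dim, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            )
        self.event_path, self.set_path, self.elem_path = (
            pathway(), pathway(), pathway())
        # Simple concatenation fusion in place of the integration layers.
        self.event_head = nn.Linear(dim, n_event)
        self.set_head = nn.Linear(2 * dim, n_set)
        self.elem_head = nn.Linear(3 * dim, n_element)

    def forward(self, clip):                 # clip: (B, 3, T, H, W)
        f_event = self.event_path(clip[:, :, ::4])  # low frame rate
        f_set = self.set_path(clip[:, :, ::2])      # mid frame rate
        f_elem = self.elem_path(clip)               # full frame rate
        return (
            self.event_head(f_event),
            self.set_head(torch.cat([f_event, f_set], dim=1)),
            self.elem_head(torch.cat([f_event, f_set, f_elem], dim=1)),
        )

net = ThreePathwayHierarchy()
e, s, el = net(torch.randn(2, 3, 8, 32, 32))
print(e.shape, s.shape, el.shape)
```

The coarse-to-fine fusion mirrors the hierarchy: the element head, which needs the finest discrimination, sees features from all three temporal resolutions.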

* 2021 IEEE International Conference on Image Processing (ICIP)  
* Camera ready for IEEE ICIP 2021 