
Qing Cai


CLIP-Hand3D: Exploiting 3D Hand Pose Estimation via Context-Aware Prompting

Sep 28, 2023
Shaoxiang Guo, Qing Cai, Lin Qi, Junyu Dong

Contrastive Language-Image Pre-training (CLIP) has begun to emerge in many computer vision tasks and has achieved promising performance. However, it remains underexplored whether CLIP can be generalized to 3D hand pose estimation, as bridging text prompts with pose-aware features presents significant challenges due to the discrete nature of joint positions in 3D space. In this paper, we make one of the first attempts to propose a novel 3D hand pose estimator from monocular images, dubbed CLIP-Hand3D, which successfully bridges the gap between text prompts and the irregular, fine-grained pose distribution. In particular, the distribution order of hand joints along different directions of 3D space is derived from pose labels, forming corresponding text prompts that are subsequently encoded into text representations. Simultaneously, the 21 hand joints in 3D space are retrieved, and their spatial distribution (along the x, y, and z axes) is encoded to form pose-aware features. We then maximize the semantic consistency of each pose-text feature pair following a CLIP-based contrastive learning paradigm. Furthermore, a coarse-to-fine mesh regressor is designed that effectively queries joint-aware cues from the feature pyramid. Extensive experiments on several public hand benchmarks show that the proposed model attains a significantly faster inference speed while achieving state-of-the-art performance compared to methods using backbones of a similar scale.
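To make the contrastive alignment concrete, the following is a minimal sketch (in PyTorch, not the authors' implementation) of a symmetric CLIP-style loss that pulls together matched pose-aware and text-prompt embeddings; the tensor names and the temperature value are illustrative assumptions.

# Minimal sketch (not the authors' code): a symmetric CLIP-style contrastive
# loss that aligns pose-aware features with text-prompt features.
import torch
import torch.nn.functional as F

def clip_contrastive_loss(pose_feat, text_feat, temperature=0.07):
    """pose_feat, text_feat: (batch, dim) embeddings of paired samples."""
    # L2-normalize both modalities so the dot product is a cosine similarity.
    pose_feat = F.normalize(pose_feat, dim=-1)
    text_feat = F.normalize(text_feat, dim=-1)

    # Similarity matrix: entry (i, j) compares pose i with text prompt j.
    logits = pose_feat @ text_feat.t() / temperature

    # Matching pose-text pairs lie on the diagonal.
    targets = torch.arange(pose_feat.size(0), device=pose_feat.device)

    # Symmetric cross-entropy: pose-to-text and text-to-pose.
    loss_p2t = F.cross_entropy(logits, targets)
    loss_t2p = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_p2t + loss_t2p)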

* Accepted in Proceedings of the 31st ACM International Conference on Multimedia (MM '23)

HIPA: Hierarchical Patch Transformer for Single Image Super Resolution

Mar 19, 2022
Qing Cai, Yiming Qian, Jinxing Li, Jun Lv, Yee-Hong Yang, Feng Wu, David Zhang

Transformer-based architectures have begun to emerge in single image super resolution (SISR) and have achieved promising performance. Most existing Vision Transformers divide images into the same number of patches with a fixed size, which may not be optimal for restoring patches with different levels of texture richness. This paper presents HIPA, a novel Transformer architecture that progressively recovers the high-resolution image using a hierarchical patch partition. Specifically, we build a cascaded model that processes an input image in multiple stages, starting with tokens of small patch size and gradually merging them up to the full resolution. Such a hierarchical patch mechanism not only explicitly enables feature aggregation at multiple resolutions but also adaptively learns patch-aware features for different image regions, e.g., using smaller patches for areas with fine details and larger patches for textureless regions. Meanwhile, a new attention-based position encoding scheme for the Transformer is proposed that assigns different weights to different tokens, letting the network focus on the tokens that deserve more attention; to the best of our knowledge, this is the first such scheme. Furthermore, we also propose a new multi-receptive-field attention module to enlarge the convolutional receptive field across different branches. Experimental results on several public datasets demonstrate the superior performance of the proposed HIPA over previous methods, both quantitatively and qualitatively.
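As an illustration of the attention-based positional weighting described above, here is a minimal sketch (in PyTorch, assumed rather than taken from the paper) of a module that adds learned positional embeddings and re-weights tokens with attention-derived scores; the module name and the exact scoring function are hypothetical.

# Minimal sketch (assumed, not the paper's implementation): an attention-based
# positional weighting that lets the network emphasize some tokens over others.
import torch
import torch.nn as nn

class AttentivePositionEncoding(nn.Module):
    def __init__(self, num_tokens, dim):
        super().__init__()
        self.pos_embed = nn.Parameter(torch.zeros(1, num_tokens, dim))
        self.score = nn.Linear(dim, 1)  # scores each token from its content plus position

    def forward(self, tokens):
        # tokens: (batch, num_tokens, dim)
        x = tokens + self.pos_embed
        weights = torch.softmax(self.score(x), dim=1)  # weights sum to 1 across tokens
        return x * weights                             # re-weighted tokens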

An Online RFID Localization in the Manufacturing Shopfloor

May 20, 2018
Andri Ashfahani, Mahardhika Pratama, Edwin Lughofer, Qing Cai, Huang Sheng

Radio Frequency Identification (RFID) technology has gained popularity because it is cheap and easy to deploy. On the manufacturing shopfloor, it can be used to track the location of manufacturing objects to achieve better efficiency. The underlying challenge of localization lies in the non-stationary characteristics of the manufacturing shopfloor, which call for an adaptive, lifelong learning strategy in order to arrive at accurate localization results. This paper presents an evolving model based on a novel evolving intelligent system, namely the evolving Type-2 Quantum Fuzzy Neural Network (eT2QFNN), which features an interval type-2 quantum fuzzy set with uncertain jump positions. The quantum fuzzy set possesses a graded membership degree, which enables better identification of overlaps between classes. The eT2QFNN works fully in evolving mode, where all parameters, including the number of rules, are automatically adjusted and generated on the fly. The parameter adjustment scenario relies on the decoupled extended Kalman filter method. Our numerical study shows that eT2QFNN delivers accuracy comparable to that of state-of-the-art algorithms.
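To illustrate the interval type-2 quantum fuzzy set with uncertain jump positions, the snippet below is a minimal NumPy sketch of one assumed form of such a membership function; the sigmoid-sum construction, the uncertainty width, and the slope are illustrative assumptions, not the paper's exact equations.

# Minimal sketch (an assumed form, not the exact eT2QFNN equations): an interval
# type-2 "quantum" membership built from a sum of sigmoid steps with uncertain
# jump positions, returning a lower and an upper membership grade.
import numpy as np

def quantum_membership(x, jumps, slope=2.0):
    """Graded membership: average of sigmoid steps placed at the jump positions."""
    return np.mean(1.0 / (1.0 + np.exp(-slope * (x - np.asarray(jumps)))))

def interval_type2_membership(x, jumps, uncertainty=0.1, slope=2.0):
    jumps = np.asarray(jumps, dtype=float)
    lower = quantum_membership(x, jumps + uncertainty, slope)  # jumps shifted right -> smaller grade
    upper = quantum_membership(x, jumps - uncertainty, slope)  # jumps shifted left  -> larger grade
    return lower, upper

# Example: membership interval of a normalized RSSI reading for one fuzzy rule.
lo, hi = interval_type2_membership(x=0.3, jumps=[-0.5, 0.0, 0.5])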

* Contains 23 pages and 5 figures; to be submitted to the Springer book "Predictive Maintenance in Dynamic Systems"

Design, Implementation and Simulation of a Cloud Computing System for Enhancing Real-time Video Services by using VANET and Onboard Navigation Systems

Nov 25, 2014
Karim Hammoudi, Nabil Ajam, Mohamed Kasraoui, Fadi Dornaika, Karan Radhakrishnan, Karthik Bandi, Qing Cai, Sai Liu

In this paper, we propose a design for a novel, experimental cloud computing system. The proposed system aims at enhancing the computational, communication, and analytical capabilities of road navigation services by merging several independent technologies, namely vision-based embedded navigation systems, prominent Cloud Computing Systems (CCSs), and Vehicular Ad-hoc NETworks (VANETs). This work presents our initial investigations by describing the design of a global, generic system. The designed system has been tested with various scenarios of video-based road services. Moreover, the associated architecture has been implemented on a small-scale simulator of an in-vehicle embedded system. The implemented architecture has been evaluated on a simulated road service that aids the police agency. The goal of this service is to recognize and track searched individuals and vehicles through a real-time monitoring system remotely connected to moving cars. The presented work demonstrates the potential of our system for efficiently enhancing and diversifying real-time video services in road environments.
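As a rough illustration of the police-aid service described above, the snippet below is a hypothetical Python sketch of a cloud-side routine that matches vehicle reports streamed over the VANET against a watchlist; all names and data fields are invented for illustration and do not come from the paper.

# Minimal illustrative sketch (hypothetical, not the paper's implementation):
# a cloud-side routine that checks reports from connected vehicles against a
# police watchlist and returns a tracking alert for matches.
from dataclasses import dataclass

@dataclass
class VehicleReport:
    plate: str          # plate read by the in-vehicle vision system
    gps: tuple          # (latitude, longitude) from the onboard navigation unit
    timestamp: float

WATCHLIST = {"AB-123-CD", "XY-987-ZT"}  # plates searched by the police agency (example values)

def process_report(report: VehicleReport):
    """Return a tracking alert if the reported plate is on the watchlist."""
    if report.plate in WATCHLIST:
        return {"alert": True, "plate": report.plate,
                "location": report.gps, "time": report.timestamp}
    return {"alert": False}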

* paper accepted for publication in the proceedings of the "17ème Colloque Compression et Représentation des Signaux Audiovisuels" (CORESA), 5p., Reims, France, 2014. (preprint)