Qifei Wang

On-device Real-time Custom Hand Gesture Recognition

Sep 19, 2023
Esha Uboweja, David Tian, Qifei Wang, Yi-Chun Kuo, Joe Zou, Lu Wang, George Sung, Matthias Grundmann

Most existing hand gesture recognition (HGR) systems are limited to a predefined set of gestures. However, users and developers often want to recognize new, unseen gestures. This is challenging because plausible hand shapes are so diverse that developers cannot enumerate every gesture in a predefined list. In this paper, we present a user-friendly framework that lets users easily customize and deploy their own gesture recognition pipeline. Our framework provides a pre-trained single-hand embedding model that can be fine-tuned for custom gesture recognition. Users can perform gestures in front of a webcam to collect a small number of images per gesture. We also offer a low-code solution to train and deploy the custom gesture recognition model, making our framework easy to use for users with limited ML expertise, and a no-code web front-end that makes it even easier for users without any ML expertise to build and test the end-to-end pipeline. The resulting custom HGR model is then ready to run on-device in real-time scenarios by calling a simple function in our open-sourced model inference API, MediaPipe Tasks. The entire process takes only a few minutes.
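
As a hedged illustration of the deployment step, the sketch below runs a trained recognizer through the MediaPipe Tasks Python API on a single image. The model file and image path are placeholders, and exact option names may vary across MediaPipe releases.

```python
# Minimal sketch of on-device inference with MediaPipe Tasks (Python).
# 'gesture_recognizer.task' stands in for the exported custom model.
import mediapipe as mp
from mediapipe.tasks import python
from mediapipe.tasks.python import vision

options = vision.GestureRecognizerOptions(
    base_options=python.BaseOptions(model_asset_path="gesture_recognizer.task"),
    num_hands=1,
)
recognizer = vision.GestureRecognizer.create_from_options(options)

image = mp.Image.create_from_file("hand.jpg")  # placeholder input image
result = recognizer.recognize(image)
if result.gestures:
    top = result.gestures[0][0]  # best gesture for the first detected hand
    print(top.category_name, top.score)
```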

* 5 pages, 6 figures; Accepted to ICCV Workshop on Computer Vision for Metaverse, Paris, France, 2023 

Forward-Forward Training of an Optical Neural Network

May 30, 2023
Ilker Oguz, Junjie Ke, Qifei Wang, Feng Yang, Mustafa Yildirim, Niyazi Ulas Dinc, Jih-Liang Hsieh, Christophe Moser, Demetri Psaltis

Neural networks (NNs) have demonstrated remarkable capabilities in various tasks, but their computation-intensive nature demands faster and more energy-efficient hardware implementations. Optics-based platforms, using technologies such as silicon photonics and spatial light modulators, offer promising avenues for achieving this goal. However, training multiple trainable layers in tandem with these physical systems poses challenges, as the systems are difficult to fully characterize and describe with differentiable functions, hindering the use of the error backpropagation algorithm. The recently introduced Forward-Forward Algorithm (FFA) eliminates the need for perfect characterization of the learning system and shows promise for efficient training with large numbers of programmable parameters. The FFA does not require backpropagating an error signal to update the weights; instead, the weights are updated by sending information in only one direction. The local loss function for each set of trainable weights enables low-power analog hardware implementations without resorting to metaheuristic algorithms or reinforcement learning. In this paper, we present an experiment that uses multimode nonlinear wave propagation in an optical fiber to demonstrate the feasibility of the FFA approach in an optical system. The results show that incorporating optical transforms in multilayer NN architectures trained with the FFA can lead to performance improvements, even with a relatively small number of trainable weights. The proposed method offers a new path to the challenge of training optical NNs and provides insights into leveraging physical transformations to enhance NN performance.
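
To make the training rule concrete, here is a minimal NumPy sketch of a single Forward-Forward layer update in the style of Hinton's original FFA, with "goodness" taken as the mean squared activation. The optical transform from the paper is not modeled here, and the threshold and learning rate are illustrative assumptions.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def ff_layer_step(W, x, positive, theta=2.0, lr=0.03):
    """One local Forward-Forward update for a ReLU layer (no backprop
    across layers). W: (d_in, d_out), x: (batch, d_in)."""
    z = x @ W
    h = np.maximum(z, 0.0)
    g = np.mean(h ** 2, axis=1)            # per-sample "goodness"
    p = sigmoid(g - theta)
    dg = (p - 1.0) if positive else p      # push goodness up (pos) or down (neg)
    dh = dg[:, None] * 2.0 * h / h.shape[1]
    dz = dh * (z > 0.0)
    W -= lr * x.T @ dz / x.shape[0]        # purely local gradient step
    # Length-normalize so only the activity pattern feeds the next layer.
    return h / (np.linalg.norm(h, axis=1, keepdims=True) + 1e-8)
```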

MUSIQ: Multi-scale Image Quality Transformer

Aug 12, 2021
Junjie Ke, Qifei Wang, Yilin Wang, Peyman Milanfar, Feng Yang

Image quality assessment (IQA) is an important research topic for understanding and improving visual experience. The current state-of-the-art IQA methods are based on convolutional neural networks (CNNs), whose performance is often compromised by the fixed-shape constraint in batch training: input images are usually resized and cropped to a fixed shape, degrading image quality. To address this, we design a multi-scale image quality Transformer (MUSIQ) that processes native-resolution images with varying sizes and aspect ratios. With a multi-scale image representation, our proposed method can capture image quality at different granularities. Furthermore, a novel hash-based 2D spatial embedding and a scale embedding are proposed to support positional embedding in the multi-scale representation. Experimental results verify that our method achieves state-of-the-art performance on multiple large-scale IQA datasets, including PaQ-2-PiQ, SPAQ, and KonIQ-10k.
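
The hash-based 2D spatial embedding can be pictured as follows: each patch's pixel position is hashed onto a fixed G x G grid of learnable vectors, so images of any resolution share one embedding table. This NumPy sketch is our reading of the idea from the abstract; the grid size and embedding width are illustrative, and the real model learns the table jointly with the Transformer.

```python
import numpy as np

def hashed_spatial_embedding(centers, img_h, img_w, table):
    """Map patch centers in an arbitrary-resolution image onto a fixed
    G x G grid of embeddings (hash by relative position).
    centers: (N, 2) array of (row, col) pixel coordinates.
    table:   (G, G, D) embedding table, learnable in the real model."""
    G = table.shape[0]
    rows = np.clip(centers[:, 0] * G // img_h, 0, G - 1).astype(int)
    cols = np.clip(centers[:, 1] * G // img_w, 0, G - 1).astype(int)
    return table[rows, cols]  # (N, D), added to the patch tokens

# Example: two images of different sizes share the same table.
table = np.random.default_rng(0).normal(0.0, 0.02, (10, 10, 64))
emb = hashed_spatial_embedding(np.array([[16, 16], [240, 320]]), 256, 384, table)
```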

* ICCV 2021 

Adversarially Adaptive Normalization for Single Domain Generalization

Jun 01, 2021
Xinjie Fan, Qifei Wang, Junjie Ke, Feng Yang, Boqing Gong, Mingyuan Zhou

Single domain generalization aims to learn a model that performs well on many unseen domains using training data from only one domain. Existing works focus on adversarial domain augmentation (ADA) to improve the model's generalization capability, while the impact of the statistics of normalization layers on domain generalization remains under-investigated. In this paper, we propose a generic normalization approach, adaptive standardization and rescaling normalization (ASR-Norm), to complement the missing part in previous works. ASR-Norm learns both the standardization and rescaling statistics via neural networks, and can be viewed as a generic form of traditional normalization layers. When trained with ADA, the statistics in ASR-Norm learn to adapt to data from different domains, and hence improve the model's generalization performance across domains, especially on target domains with a large discrepancy from the source domain. Experimental results show that ASR-Norm brings consistent improvements to state-of-the-art ADA approaches of 1.6%, 2.7%, and 6.3% on average on the Digits, CIFAR-10-C, and PACS benchmarks, respectively. As a generic tool, the improvement introduced by ASR-Norm is agnostic to the choice of ADA method.
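
A rough sketch of the idea: instead of using observed statistics directly, the standardization and rescaling parameters are predicted from those statistics and gated against the observed values. This NumPy sketch uses per-channel instance statistics and single linear maps as stand-ins for the small networks; the gating form and network sizes are our assumptions, not the paper's exact design.

```python
import numpy as np

def asr_norm(x, params, eps=1e-5):
    """Adaptive standardization and rescaling (simplified sketch).
    x: (N, C, H, W). params holds per-channel weights/biases standing in
    for the tiny prediction networks, plus a gating logit."""
    mu_obs = x.mean(axis=(2, 3))                    # (N, C) observed mean
    sig_obs = x.std(axis=(2, 3)) + eps              # (N, C) observed std
    # Predicted statistics from the observed ones (stand-in for small nets).
    mu_hat = mu_obs * params["w_mu"] + params["b_mu"]
    sig_hat = np.abs(sig_obs * params["w_sig"] + params["b_sig"]) + eps
    lam = 1.0 / (1.0 + np.exp(-params["gate"]))     # learnable blend in [0, 1]
    mu = lam * mu_hat + (1.0 - lam) * mu_obs
    sig = lam * sig_hat + (1.0 - lam) * sig_obs
    x_std = (x - mu[..., None, None]) / sig[..., None, None]
    # Rescaling statistics are likewise predicted rather than fixed.
    gamma = sig_obs * params["w_g"] + params["b_g"]
    beta = mu_obs * params["w_b"] + params["b_b"]
    return gamma[..., None, None] * x_std + beta[..., None, None]
```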

* CVPR 2021 

Multi-path Neural Networks for On-device Multi-domain Visual Classification

Oct 10, 2020
Qifei Wang, Junjie Ke, Joshua Greaves, Grace Chu, Gabriel Bender, Luciano Sbaiz, Alec Go, Andrew Howard, Feng Yang, Ming-Hsuan Yang, Jeff Gilbert, Peyman Milanfar

Learning multiple domains/tasks with a single model is important for improving data efficiency and lowering inference cost for numerous vision tasks, especially on resource-constrained mobile devices. However, hand-crafting a multi-domain/task model can be both tedious and challenging. This paper proposes a novel approach to automatically learn a multi-path network for multi-domain visual classification on mobile devices. The proposed multi-path network is learned via neural architecture search by applying one reinforcement learning controller per domain to select the best path in a super-network created from a MobileNetV3-like search space. An adaptive balanced domain prioritization algorithm is proposed to balance the optimization of the joint model across multiple domains. The resulting multi-path model selectively shares parameters across domains in shared nodes while keeping domain-specific parameters in non-shared nodes along individual domain paths. This approach effectively reduces the total number of parameters and FLOPs, encouraging positive knowledge transfer while mitigating negative interference across domains. Extensive evaluations on the Visual Decathlon dataset demonstrate that the proposed multi-path model achieves state-of-the-art performance in terms of accuracy, model size, and FLOPs against other approaches using MobileNetV3-like architectures. Furthermore, the proposed method improves average accuracy over learning single-domain models individually, and reduces the total number of parameters and FLOPs by 78% and 32% respectively, compared to simply bundling single-domain models for multi-domain learning.
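
For intuition, the sketch below shows the kind of per-domain REINFORCE controller the abstract describes: each domain keeps its own logits over candidate blocks per super-network layer, samples a path, and reinforces it according to a validation reward. Candidate counts, reward, and baseline values are illustrative assumptions, not the paper's exact search setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def sample_path(logits):
    """logits: (layers, candidates). Returns one block choice per layer."""
    return [rng.choice(len(row), p=softmax(row)) for row in logits]

def reinforce_step(logits, path, reward, baseline, lr=0.1):
    """Policy-gradient update pushing the controller toward rewarded paths."""
    for layer, choice in enumerate(path):
        probs = softmax(logits[layer])
        grad = -probs
        grad[choice] += 1.0                 # d log pi / d logits
        logits[layer] += lr * (reward - baseline) * grad
    return logits

# One controller per domain over a shared 5-layer, 4-candidate super-network.
controllers = {d: np.zeros((5, 4)) for d in ["digits", "flowers", "aircraft"]}
path = sample_path(controllers["digits"])
# The reward would come from validating the sampled sub-network on that domain.
controllers["digits"] = reinforce_step(controllers["digits"], path,
                                       reward=0.8, baseline=0.5)
```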

* conference 

Learnable Cost Volume Using the Cayley Representation

Jul 21, 2020
Taihong Xiao, Jinwei Yuan, Deqing Sun, Qifei Wang, Xin-Yu Zhang, Kehan Xu, Ming-Hsuan Yang

Cost volume is an essential component of recent deep models for optical flow estimation and is usually constructed by calculating the inner product between two feature vectors. However, the standard inner product in the commonly-used cost volume may limit the representation capacity of flow models because it neglects the correlation among different channel dimensions and weighs each dimension equally. To address this issue, we propose a learnable cost volume (LCV) using an elliptical inner product, which generalizes the standard inner product by a positive definite kernel matrix. To guarantee its positive definiteness, we perform spectral decomposition on the kernel matrix and re-parameterize it via the Cayley representation. The proposed LCV is a lightweight module and can be easily plugged into existing models to replace the vanilla cost volume. Experimental results show that the LCV module not only improves the accuracy of state-of-the-art models on standard benchmarks, but also promotes their robustness against illumination changes, noise, and adversarial perturbations of the input signals.
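
The construction is easy to state in code: a positive definite kernel M is built from an orthogonal matrix (the Cayley transform of a skew-symmetric matrix) and positive eigenvalues, and the cost-volume entry becomes the elliptical inner product u^T M v. A minimal NumPy sketch, with sizes chosen for illustration:

```python
import numpy as np

def cayley_orthogonal(B):
    """Cayley transform: A = B - B^T is skew-symmetric, so
    Q = (I - A)(I + A)^{-1} is orthogonal (I + A is always invertible)."""
    A = B - B.T
    I = np.eye(B.shape[0])
    return (I - A) @ np.linalg.inv(I + A)

def elliptical_inner_product(u, v, B, log_eigs):
    """u^T M v with M = Q diag(exp(log_eigs)) Q^T, guaranteed symmetric
    positive definite; B and log_eigs are the unconstrained learnable
    parameters that replace the identity kernel of a vanilla cost volume."""
    Q = cayley_orthogonal(B)
    M = Q @ np.diag(np.exp(log_eigs)) @ Q.T
    return u @ M @ v

# Example on 8-dim feature vectors; with log_eigs = 0, M = I and the
# elliptical inner product reduces to the standard one.
rng = np.random.default_rng(0)
u, v = rng.normal(size=8), rng.normal(size=8)
cost = elliptical_inner_product(u, v, rng.normal(size=(8, 8)), np.zeros(8))
```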

* ECCV 2020 

A Survey of Visual Analysis of Human Motion and Its Applications

Aug 23, 2016
Qifei Wang

This paper summarizes recent progress in human motion analysis and its applications. We first review motion capture systems and representation models for human motion data. Next, we sketch advanced human motion data processing technologies, including motion data filtering, temporal alignment, and segmentation. The following parts survey state-of-the-art approaches to action recognition and dynamics measurement, since these are the two most active research areas in human motion analysis. The last part discusses emerging applications of human motion analysis in healthcare, human-robot interaction, security surveillance, virtual reality, and animation, and summarizes promising future research topics in the field.

* 5 pages, conference paper in VCIP 2016 

Remote Health Coaching System and Human Motion Data Analysis for Physical Therapy with Microsoft Kinect

Dec 21, 2015
Qifei Wang, Gregorij Kurillo, Ferda Ofli, Ruzena Bajcsy

This paper summarizes the recent progress we have made in computer vision technologies for physical therapy using accessible and affordable devices. We first introduce the remote health coaching system we built with Microsoft Kinect. Since the motion data captured by Kinect are noisy, we investigate their accuracy with respect to a high-accuracy motion capture system. We also propose an outlier removal algorithm based on the data distribution. To generate kinematic parameters from the noisy data captured by Kinect, we propose a kinematic filtering algorithm based on the Unscented Kalman Filter and a kinematic model of the human skeleton. The proposed algorithm obtains smooth kinematic parameters with reduced noise compared to those generated from the raw Kinect motion data.
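
As a hedged illustration of the filtering step, the sketch below smooths a single noisy joint angle with an Unscented Kalman Filter and a constant-velocity model using the filterpy library; the paper's full algorithm operates on a kinematic skeleton model, and the noise levels here are illustrative.

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

dt = 1.0 / 30.0                      # Kinect frame rate

def fx(x, dt):                       # constant-velocity model: [angle, rate]
    return np.array([x[0] + dt * x[1], x[1]])

def hx(x):                           # Kinect measures only the angle
    return x[:1]

points = MerweScaledSigmaPoints(n=2, alpha=0.1, beta=2.0, kappa=1.0)
ukf = UnscentedKalmanFilter(dim_x=2, dim_z=1, dt=dt, fx=fx, hx=hx, points=points)
ukf.R *= 0.05 ** 2                   # measurement noise (Kinect jitter), illustrative
ukf.Q *= 1e-4                        # process noise, illustrative

# Synthetic noisy joint-angle track standing in for Kinect data.
rng = np.random.default_rng(0)
t = np.arange(0.0, 3.0, dt)
noisy_angles = np.sin(t) + rng.normal(0.0, 0.05, t.size)

smoothed = []
for z in noisy_angles:
    ukf.predict()
    ukf.update(np.array([z]))
    smoothed.append(ukf.x[0])        # filtered angle estimate
```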

* 6 pages, Computer Vision for Accessible and Affordable HealthCare Workshop (ICCV2015) 

Computational Models for Multiview Dense Depth Maps of Dynamic Scene

Dec 15, 2015
Qifei Wang

This paper reviews recent progress in depth map generation for dynamic scenes and its corresponding computational models. It mainly covers the homogeneous ambiguity models in depth sensing, resolution models in depth processing, and consistency models in depth optimization. We also summarize future work in depth map generation.

* 4 pages, IEEE COMSOC MMTC E-Letter 2015 

Evaluation of Pose Tracking Accuracy in the First and Second Generations of Microsoft Kinect

Dec 13, 2015
Qifei Wang, Gregorij Kurillo, Ferda Ofli, Ruzena Bajcsy

The Microsoft Kinect camera and its skeletal tracking capabilities have been embraced by many researchers and commercial developers in various applications of real-time human movement analysis. In this paper, we evaluate the accuracy of the human kinematic motion data in the first and second generations of the Kinect system, and compare the results with an optical motion capture system. We collected motion data for 12 exercises from 10 subjects and three different viewpoints. We report the accuracy of joint localization and bone-length estimation of Kinect skeletons in comparison to the motion capture system. We also analyze the distribution of joint localization offsets by fitting a mixture of Gaussian and uniform distribution models to determine the outliers in the Kinect motion data. Our analysis shows that, overall, Kinect 2 tracks human pose more robustly and accurately than Kinect 1.
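
The outlier analysis can be reproduced in a few lines: fit a two-component mixture, a Gaussian for inlier offsets and a uniform for outliers, with EM, and flag samples whose Gaussian responsibility is low. A minimal 1-D NumPy sketch under those assumptions (the paper's exact initialization and dimensionality may differ):

```python
import numpy as np

def gauss_uniform_em(x, iters=100):
    """EM for p(x) = pi * N(x; mu, sig^2) + (1 - pi) * U(min, max).
    Returns fitted parameters and each sample's inlier responsibility."""
    u = 1.0 / (x.max() - x.min() + 1e-12)      # uniform (outlier) density
    mu, sig, pi = x.mean(), x.std(), 0.9       # init: assume 90% inliers
    for _ in range(iters):
        gauss = pi * np.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
        r = gauss / (gauss + (1.0 - pi) * u)   # E-step: inlier responsibility
        mu = np.sum(r * x) / r.sum()           # M-step: weighted mean
        sig = np.sqrt(np.sum(r * (x - mu) ** 2) / r.sum()) + 1e-9
        pi = r.mean()
    return mu, sig, pi, r

# Offsets with 10% gross outliers; flag samples with low responsibility.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 0.01, 900), rng.uniform(-0.3, 0.3, 100)])
mu, sig, pi, r = gauss_uniform_em(x)
outliers = x[r < 0.5]
```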

* 10 pages, IEEE International Conference on Healthcare Informatics 2015 (ICHI 2015) 