Zhuo Su

Realistic Full-Body Tracking from Sparse Observations via Joint-Level Modeling

Aug 17, 2023
Xiaozheng Zheng, Zhuo Su, Chao Wen, Zhou Xue, Xiaojie Jin

To bridge the physical and virtual worlds for rapidly developing VR/AR applications, the ability to realistically drive 3D full-body avatars is of great significance. Although real-time body tracking with only head-mounted displays (HMDs) and hand controllers is heavily under-constrained, a carefully designed end-to-end neural network has great potential to solve the problem by learning from large-scale motion data. To this end, we propose a two-stage framework that obtains accurate and smooth full-body motions from the three tracking signals of the head and hands only. Our framework explicitly models joint-level features in the first stage and uses them as spatiotemporal tokens in alternating spatial and temporal transformer blocks to capture joint-level correlations in the second stage. Furthermore, we design a set of loss terms to constrain this task, which has a high degree of freedom, so that we can fully exploit the potential of our joint-level modeling. With extensive experiments on the AMASS motion dataset and real-captured data, we validate the effectiveness of our designs and show that our method produces more accurate and smoother motion than existing approaches.
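
As a rough illustration of the joint-level modeling described above, the sketch below (assuming PyTorch; module names, token shapes, and dimensions are hypothetical and not the authors' released code) shows how alternating spatial and temporal attention over joint-level tokens could be arranged:

```python
# Minimal sketch: alternate spatial attention (across joints within a frame)
# and temporal attention (across frames for each joint) over joint-level tokens.
import torch
import torch.nn as nn

class AlternatingSTBlock(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.spatial = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.temporal = nn.TransformerEncoderLayer(dim, heads, batch_first=True)

    def forward(self, x):                 # x: (batch, frames, joints, dim)
        b, t, j, d = x.shape
        # Spatial attention: joints attend to each other within each frame.
        x = self.spatial(x.reshape(b * t, j, d)).reshape(b, t, j, d)
        # Temporal attention: each joint attends across time.
        x = x.permute(0, 2, 1, 3).reshape(b * j, t, d)
        x = self.temporal(x).reshape(b, j, t, d).permute(0, 2, 1, 3)
        return x

tokens = torch.randn(2, 40, 22, 256)      # hypothetical joint-level tokens
out = AlternatingSTBlock()(tokens)        # same shape, joint correlations mixed
```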

* Accepted to ICCV 2023. Project page: https://zxz267.github.io/AvatarJLM 

Boosting Convolutional Neural Networks with Middle Spectrum Grouped Convolution

Apr 13, 2023
Zhuo Su, Jiehua Zhang, Tianpeng Liu, Zhen Liu, Shuanghui Zhang, Matti Pietikäinen, Li Liu

This paper proposes a novel module called middle spectrum grouped convolution (MSGC) for efficient deep convolutional neural networks (DCNNs) built on the mechanism of grouped convolution. It explores the broad "middle spectrum" area between channel pruning and conventional grouped convolution. Compared with channel pruning, MSGC retains most of the information in the input feature maps thanks to the group mechanism; compared with grouped convolution, MSGC benefits from learnability, the core of channel pruning, for constructing its group topology, leading to a better channel division. The middle spectrum area is unfolded along four dimensions: group-wise, layer-wise, sample-wise, and attention-wise, making it possible to reveal more powerful and interpretable structures. As a result, the proposed module acts as a booster that reduces the computational cost of the host backbone for general image recognition while even improving predictive accuracy. For example, in experiments on the ImageNet dataset for image classification, MSGC reduces the multiply-accumulates (MACs) of ResNet-18 and ResNet-50 by half while still increasing top-1 accuracy by more than 1%. With a 35% reduction in MACs, MSGC also increases the top-1 accuracy of the MobileNetV2 backbone. Results on the MS COCO dataset for object detection show similar observations. Our code and trained models are available at https://github.com/hellozhuo/msgc.
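
To make the notion of a learnable group topology concrete, here is a toy sketch (PyTorch; not the released MSGC module, which constructs its groups along the four dimensions listed above) that simply gates channels per sample before a grouped convolution:

```python
# Toy sketch: combine a grouped convolution with learnable, sample-wise channel
# gates, illustrating the general idea of a learnable group topology.
# Illustrative only; see https://github.com/hellozhuo/msgc for the real module.
import torch
import torch.nn as nn

class GatedGroupedConv(nn.Module):
    def __init__(self, channels=64, groups=4):
        super().__init__()
        # Sample-wise gates predicted from globally pooled features.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.conv = nn.Conv2d(channels, channels, kernel_size=3,
                              padding=1, groups=groups)

    def forward(self, x):
        # Gates decide, per sample, how strongly each channel participates
        # in its group before the grouped convolution is applied.
        return self.conv(x * self.gate(x))

y = GatedGroupedConv()(torch.randn(2, 64, 32, 32))  # -> (2, 64, 32, 32)
```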

* 13 pages, 11 figures, submitted to IEEE Transactions on xxx

Instant-NVR: Instant Neural Volumetric Rendering for Human-object Interactions from Monocular RGBD Stream

Apr 06, 2023
Yuheng Jiang, Kaixin Yao, Zhuo Su, Zhehao Shen, Haimin Luo, Lan Xu

Convenient 4D modeling of human-object interactions is essential for numerous applications. However, monocular tracking and rendering of complex interaction scenarios remain challenging. In this paper, we propose Instant-NVR, a neural approach for instant volumetric human-object tracking and rendering using a single RGBD camera. It bridges traditional non-rigid tracking with recent instant radiance field techniques via a multi-thread tracking-rendering mechanism. In the tracking front-end, we adopt a robust human-object capture scheme to provide sufficient motion priors. We further introduce a separated instant neural representation with a novel hybrid deformation module for the interacting scene. We also provide an on-the-fly reconstruction scheme of the dynamic/static radiance fields via efficient motion-prior searching. Moreover, we introduce an online key frame selection scheme and a rendering-aware refinement strategy to significantly improve the appearance details for online novel-view synthesis. Extensive experiments demonstrate the effectiveness and efficiency of our approach for the instant generation of human-object radiance fields on the fly, notably achieving real-time photo-realistic novel view synthesis under complex human-object interactions.
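
The multi-thread tracking-rendering mechanism can be pictured as a producer-consumer loop; the sketch below is only a generic illustration with placeholder stubs (the capture, tracking, and rendering calls are not the paper's implementation):

```python
# Generic producer-consumer sketch: a tracking front-end feeds motion priors
# to a rendering back-end running in a separate thread.
import queue
import threading

motion_queue: "queue.Queue" = queue.Queue(maxsize=8)

def tracking_front_end(num_frames: int) -> None:
    for frame_id in range(num_frames):
        # Placeholders for RGBD capture and human-object tracking.
        motion_prior = {"frame": frame_id, "pose": None}
        motion_queue.put(motion_prior)      # hand motion priors to the renderer
    motion_queue.put(None)                  # sentinel: stream finished

def rendering_back_end() -> None:
    while True:
        prior = motion_queue.get()
        if prior is None:
            break
        # Placeholders for radiance-field update and novel-view rendering.
        print(f"rendered frame {prior['frame']}")

t = threading.Thread(target=tracking_front_end, args=(5,))
r = threading.Thread(target=rendering_back_end)
t.start(); r.start(); t.join(); r.join()
```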

* CVPR 2023 

From Local Binary Patterns to Pixel Difference Networks for Efficient Visual Representation Learning

Mar 15, 2023
Zhuo Su, Matti Pietikäinen, Li Liu

LBP is a successful hand-crafted feature descriptor in computer vision. However, in the deep learning era, deep neural networks, especially convolutional neural networks (CNNs), can automatically learn powerful task-aware features that are more discriminative and have higher representational capacity. To some extent, such hand-crafted features can be safely ignored when designing deep computer vision models. Nevertheless, due to LBP's preferable properties for visual representation learning, an interesting topic has arisen: exploring the value of LBP in enhancing modern deep models in terms of efficiency, memory consumption, and predictive performance. In this paper, we provide a comprehensive review of such efforts, which aim to incorporate the LBP mechanism into the design of CNN modules to make deep models stronger. Looking back at what has been achieved so far, the paper discusses open challenges and directions for future research.
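
For reference, the classic 8-neighbour LBP code that these CNN modules draw inspiration from can be computed in a few lines (NumPy; illustrative only):

```python
# Compact reference implementation of the classic 8-neighbour LBP descriptor.
import numpy as np

def lbp_8(image: np.ndarray) -> np.ndarray:
    """Return the 8-bit LBP code of each interior pixel of a grayscale image."""
    c = image[1:-1, 1:-1]                       # centre pixels
    # Clockwise neighbours starting from the top-left pixel.
    neighbours = [image[0:-2, 0:-2], image[0:-2, 1:-1], image[0:-2, 2:],
                  image[1:-1, 2:],   image[2:,   2:],   image[2:,   1:-1],
                  image[2:,   0:-2], image[1:-1, 0:-2]]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        code |= ((n >= c).astype(np.uint8) << bit)   # threshold against centre
    return code

img = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
print(lbp_8(img))   # (6, 6) array of LBP codes
```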

* A small survey paper on Local Binary Pattern (LBP) inspired Deep Learning networks, accepted in SCIA 2023 (Scandinavian Conference on Image Analysis) 

Boosting Binary Neural Networks via Dynamic Thresholds Learning

Nov 04, 2022
Jiehua Zhang, Xueyang Zhang, Zhuo Su, Zitong Yu, Yanghe Feng, Xin Lu, Matti Pietikäinen, Li Liu

Developing lightweight Deep Convolutional Neural Networks (DCNNs) and Vision Transformers (ViTs) has become one of the focuses in vision research, since low computational cost is essential for deploying vision models on edge devices. Recently, researchers have explored highly computationally efficient Binary Neural Networks (BNNs) by binarizing the weights and activations of full-precision neural networks. However, the binarization process leads to an enormous accuracy gap between a BNN and its full-precision counterpart. One of the primary reasons is that the Sign function with predefined or learned static thresholds limits the representation capacity of binarized architectures, since single-threshold binarization fails to utilize activation distributions. To overcome this issue, we introduce the statistics of channel information into explicit threshold learning for the Sign function, dubbed DySign, to generate varying thresholds based on the input distribution. DySign is a straightforward method to reduce information loss and boost the representative capacity of BNNs, and it can be flexibly applied to both DCNNs and ViTs (i.e., DyBCNN and DyBinaryCCT) to achieve promising performance improvements, as shown in our extensive experiments. For DCNNs, DyBCNNs based on two backbones (MobileNetV1 and ResNet18) achieve 71.2% and 67.4% top-1 accuracy on the ImageNet dataset, outperforming the baselines by a large margin (1.8% and 1.5%, respectively). For ViTs, DyBinaryCCT demonstrates the superiority of the convolutional embedding layer in fully binarized ViTs and achieves 56.1% on the ImageNet dataset, which is nearly 9% higher than the baseline.
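
A minimal sketch of the dynamic-threshold idea follows (PyTorch; the layer names and pooling-based statistics are illustrative assumptions, and the straight-through estimator needed for training is omitted):

```python
# Per-channel thresholds for the sign function are predicted from channel
# statistics of the input, instead of using a fixed or static threshold.
import torch
import torch.nn as nn

class DynamicSign(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Predict one threshold per channel from globally pooled statistics.
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):                        # x: (batch, channels, H, W)
        stats = x.mean(dim=(2, 3))               # channel-wise statistics
        thresholds = self.mlp(stats)[:, :, None, None]
        # Binarize against input-dependent thresholds instead of a fixed zero.
        return torch.sign(x - thresholds)

y = DynamicSign(64)(torch.randn(2, 64, 8, 8))    # values in {-1, 0, +1}
```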

Learning Variational Motion Prior for Video-based Motion Capture

Oct 28, 2022
Xin Chen, Zhuo Su, Lingbo Yang, Pei Cheng, Lan Xu, Bin Fu, Gang Yu

Motion capture from a monocular video is fundamental and crucial for us humans to naturally experience and interact with each other in Virtual Reality (VR) and Augmented Reality (AR). However, existing methods still struggle with challenging cases involving self-occlusion and complex poses due to the lack of effective motion prior modeling. In this paper, we present a novel variational motion prior (VMP) learning approach for video-based motion capture to resolve the above issue. Instead of directly building the correspondence between the video and motion domains, we propose to learn a generic latent space that captures the prior distribution of all natural motions, which serves as the basis for subsequent video-based motion capture tasks. To improve the generalization capacity of the prior space, we propose a transformer-based variational autoencoder pretrained on marker-based 3D mocap data, with a novel style-mapping block to boost the generation quality. Afterward, a separate video encoder is attached to the pretrained motion generator for end-to-end fine-tuning on task-specific video datasets. Compared to existing motion prior models, our VMP model serves as a motion rectifier that can effectively reduce temporal jittering and failure modes in frame-wise pose estimation, leading to temporally stable and visually realistic motion capture results. Furthermore, our VMP-based framework models motion at the sequence level and can directly generate motion clips in a single forward pass, achieving real-time motion capture during inference. Extensive experiments on both public datasets and in-the-wild videos demonstrate the efficacy and generalization capability of our framework.
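
A bare-bones skeleton of a transformer-based motion VAE in this spirit might look as follows (PyTorch; all dimensions and names are illustrative, and the style-mapping block and video encoder are omitted):

```python
# Skeleton of a transformer-based motion VAE: a pose sequence is encoded into a
# latent distribution and decoded back into a motion clip in one forward pass.
import torch
import torch.nn as nn

class MotionVAE(nn.Module):
    def __init__(self, pose_dim=72, dim=256, latent=64, heads=4, layers=2):
        super().__init__()
        self.embed = nn.Linear(pose_dim, dim)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, heads, batch_first=True), layers)
        self.to_mu = nn.Linear(dim, latent)
        self.to_logvar = nn.Linear(dim, latent)
        self.decoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, heads, batch_first=True), layers)
        self.from_latent = nn.Linear(latent, dim)
        self.to_pose = nn.Linear(dim, pose_dim)

    def forward(self, motion):                   # motion: (batch, frames, pose_dim)
        h = self.encoder(self.embed(motion)).mean(dim=1)        # pool over time
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        h_dec = self.from_latent(z)[:, None, :].expand(-1, motion.shape[1], -1)
        return self.to_pose(self.decoder(h_dec.contiguous())), mu, logvar

recon, mu, logvar = MotionVAE()(torch.randn(2, 60, 72))
```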

* 9 pages, 9 figures 

SVNet: Where SO(3) Equivariance Meets Binarization on Point Cloud Representation

Sep 20, 2022
Zhuo Su, Max Welling, Matti Pietikäinen, Li Liu

Efficiency and robustness are increasingly needed for applications on 3D point clouds, with the ubiquitous use of edge devices in scenarios like autonomous driving and robotics, which often demand real-time and reliable responses. This paper tackles the challenge by designing a general framework for constructing 3D learning architectures with SO(3) equivariance and network binarization. However, a naive combination of equivariant networks and binarization causes either sub-optimal computational efficiency or geometric ambiguity. We propose to place both scalar and vector features in our networks to avoid both cases. Precisely, the presence of scalar features makes the major part of the network binarizable, while vector features serve to retain rich structural information and ensure SO(3) equivariance. The proposed approach can be applied to general backbones like PointNet and DGCNN. Meanwhile, experiments on ModelNet40, ShapeNet, and the real-world dataset ScanObjectNN demonstrate that the method achieves a great trade-off between efficiency, rotation robustness, and accuracy. The code is available at https://github.com/zhuoinoulu/svnet.
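
The scalar/vector split can be illustrated with a toy layer (PyTorch; not the released SVNet code): vector features are mixed only by bias-free linear combinations, which commute with rotations, while scalar features built from rotation-invariant norms are the part that gets binarized:

```python
# Toy scalar/vector layer: the vector branch stays SO(3)-equivariant, the
# scalar branch is binarized. Illustrative only.
import torch
import torch.nn as nn

class ScalarVectorLayer(nn.Module):
    def __init__(self, v_in=16, v_out=16, s_in=16, s_out=16):
        super().__init__()
        self.v_mix = nn.Linear(v_in, v_out, bias=False)  # no bias -> equivariant
        self.s_fc = nn.Linear(s_in + v_in, s_out)        # binarizable scalar branch

    def forward(self, v, s):
        # v: (batch, v_in, 3) vector features, s: (batch, s_in) scalar features
        v_out = self.v_mix(v.transpose(1, 2)).transpose(1, 2)  # channel mixing only
        invariants = v.norm(dim=-1)                            # rotation-invariant norms
        s_out = torch.sign(self.s_fc(torch.cat([s, invariants], dim=-1)))
        return v_out, s_out

v, s = torch.randn(2, 16, 3), torch.randn(2, 16)
v2, s2 = ScalarVectorLayer()(v, s)
```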

* Accepted in 3DV 2022. 11 pages including the appendix 

Median Pixel Difference Convolutional Network for Robust Face Recognition

May 30, 2022
Jiehua Zhang, Zhuo Su, Li Liu

Face recognition is one of the most active tasks in computer vision and has been widely used in the real world. With the great advances made in convolutional neural networks (CNNs), many face recognition algorithms have achieved high accuracy on various face datasets. However, existing CNN-based face recognition algorithms are vulnerable to noise: noise-corrupted image patterns can lead to false activations, significantly decreasing face recognition accuracy in noisy situations. To equip CNNs with built-in robustness to noise of different levels, we propose a Median Pixel Difference Convolutional Network (MeDiNet) that replaces some traditional convolutional layers with the proposed Median Pixel Difference Convolutional (MeDiConv) layer. MeDiNet integrates the idea of traditional multiscale median filtering with deep CNNs. MeDiNet is tested on four face datasets (LFW, CA-LFW, CP-LFW, and YTF) with versatile settings on blur kernels, noise intensities, scales, and JPEG quality factors. Extensive experiments show that MeDiNet can effectively remove noisy pixels in the feature maps and suppress the negative impact of noise, achieving only limited accuracy loss under these practical noise conditions compared with a standard CNN under clean conditions.
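
A minimal sketch of a median pixel difference convolution (PyTorch; illustrative, not the MeDiConv release) replaces each pixel in a local patch with its difference from the patch median before applying the convolution weights:

```python
# Sketch: convolve over differences between patch pixels and the patch median,
# combining median filtering with a learnable convolution.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MedianDiffConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.k = k
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch * k * k) * 0.01)

    def forward(self, x):                                  # x: (B, C, H, W)
        b, c, h, w = x.shape
        patches = F.unfold(x, self.k, padding=self.k // 2)  # (B, C*k*k, H*W)
        patches = patches.view(b, c, self.k * self.k, h * w)
        med = patches.median(dim=2, keepdim=True).values     # patch medians
        diffs = (patches - med).view(b, c * self.k * self.k, h * w)
        out = self.weight @ diffs                            # conv on differences
        return out.view(b, -1, h, w)

y = MedianDiffConv2d(3, 8)(torch.randn(2, 3, 16, 16))        # -> (2, 8, 16, 16)
```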

* Accepted by BMVC2021 

NeuralHOFusion: Neural Volumetric Rendering under Human-object Interactions

Mar 28, 2022
Yuheng Jiang, Suyi Jiang, Guoxing Sun, Zhuo Su, Kaiwen Guo, Minye Wu, Jingyi Yu, Lan Xu

4D modeling of human-object interactions is critical for numerous applications. However, efficient volumetric capture and rendering of complex interaction scenarios, especially from sparse inputs, remain challenging. In this paper, we propose NeuralHOFusion, a neural approach for volumetric human-object capture and rendering using sparse consumer RGBD sensors. It marries traditional non-rigid fusion with recent advances in neural implicit modeling and blending, where the captured humans and objects are disentangled layer-wise. For geometry modeling, we propose a neural implicit inference scheme with non-rigid key-volume fusion, as well as a template-aided robust object tracking pipeline. Our scheme enables detailed and complete geometry generation under complex interactions and occlusions. Moreover, we introduce a layer-wise human-object texture rendering scheme, which combines volumetric and image-based rendering in both the spatial and temporal domains to obtain photo-realistic results. Extensive experiments demonstrate the effectiveness and efficiency of our approach in synthesizing photo-realistic free-view results under complex human-object interactions.
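
The layer-wise rendering idea can be pictured with a toy compositing routine (NumPy; a strong simplification of the paper's combined volumetric and image-based rendering) that blends separately rendered human and object RGBA layers by depth:

```python
# Toy front-to-back alpha compositing of separately rendered scene layers.
import numpy as np

def composite_layers(layers):
    """layers: list of (rgb HxWx3, alpha HxW, depth HxW); returns blended rgb."""
    h, w, _ = layers[0][0].shape
    out = np.zeros((h, w, 3))
    acc_alpha = np.zeros((h, w, 1))
    # Front-to-back order is approximated here by each layer's mean depth.
    for rgb, alpha, depth in sorted(layers, key=lambda l: l[2].mean()):
        a = alpha[..., None] * (1.0 - acc_alpha)   # remaining transmittance
        out += a * rgb
        acc_alpha += a
    return out

human = (np.random.rand(4, 4, 3), np.random.rand(4, 4), np.full((4, 4), 1.0))
obj = (np.random.rand(4, 4, 3), np.random.rand(4, 4), np.full((4, 4), 2.0))
print(composite_layers([human, obj]).shape)        # (4, 4, 3)
```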

NeuralFusion: Neural Volumetric Rendering under Human-object Interactions

Feb 28, 2022
Yuheng Jiang, Suyi Jiang, Guoxing Sun, Zhuo Su, Kaiwen Guo, Minye Wu, Jingyi Yu, Lan Xu

4D modeling of human-object interactions is critical for numerous applications. However, efficient volumetric capture and rendering of complex interaction scenarios, especially from sparse inputs, remain challenging. In this paper, we propose NeuralFusion, a neural approach for volumetric human-object capture and rendering using sparse consumer RGBD sensors. It marries traditional non-rigid fusion with recent advances in neural implicit modeling and blending, where the captured humans and objects are disentangled layer-wise. For geometry modeling, we propose a neural implicit inference scheme with non-rigid key-volume fusion, as well as a template-aided robust object tracking pipeline. Our scheme enables detailed and complete geometry generation under complex interactions and occlusions. Moreover, we introduce a layer-wise human-object texture rendering scheme, which combines volumetric and image-based rendering in both the spatial and temporal domains to obtain photo-realistic results. Extensive experiments demonstrate the effectiveness and efficiency of our approach in synthesizing photo-realistic free-view results under complex human-object interactions.
