Tianjia Shao

Animatable 3D Gaussians for High-fidelity Synthesis of Human Motions

Nov 27, 2023
Keyang Ye, Tianjia Shao, Kun Zhou

We present a novel animatable 3D Gaussian model for rendering high-fidelity free-view human motions in real time. Compared with existing NeRF-based methods, the model is better at synthesizing high-frequency details and avoids jittering across video frames. The core of our model is a novel augmented 3D Gaussian representation that attaches a learnable code to each Gaussian. The code serves as a pose-dependent appearance embedding for refining the erroneous appearance caused by the geometric transformation of Gaussians; on top of it, an appearance refinement model is learned to produce residual Gaussian properties that match the appearance in the target pose. To force the Gaussians to learn only the foreground human without background interference, we further design a novel alpha loss that explicitly constrains the Gaussians within the human body. We also propose to jointly optimize the human joint parameters to improve appearance accuracy. The animatable 3D Gaussian model can be learned with shallow MLPs, so new human motions can be synthesized in real time (66 fps on average). Experiments show that our model outperforms NeRF-based methods.
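To make the core idea concrete, here is a minimal PyTorch sketch (not the authors' code) of an augmented Gaussian set where each Gaussian carries a learnable code and a shallow MLP predicts pose-dependent residual properties; the class name, dimensions, and property choices are all hypothetical.

```python
import torch
import torch.nn as nn

class AnimatableGaussians(nn.Module):
    """Toy model: each Gaussian carries a learnable appearance code; a
    shallow MLP maps (code, pose) to residual position/color properties."""
    def __init__(self, num_gaussians, code_dim=16, pose_dim=72):
        super().__init__()
        self.means = nn.Parameter(torch.randn(num_gaussians, 3))
        self.colors = nn.Parameter(torch.rand(num_gaussians, 3))
        self.codes = nn.Parameter(torch.zeros(num_gaussians, code_dim))
        self.refine = nn.Sequential(            # shallow MLP, per the abstract
            nn.Linear(code_dim + pose_dim, 64), nn.ReLU(),
            nn.Linear(64, 6),                   # 3 position + 3 color residuals
        )

    def forward(self, pose):                    # pose: (pose_dim,) joint params
        feat = torch.cat([self.codes,
                          pose.expand(self.codes.shape[0], -1)], dim=-1)
        res = self.refine(feat)
        return self.means + res[:, :3], (self.colors + res[:, 3:]).clamp(0, 1)
```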

* Some experimental data are wrong; the exposition in the introduction and abstract is incorrect; and some figures have inappropriate descriptions

PIE-NeRF: Physics-based Interactive Elastodynamics with NeRF

Nov 22, 2023
Yutao Feng, Yintong Shang, Xuan Li, Tianjia Shao, Chenfanfu Jiang, Yin Yang

We show that physics-based simulations can be seamlessly integrated with NeRF to generate high-quality elastodynamics of real-world objects. Unlike existing methods, we discretize nonlinear hyperelasticity in a meshless way, obviating the necessity for intermediate auxiliary shape proxies like a tetrahedral mesh or voxel grid. A quadratic generalized moving least square (Q-GMLS) is employed to capture nonlinear dynamics and large deformation on the implicit model. Such meshless integration enables versatile simulations of complex and codimensional shapes. We adaptively place the least-square kernels according to the NeRF density field to significantly reduce the complexity of the nonlinear simulation. As a result, physically realistic animations can be conveniently synthesized using our method for a wide range of hyperelastic materials at an interactive rate. For more information, please visit our project page at https://fytalon.github.io/pienerf/.
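As a rough illustration of two ingredients named in the abstract (not the paper's implementation), the sketch below shows Gaussian moving-least-squares kernel weights and density-proportional placement of kernel centers; the function names and sampling scheme are assumptions.

```python
import numpy as np

def mls_weights(x, neighbors, h):
    """Gaussian kernel weights over neighboring sample points, the basic
    ingredient of a moving least squares (MLS) approximation."""
    d2 = np.sum((neighbors - x) ** 2, axis=1)
    w = np.exp(-d2 / (h * h))
    return w / w.sum()

def place_kernels_by_density(points, density, budget, rng=None):
    """Sample kernel centers with probability proportional to the NeRF
    density field, so simulation effort concentrates where material is.
    points: (N, 3) candidate positions; density: (N,) NeRF densities."""
    rng = rng or np.random.default_rng()
    p = density / density.sum()
    idx = rng.choice(len(points), size=budget, replace=False, p=p)
    return points[idx]
```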

A Locality-based Neural Solver for Optical Motion Capture

Sep 04, 2023
Xiaoyu Pan, Bowen Zheng, Xinwei Jiang, Guanglong Xu, Xianli Gu, Jingxiang Li, Qilong Kou, He Wang, Tianjia Shao, Kun Zhou, Xiaogang Jin

We present a novel locality-based learning method for cleaning and solving optical motion capture data. Given noisy marker data, we propose a new heterogeneous graph neural network that treats markers and joints as different types of nodes and uses graph convolution operations to extract the local features of markers and joints and transform them into clean motions. To deal with anomalous markers (e.g., occluded or with large tracking errors), the key insight is that a marker's motion shows strong correlations with the motions of its immediate neighboring markers but less so with other markers, a.k.a. locality, which enables us to efficiently fill in missing markers (e.g., due to occlusion). Additionally, we identify marker outliers caused by tracking errors by investigating their acceleration profiles. Finally, we propose a training regime based on representation learning and data augmentation that trains the model on data with masking; the masking schemes mimic the occluded and noisy markers often observed in real data. We show that our method achieves high accuracy on multiple metrics across various datasets. Extensive comparisons show that our method outperforms state-of-the-art methods, reducing the prediction error of occluded marker positions by approximately 20%, which in turn reduces the error of the reconstructed joint rotations and positions by 30%. The code and data for this paper are available at https://github.com/non-void/LocalMoCap.
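One concrete piece of the pipeline is easy to sketch: flagging marker outliers from their acceleration profiles via a second finite difference. This is a hypothetical simplification, not the paper's actual detector.

```python
import numpy as np

def flag_acceleration_outliers(markers, thresh):
    """Flag markers whose frame-to-frame acceleration spikes, a telltale
    sign of tracking errors.  markers: (T, M, 3) position trajectories."""
    accel = markers[2:] - 2.0 * markers[1:-1] + markers[:-2]  # 2nd finite diff
    mag = np.linalg.norm(accel, axis=-1)                      # (T-2, M)
    return mag > thresh                                       # boolean outlier mask
```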

* Siggraph Asia 2023 Conference Paper 

Adaptive Local Basis Functions for Shape Completion

Jul 17, 2023
Hui Ying, Tianjia Shao, He Wang, Yin Yang, Kun Zhou

In this paper, we focus on the task of 3D shape completion from partial point clouds using deep implicit functions. Existing methods use voxelized basis functions or ones from a fixed family of functions (e.g., Gaussians), which leads to high computational cost or limited shape expressivity. In contrast, our method employs adaptive local basis functions, which are learned end-to-end and not restricted to a particular form. Based on these basis functions, a local-to-local shape completion framework is presented. Our algorithm learns a sparse parameterization with a small number of basis functions while preserving local geometric details during completion. Quantitative and qualitative experiments demonstrate that our method outperforms state-of-the-art methods in shape completion, detail preservation, generalization to unseen geometries, and computational cost. Code and data are at https://github.com/yinghdb/Adaptive-Local-Basis-Functions.
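For intuition, here is a hedged PyTorch sketch of an implicit function built from learned local basis functions centered at anchor points; the architecture, weighting scheme, and names are illustrative assumptions, not the paper's model.

```python
import torch
import torch.nn as nn

class LocalBasisImplicit(nn.Module):
    """Toy implicit field: the value at a query point is a distance-weighted
    sum of basis functions that are themselves learned end-to-end (an MLP
    conditioned on a per-anchor feature), rather than a fixed form."""
    def __init__(self, num_anchors, feat_dim=32):
        super().__init__()
        self.anchors = nn.Parameter(torch.randn(num_anchors, 3))
        self.feats = nn.Parameter(torch.randn(num_anchors, feat_dim))
        self.basis = nn.Sequential(
            nn.Linear(3 + feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, q):                                # q: (N, 3) queries
        rel = q[:, None, :] - self.anchors[None]         # (N, A, 3)
        f = self.feats[None].expand(q.shape[0], -1, -1)  # (N, A, F)
        vals = self.basis(torch.cat([rel, f], -1)).squeeze(-1)  # (N, A)
        w = torch.softmax(-rel.norm(dim=-1), dim=-1)     # nearer anchors dominate
        return (w * vals).sum(-1)                        # (N,) field values
```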

* In SIGGRAPH 2023 

Understanding the Vulnerability of Skeleton-based Human Activity Recognition via Black-box Attack

Nov 21, 2022
Yunfeng Diao, He Wang, Tianjia Shao, Yong-Liang Yang, Kun Zhou, David Hogg

Human Activity Recognition (HAR) has been employed in a wide range of applications, e.g., self-driving cars, where safety and lives are at stake. Recently, the robustness of existing skeleton-based HAR methods has been questioned due to their vulnerability to adversarial attacks, raising concerns given the scale of the implications. However, the proposed attacks require full knowledge of the attacked classifier, which is overly restrictive. In this paper, we show that such threats indeed exist even when the attacker only has access to the input/output of the model. To this end, we propose BASAR, the first black-box adversarial attack approach for skeleton-based HAR. BASAR explores the interplay between the classification boundary and the natural motion manifold. To the best of our knowledge, this is the first time the data manifold has been introduced into adversarial attacks on time series. Via BASAR, we find that on-manifold adversarial samples are extremely deceitful and rather common in skeletal motions, in contrast to the common belief that adversarial samples exist only off-manifold. Through exhaustive evaluation, we show that BASAR can deliver successful attacks across classifiers, datasets, and attack modes. Through the attack, BASAR helps identify the potential causes of model vulnerability and provides insights into possible improvements. Finally, to mitigate the newly identified threat, we propose a new adversarial training approach that leverages the sophisticated distributions of on/off-manifold adversarial samples, called mixed manifold-based adversarial training (MMAT). MMAT can successfully help defend against adversarial attacks without compromising classification accuracy.
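The general shape of a label-only (decision-based) attack, of which BASAR is a manifold-aware instance, can be sketched as below; this skeleton omits BASAR's manifold exploration entirely and is purely illustrative.

```python
import numpy as np

def decision_based_attack(classify, x, y_true, steps=1000, sigma=0.02):
    """Skeleton of a decision-based (label-only) attack: start from any
    misclassified point, then random-walk along the decision boundary while
    contracting the distance to the clean input x.  classify(x) -> label."""
    adv = x + np.random.normal(0.0, 0.5, x.shape)
    while classify(adv) == y_true:                # find an adversarial start
        adv = x + np.random.normal(0.0, 0.5, x.shape)
    for _ in range(steps):
        cand = adv + np.random.normal(0.0, sigma, x.shape)  # random step
        cand = x + 0.99 * (cand - x)              # pull toward the clean input
        if classify(cand) != y_true:              # accept only if still adversarial
            adv = cand
    return adv
```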

* arXiv admin note: substantial text overlap with arXiv:2103.05266 

Predicting Loose-Fitting Garment Deformations Using Bone-Driven Motion Networks

May 06, 2022
Xiaoyu Pan, Jiaming Mai, Xinwei Jiang, Dongxue Tang, Jingxiang Li, Tianjia Shao, Kun Zhou, Xiaogang Jin, Dinesh Manocha

We present a learning algorithm that uses bone-driven motion networks to predict the deformation of loose-fitting garment meshes at interactive rates. Given a garment, we generate a simulation database and extract virtual bones from the simulated mesh sequences using skin decomposition. At runtime, we compute low- and high-frequency deformations sequentially. The low-frequency deformations are predicted by transferring body motions to the virtual bones' motions, and the high-frequency deformations are estimated by leveraging the global information of the virtual bones' motions and local information extracted from the low-frequency meshes. In addition, our method can estimate garment deformations caused by variations of the simulation parameters (e.g., the fabric's bending stiffness) by using an RBF kernel to ensemble networks trained for different sets of simulation parameters. Through extensive comparisons, we show that our method outperforms state-of-the-art methods in prediction accuracy of mesh deformations by about 20% in RMSE and 10% in Hausdorff distance and STED. The code and data are available at https://github.com/non-void/VirtualBones.
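The low-frequency stage rests on standard linear blend skinning with the predicted virtual-bone transforms; a minimal NumPy sketch of that operation follows, with shapes and names assumed for illustration.

```python
import numpy as np

def linear_blend_skinning(rest_verts, weights, bone_transforms):
    """Low-frequency garment deformation: skin rest-pose vertices by the
    predicted virtual-bone transforms, blended per vertex.
    rest_verts: (V, 3); weights: (V, B); bone_transforms: (B, 4, 4)."""
    homo = np.concatenate([rest_verts, np.ones((len(rest_verts), 1))], axis=1)
    per_bone = np.einsum('bij,vj->vbi', bone_transforms, homo)  # (V, B, 4)
    return np.einsum('vb,vbi->vi', weights, per_bone)[:, :3]    # blended (V, 3)
```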

* SIGGRAPH 22 Conference Paper 

Pose Guided Image Generation from Misaligned Sources via Residual Flow Based Correction

Feb 02, 2022
Jiawei Lu, He Wang, Tianjia Shao, Yin Yang, Kun Zhou

Generating new images with desired properties (e.g., new views/poses) from source images has been enthusiastically pursued recently, due to its wide range of potential applications. One way to ensure high-quality generation is to use multiple sources with complementary information, such as different views of the same object. However, because source images are often misaligned due to large disparities among camera settings, past work has made strong assumptions about the camera(s) and/or the object of interest, limiting the applicability of such techniques. We therefore propose a new general approach that models multiple types of variation among sources, such as view angles, poses, and facial expressions, in a unified framework, so that it can be employed on datasets of vastly different nature. We verify our approach on a variety of data including human bodies, faces, city scenes, and 3D objects. Both qualitative and quantitative results demonstrate that our method outperforms the state of the art.
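A basic building block of flow-based pose transfer is warping a source image by a dense flow field; the PyTorch sketch below shows that operation in isolation (the residual-flow correction itself is not reproduced here).

```python
import torch
import torch.nn.functional as F

def warp_with_flow(src, flow):
    """Warp a source image by a dense flow field via bilinear sampling.
    src: (N, C, H, W); flow: (N, 2, H, W) pixel offsets (x, y)."""
    n, _, h, w = src.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing='ij')
    base = torch.stack([xs, ys], 0).float().expand(n, -1, -1, -1)  # (N, 2, H, W)
    coords = base + flow
    # normalize pixel coordinates to [-1, 1] for grid_sample
    grid = torch.stack([coords[:, 0] / (w - 1) * 2 - 1,
                        coords[:, 1] / (h - 1) * 2 - 1], dim=-1)   # (N, H, W, 2)
    return F.grid_sample(src, grid, align_corners=True)
```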

Unsupervised Image Generation with Infinite Generative Adversarial Networks

Aug 18, 2021
Hui Ying, He Wang, Tianjia Shao, Yin Yang, Kun Zhou

Image generation has been heavily investigated in computer vision, where one core research challenge is to generate images from arbitrarily complex distributions with little supervision. Generative Adversarial Networks (GANs), as an implicit approach, have achieved great success in this direction and have therefore been widely employed. However, GANs are known to suffer from issues such as mode collapse, a non-structured latent space, and the inability to compute likelihoods. In this paper, we propose a new unsupervised non-parametric method, the mixture of infinite conditional GANs (MIC-GANs), to tackle several of these GAN issues together, aiming for image generation with parsimonious prior knowledge. Through comprehensive evaluations across different datasets, we show that MIC-GANs are effective in structuring the latent space and avoiding mode collapse, and outperform state-of-the-art methods. MIC-GANs are adaptive, versatile, and robust, offering a promising solution to several well-known GAN issues. Code available: github.com/yinghdb/MICGANs.
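The "infinite" in MIC-GANs points to nonparametric mixture machinery; as background intuition only, here is a Chinese restaurant process sampler, the classic prior behind infinite mixtures. It is not the paper's inference procedure.

```python
import numpy as np

def chinese_restaurant_process(n, alpha, rng=None):
    """Sample cluster (mode) assignments from a CRP: each item joins an
    existing cluster with probability proportional to its size, or opens a
    new cluster with probability proportional to alpha."""
    rng = rng or np.random.default_rng()
    assignments, counts = [0], [1]
    for _ in range(1, n):
        probs = np.array(counts + [alpha], dtype=float)
        probs /= probs.sum()
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):
            counts.append(1)          # open a new cluster
        else:
            counts[k] += 1            # join an existing cluster
        assignments.append(int(k))
    return assignments
```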

* 18 pages, 11 figures 

BASAR: Black-box Attack on Skeletal Action Recognition

Mar 19, 2021
Yunfeng Diao, Tianjia Shao, Yong-Liang Yang, Kun Zhou, He Wang

Skeletal motion plays a vital role in human activity recognition, either as an independent data source or as a complement. The robustness of skeleton-based activity recognizers has been questioned recently: they are vulnerable to adversarial attacks when the attacker has full knowledge of the recognizer. However, this white-box requirement is overly restrictive in most scenarios, so such attacks are not truly threatening. In this paper, we show that such threats do exist under black-box settings too. To this end, we propose BASAR, the first black-box adversarial attack method for skeletal action recognition. Through BASAR, we show that not only is adversarial attack truly a threat, it can also be extremely deceitful, because on-manifold adversarial samples are rather common in skeletal motions, in contrast to the common belief that adversarial samples exist only off-manifold. Through exhaustive evaluation and comparison, we show that BASAR can deliver successful attacks across models, data, and attack modes. Through demanding perceptual studies, we show that it achieves effective yet imperceptible attacks. By analyzing the attack on different activity recognizers, BASAR helps identify the potential causes of their vulnerability and provides insights into which classifiers are likely to be more robust against attack.
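One simple example of an on-manifold constraint for skeletal motion is restoring fixed bone lengths after a perturbation; the sketch below is a hypothetical illustration, not BASAR's actual manifold projection.

```python
import numpy as np

def enforce_bone_lengths(joints, parents, lengths):
    """Project a perturbed skeleton back toward the motion manifold by
    restoring fixed bone lengths.  joints: (J, 3); parents[j] is the parent
    index (-1 for the root), assumed to precede j; lengths[j] is the bone
    length from parents[j] to j."""
    out = joints.copy()
    for j in range(len(parents)):
        p = parents[j]
        if p < 0:
            continue                          # root joint: nothing to restore
        d = out[j] - out[p]
        norm = np.linalg.norm(d)
        if norm > 1e-8:
            out[j] = out[p] + d / norm * lengths[j]
    return out
```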

* Accepted in CVPR 2021 