Junzhe Zhang

DeformToon3D: Deformable 3D Toonification from Neural Radiance Fields

Sep 08, 2023
Junzhe Zhang, Yushi Lan, Shuai Yang, Fangzhou Hong, Quan Wang, Chai Kiat Yeo, Ziwei Liu, Chen Change Loy

In this paper, we address the challenging problem of 3D toonification, which involves transferring the style of an artistic domain onto a target 3D face with stylized geometry and texture. Although fine-tuning a pre-trained 3D GAN on the artistic domain can produce reasonable performance, this strategy has limitations in the 3D domain. In particular, fine-tuning can deteriorate the original GAN latent space, which affects subsequent semantic editing, and requires independent optimization and storage for each new style, limiting flexibility and efficient deployment. To overcome these challenges, we propose DeformToon3D, an effective toonification framework tailored for hierarchical 3D GANs. Our approach decomposes 3D toonification into the subproblems of geometry and texture stylization to better preserve the original latent space. Specifically, we devise a novel StyleField that predicts a conditional 3D deformation to align a real-space NeRF with the style space for geometry stylization. Because the StyleField formulation already handles geometry stylization well, texture stylization can be achieved conveniently via adaptive style mixing, which injects information from the artistic domain into the decoder of the pre-trained 3D GAN. Owing to this unique design, our method enables flexible control of the style degree and shape- and texture-specific style swapping. Furthermore, we achieve efficient training without any real-world 2D-3D training pairs, relying only on proxy samples synthesized from off-the-shelf 2D toonification models.

* ICCV 2023. Code: https://github.com/junzhezhang/DeformToon3D Project page: https://www.mmlab-ntu.com/project/deformtoon3d/ 
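
To make the StyleField idea concrete, below is a minimal PyTorch sketch of a conditional deformation field: an MLP that takes a query point and a style code and predicts a residual offset used to warp points between the style space and the real space of a frozen, pre-trained NeRF generator. Layer sizes, module names, and the pretrained_nerf call are illustrative assumptions, not the paper's actual architecture.

# Minimal sketch of a StyleField-like conditional deformation MLP (PyTorch).
# All names and sizes are illustrative assumptions, not the paper's exact design.
import torch
import torch.nn as nn

class StyleField(nn.Module):
    """Predicts a 3D offset for each query point, conditioned on a style code."""
    def __init__(self, style_dim=64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + style_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # residual deformation dx
        )

    def forward(self, pts, style_code):
        # pts: (N, 3) query points; style_code: (style_dim,) style embedding
        cond = style_code.expand(pts.shape[0], -1)
        dx = self.net(torch.cat([pts, cond], dim=-1))
        return pts + dx  # warped points, to be fed to the frozen real-space NeRF

# Toy usage: warp points for one style, then query the pre-trained radiance field.
field = StyleField()
pts = torch.rand(1024, 3) * 2 - 1
z_style = torch.randn(64)
warped = field(pts, z_style)                  # (1024, 3)
# density, color = pretrained_nerf(warped)    # hypothetical frozen generator call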

Variational Relational Point Completion Network for Robust 3D Classification

Apr 18, 2023
Liang Pan, Xinyi Chen, Zhongang Cai, Junzhe Zhang, Haiyu Zhao, Shuai Yi, Ziwei Liu

Real-scanned point clouds are often incomplete due to viewpoint, occlusion, and noise, which hampers 3D geometric modeling and perception. Existing point cloud completion methods tend to generate global shape skeletons and hence lack fine local details. Furthermore, they mostly learn a deterministic partial-to-complete mapping and overlook the structural relations in man-made objects. To tackle these challenges, this paper proposes a variational framework, the Variational Relational Point Completion Network (VRCNet), with two appealing properties: 1) Probabilistic modeling. We propose a dual-path architecture to enable principled probabilistic modeling across partial and complete clouds. One path consumes complete point clouds for reconstruction by learning a point VAE. The other path generates complete shapes for partial point clouds, whose embedded distribution is guided by the distribution obtained from the reconstruction path during training. 2) Relational enhancement. We carefully design a point self-attention kernel and a point selective kernel module to exploit relational point features, which refine local shape details conditioned on the coarse completion. In addition, we contribute multi-view partial point cloud datasets (MVP and MVP-40), containing over 200,000 high-quality scans that render partial 3D shapes from 26 uniformly distributed camera poses for each 3D CAD model. Extensive experiments demonstrate that VRCNet outperforms state-of-the-art methods on all standard point cloud completion benchmarks. Notably, VRCNet shows strong generalizability and robustness on real-world point cloud scans. Moreover, with the help of VRCNet we achieve robust 3D classification of partial point clouds, substantially increasing classification accuracy.

* 12 pages, 10 figures, accepted by PAMI. project webpage: https://mvp-dataset.github.io/. arXiv admin note: substantial text overlap with arXiv:2104.10154 
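
Below is a minimal PyTorch sketch of the dual-path latent guidance described above: during training, the latent distribution inferred from the partial cloud is pulled toward the one inferred from the complete cloud via a KL term. The encoder outputs are replaced by toy tensors, and all names and dimensions are assumptions rather than VRCNet's actual implementation.

# Minimal sketch of dual-path latent guidance (PyTorch); illustrative only.
import torch
import torch.distributions as D

def guidance_loss(mu_partial, logvar_partial, mu_complete, logvar_complete):
    """KL( q(z|partial) || q(z|complete) ): the completion path's latent
    distribution is pulled toward the one inferred from the complete cloud,
    which is treated as fixed (detached) here."""
    q_partial = D.Normal(mu_partial, (0.5 * logvar_partial).exp())
    q_complete = D.Normal(mu_complete.detach(), (0.5 * logvar_complete).exp().detach())
    return D.kl_divergence(q_partial, q_complete).sum(dim=-1).mean()

# Toy tensors standing in for the two encoders' outputs (batch of 8, 128-dim latent)
mu_partial, logvar_partial = torch.randn(8, 128), torch.zeros(8, 128)
mu_complete, logvar_complete = torch.randn(8, 128), torch.zeros(8, 128)
print(guidance_loss(mu_partial, logvar_partial, mu_complete, logvar_complete))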

Generative Diffusion Prior for Unified Image Restoration and Enhancement

Apr 03, 2023
Ben Fei, Zhaoyang Lyu, Liang Pan, Junzhe Zhang, Weidong Yang, Tianyue Luo, Bo Zhang, Bo Dai

Existing image restoration methods mostly leverage the posterior distribution of natural images. However, they often assume known degradation and require supervised training, which restricts their adaptation to complex real-world applications. In this work, we propose the Generative Diffusion Prior (GDP) to effectively model the posterior distribution in an unsupervised sampling manner. GDP utilizes a pre-trained denoising diffusion probabilistic model (DDPM) to solve linear inverse, non-linear, and blind problems. Specifically, GDP systematically explores a protocol of conditional guidance that proves more practical than the commonly used guidance scheme. Furthermore, GDP can optimize the parameters of the degradation model during the denoising process, achieving blind image restoration. We also devise hierarchical guidance and patch-based methods, enabling GDP to generate images of arbitrary resolution. Experimentally, we demonstrate GDP's versatility on several image datasets for linear problems such as super-resolution, deblurring, inpainting, and colorization, as well as non-linear and blind problems such as low-light enhancement and HDR image recovery. GDP outperforms the current leading unsupervised methods on diverse benchmarks in both reconstruction and perceptual quality. Moreover, GDP generalizes well to natural or synthesized images of arbitrary size from various tasks outside the distribution of the ImageNet training set.

* 46 pages, 38 figures, accepted by CVPR 2023 
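
As a minimal sketch of the general idea, the snippet below guides one denoising step with a degradation-consistency gradient, assuming a known linear degradation (here 2x average pooling) and a dummy noise predictor. It illustrates diffusion guidance in general, not GDP's exact guidance protocol or its hierarchical and patch-based variants.

# Minimal sketch of guiding one denoising step with a degradation-consistency
# gradient; illustrative assumptions throughout, not GDP's exact protocol.
import torch
import torch.nn.functional as F

def guided_step(x_t, eps_pred, alpha_bar_t, y, degrade, scale=1.0):
    x_t = x_t.detach().requires_grad_(True)
    eps = eps_pred(x_t)  # eps_pred stands in for the pre-trained DDPM's noise network
    x0_hat = (x_t - (1 - alpha_bar_t).sqrt() * eps) / alpha_bar_t.sqrt()
    loss = F.mse_loss(degrade(x0_hat), y)            # consistency with observation y
    grad = torch.autograd.grad(loss, x_t)[0]
    return (x_t - scale * grad).detach()             # nudge x_t toward consistency

# Toy usage with a dummy noise predictor and 2x average pooling as the degradation
eps_pred = lambda x: torch.zeros_like(x)
degrade = lambda x: F.avg_pool2d(x, 2)
x_t = torch.randn(1, 3, 64, 64)
y = torch.randn(1, 3, 32, 32)
x_t = guided_step(x_t, eps_pred, torch.tensor(0.5), y, degrade)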

ExtrudeNet: Unsupervised Inverse Sketch-and-Extrude for Shape Parsing

Sep 30, 2022
Daxuan Ren, Jianmin Zheng, Jianfei Cai, Jiatong Li, Junzhe Zhang

Sketch-and-extrude is a common and intuitive modeling process in computer-aided design. This paper studies the problem of learning a shape, given in the form of a point cloud, by inverse sketch-and-extrude. We present ExtrudeNet, an unsupervised end-to-end network for discovering sketch and extrude operations from point clouds. Behind ExtrudeNet are two new technical components: 1) an effective representation for sketch and extrude, which can model extrusion with freeform sketches as well as conventional cylinder and box primitives; and 2) a numerical method for computing the signed distance field used in network learning. This is the first attempt to use machine learning to reverse-engineer the sketch-and-extrude modeling process of a shape in an unsupervised fashion. ExtrudeNet not only outputs a compact, editable, and interpretable representation of the shape that can be seamlessly integrated into modern CAD software, but also aligns with the standard CAD modeling process, facilitating various editing applications; this distinguishes our work from existing shape parsing research. Code is released at https://github.com/kimren227/ExtrudeNet.

* Accepted to ECCV 2022 
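
For intuition, here is a small NumPy sketch of the signed distance field of an extruded 2D sketch, using the standard extrusion construction and a simple circle as the sketch. It is illustrative only and is not the paper's numerical SDF method for freeform sketches.

# Minimal sketch: SDF of a 2D sketch extruded along z (standard construction).
import numpy as np

def sdf_circle_2d(xy, r=0.5):
    return np.linalg.norm(xy, axis=-1) - r

def sdf_extrusion(p, half_height=0.3, sketch_sdf=sdf_circle_2d):
    d2 = sketch_sdf(p[..., :2])                   # 2D distance to the sketch
    dz = np.abs(p[..., 2]) - half_height          # distance along the extrusion axis
    w = np.stack([d2, dz], axis=-1)
    inside = np.minimum(np.maximum(d2, dz), 0.0)  # negative inside the solid
    outside = np.linalg.norm(np.maximum(w, 0.0), axis=-1)
    return inside + outside

pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
print(sdf_extrusion(pts))   # approximately [-0.3, 0.5, 0.7]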

CARNet: Compression Artifact Reduction for Point Cloud Attribute

Sep 17, 2022
Dandan Ding, Junzhe Zhang, Jianqiang Wang, Zhan Ma

A learning-based adaptive loop filter is developed for the Geometry-based Point Cloud Compression (G-PCC) standard to reduce attribute compression artifacts. The proposed method first generates multiple Most-Probable Sample Offsets (MPSOs) as potential approximations of the compression distortion and then linearly weights them for artifact mitigation, driving the filtered reconstruction as close to the uncompressed point cloud attribute (PCA) as possible. To this end, we devise a Compression Artifact Reduction Network (CARNet) consisting of two consecutive processing phases: MPSO derivation and MPSO combination. The MPSO derivation uses a two-stream network to model local neighborhood variations from a direct spatial embedding and a frequency-dependent embedding, where sparse convolutions are utilized to best aggregate information from sparsely and irregularly distributed points. The MPSO combination is guided by the least-squares error metric to derive the weighting coefficients on the fly, further capturing the content dynamics of input PCAs. CARNet is implemented as an in-loop filtering tool of G-PCC, where the linear weighting coefficients are encapsulated into the bitstream with negligible bit-rate overhead. Experimental results demonstrate significant improvement over the latest G-PCC, both subjectively and objectively.

* 13 pages, 8 figures 
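
A minimal NumPy sketch of the least-squares weighting step described above: given a few candidate offsets (MPSOs) and access to the uncompressed attributes at the encoder, solve for the linear combination weights that minimize the residual. Shapes and data are toy stand-ins, not the network's actual outputs.

# Minimal sketch of least-squares MPSO combination; illustrative shapes and data.
import numpy as np

def lsq_mpso_weights(offsets, recon, target):
    """offsets: (K, N) candidate per-point offsets; recon, target: (N,)."""
    A = offsets.T                       # (N, K)
    b = target - recon                  # residual the offsets should explain
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w                            # K coefficients, signalled in the bitstream

rng = np.random.default_rng(0)
target = rng.normal(size=1000)                            # uncompressed attributes
recon = target + rng.normal(scale=0.1, size=1000)         # "compressed" attributes
offsets = rng.normal(scale=0.1, size=(3, 1000))           # MPSOs from the network
w = lsq_mpso_weights(offsets, recon, target)
filtered = recon + w @ offsets
print(np.mean((recon - target) ** 2), np.mean((filtered - target) ** 2))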

Sequential Causal Imitation Learning with Unobserved Confounders

Aug 12, 2022
Daniel Kumor, Junzhe Zhang, Elias Bareinboim

"Monkey see monkey do" is an age-old adage, referring to na\"ive imitation without a deep understanding of a system's underlying mechanics. Indeed, if a demonstrator has access to information unavailable to the imitator (monkey), such as a different set of sensors, then no matter how perfectly the imitator models its perceived environment (See), attempting to reproduce the demonstrator's behavior (Do) can lead to poor outcomes. Imitation learning in the presence of a mismatch between demonstrator and imitator has been studied in the literature under the rubric of causal imitation learning (Zhang et al., 2020), but existing solutions are limited to single-stage decision-making. This paper investigates the problem of causal imitation learning in sequential settings, where the imitator must make multiple decisions per episode. We develop a graphical criterion that is necessary and sufficient for determining the feasibility of causal imitation, providing conditions when an imitator can match a demonstrator's performance despite differing capabilities. Finally, we provide an efficient algorithm for determining imitability and corroborate our theory with simulations.


Causal Imitation Learning with Unobserved Confounders

Aug 12, 2022
Junzhe Zhang, Daniel Kumor, Elias Bareinboim

One of the common ways children learn is by mimicking adults. Imitation learning focuses on learning policies with suitable performance from demonstrations generated by an expert, with an unspecified performance measure and an unobserved reward signal. Popular methods for imitation learning start either by directly mimicking the behavior policy of an expert (behavior cloning) or by learning a reward function that prioritizes observed expert trajectories (inverse reinforcement learning). However, these methods rely on the assumption that the covariates used by the expert to determine their actions are fully observed. In this paper, we relax this assumption and study imitation learning when the sensory inputs of the learner and the expert differ. First, we provide a non-parametric, graphical criterion that is complete (both necessary and sufficient) for determining the feasibility of imitation from a combination of demonstration data and qualitative assumptions about the underlying environment, represented in the form of a causal model. We then show that when this criterion does not hold, imitation can still be feasible by exploiting quantitative knowledge of the expert trajectories. Finally, we develop an efficient procedure for learning the imitating policy from the expert's trajectories.
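
As a toy illustration of the failure mode motivating this work, the simulation below shows that when the expert acts on a covariate the imitator cannot observe, cloning the expert's marginal action distribution yields markedly worse reward. The setup and numbers are purely illustrative, not an experiment from the paper.

# Toy simulation: behavior cloning under a covariate observed only by the expert.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
u = rng.integers(0, 2, size=n)           # covariate seen only by the expert
expert_action = u                         # expert: act on U
reward_expert = (expert_action == u).mean()

# Behavior cloning without access to U: match the expert's marginal P(A=1)=0.5
imitator_action = rng.integers(0, 2, size=n)
reward_imitator = (imitator_action == u).mean()

print(f"expert reward:   {reward_expert:.2f}")    # about 1.00
print(f"imitator reward: {reward_imitator:.2f}")  # about 0.50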


Monocular 3D Object Reconstruction with GAN Inversion

Jul 20, 2022
Junzhe Zhang, Daxuan Ren, Zhongang Cai, Chai Kiat Yeo, Bo Dai, Chen Change Loy

Recovering a textured 3D mesh from a monocular image is highly challenging, particularly for in-the-wild objects that lack 3D ground truths. In this work, we present MeshInversion, a novel framework that improves reconstruction by exploiting the generative prior of a GAN pre-trained for 3D textured mesh synthesis. Reconstruction is achieved by searching the latent space of the 3D GAN for a code that best reproduces the target mesh in accordance with the single-view observation. Since the pre-trained GAN encapsulates rich 3D semantics in terms of mesh geometry and texture, searching within the GAN manifold naturally regularizes the realness and fidelity of the reconstruction. Importantly, such regularization is applied directly in the 3D space, providing crucial guidance for mesh parts that are unobserved in the 2D image. Experiments on standard benchmarks show that our framework obtains faithful 3D reconstructions with consistent geometry and texture across both observed and unobserved parts. Moreover, it generalizes well to meshes that are less commonly seen, such as the extended articulation of deformable objects. Code is released at https://github.com/junzhezhang/mesh-inversion

* ECCV 2022. Project page: https://www.mmlab-ntu.com/project/meshinversion/ 
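
Below is a minimal PyTorch sketch of reconstruction by latent-space search: a latent code is optimized so that the (differentiably rendered) generator output matches the observed view. The generator and renderer here are toy stand-ins, and the plain MSE objective is an assumption; they are not the pre-trained textured-mesh GAN or the losses used in the paper.

# Minimal sketch of GAN inversion by latent optimization; toy stand-in modules.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 3 * 32 * 32))
render = lambda g_out: g_out.view(3, 32, 32)     # stand-in for differentiable rendering

target_view = torch.rand(3, 32, 32)              # the single observed image
z = torch.zeros(128, requires_grad=True)
opt = torch.optim.Adam([z], lr=1e-2)

for step in range(200):
    opt.zero_grad()
    image = render(generator(z))
    loss = torch.nn.functional.mse_loss(image, target_view)
    loss.backward()
    opt.step()
# z now encodes an output consistent with the observed view; in the paper,
# unobserved geometry and texture are regularized by the pre-trained GAN prior.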

Density-aware Chamfer Distance as a Comprehensive Metric for Point Cloud Completion

Nov 24, 2021
Tong Wu, Liang Pan, Junzhe Zhang, Tai Wang, Ziwei Liu, Dahua Lin

Chamfer Distance (CD) and Earth Mover's Distance (EMD) are two broadly adopted metrics for measuring the similarity between two point sets. However, CD is usually insensitive to mismatched local density, while EMD is usually dominated by the global distribution and overlooks the fidelity of detailed structures. Moreover, their unbounded value ranges make both metrics heavily influenced by outliers. These defects prevent them from providing a consistent evaluation. To tackle these problems, we propose a new similarity measure named Density-aware Chamfer Distance (DCD). It is derived from CD and enjoys several desirable properties: 1) it can detect disparities in density distributions and is therefore a more sensitive measure of similarity than CD; 2) it is stricter with detailed structures and significantly more computationally efficient than EMD; 3) its bounded value range encourages a more stable and reasonable evaluation over the whole test set. We adopt DCD to evaluate the point cloud completion task, where experimental results show that DCD attends to both the overall structure and local geometric details and provides a more reliable evaluation even when CD and EMD contradict each other. DCD can also be used as a training loss: a model trained with DCD outperforms the same model trained with the CD loss on all three metrics. In addition, we propose a novel point discriminator module that estimates the priority for another guided down-sampling step; it achieves noticeable improvements under DCD together with competitive results on both CD and EMD. We hope our work paves the way for a more comprehensive and practical point cloud similarity evaluation. Our code will be available at https://github.com/wutong16/Density_aware_Chamfer_Distance.

* Accepted to NeurIPS 2021 
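
The NumPy sketch below is a simplified, illustrative variant of the idea: each per-point term is bounded via 1 - exp(-alpha * d) and is down-weighted when many query points share the same nearest neighbor, which makes the measure sensitive to local density. It is not the exact DCD definition; the released code gives the precise formula.

# Simplified, density-aware, bounded Chamfer-style measure (illustrative only).
import numpy as np

def dcd_like(a, b, alpha=50.0):
    def one_side(src, dst):
        d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=-1)  # (|src|,|dst|)
        nn_idx = d.argmin(axis=1)
        nn_dist = d[np.arange(len(src)), nn_idx]
        counts = np.bincount(nn_idx, minlength=len(dst))[nn_idx]  # matches per target
        return np.mean(1.0 - np.exp(-alpha * nn_dist) / counts)
    return 0.5 * (one_side(a, b) + one_side(b, a))

rng = np.random.default_rng(0)
a = rng.random((256, 3))
b = a + rng.normal(scale=0.01, size=(256, 3))   # slightly perturbed copy
c = rng.random((256, 3))                        # unrelated cloud
print(dcd_like(a, b), dcd_like(a, c))           # the first should be clearly smaller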

Partial Counterfactual Identification from Observational and Experimental Data

Oct 12, 2021
Junzhe Zhang, Jin Tian, Elias Bareinboim

This paper investigates the problem of bounding counterfactual queries from an arbitrary collection of observational and experimental distributions and qualitative knowledge about the underlying data-generating model represented in the form of a causal diagram. We show that all counterfactual distributions in an arbitrary structural causal model (SCM) could be generated by a canonical family of SCMs with the same causal diagram where unobserved (exogenous) variables are discrete with a finite domain. Utilizing the canonical SCMs, we translate the problem of bounding counterfactuals into that of polynomial programming whose solution provides optimal bounds for the counterfactual query. Solving such polynomial programs is in general computationally expensive. We therefore develop effective Monte Carlo algorithms to approximate the optimal bounds from an arbitrary combination of observational and experimental data. Our algorithms are validated extensively on synthetic and real-world datasets.
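
To illustrate the canonical-SCM reduction on the smallest possible example, the sketch below bounds P(Y_{X=1}=1) for binary X and Y with unobserved confounding, using only observational data. In this one-treatment case the program is linear and is solved with scipy.optimize.linprog; the general counterfactual setting treated in the paper requires polynomial programming. The observed probabilities are toy values.

# Minimal sketch: bounding an interventional/counterfactual quantity via the
# canonical-SCM construction on a tiny binary example (illustrative only).
import numpy as np
from scipy.optimize import linprog

# Canonical strata: natural value of X (0/1) x response type of Y,
# where the 4 response types map x -> y as: always-0, y=x, y=1-x, always-1.
resp = [lambda x: 0, lambda x: x, lambda x: 1 - x, lambda x: 1]
strata = [(x, t) for x in (0, 1) for t in range(4)]           # 8 unknowns q_{x,t}

p_obs = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.4}  # toy P(X=x, Y=y)

# Equality constraints: each observed cell is the total mass of consistent strata.
A_eq = np.array([[1.0 if (x == xo and resp[t](x) == yo) else 0.0
                  for (x, t) in strata] for (xo, yo) in p_obs])
b_eq = np.array(list(p_obs.values()))

# Objective: P(Y_{X=1}=1) = mass of strata whose response type yields 1 at x=1.
c = np.array([1.0 if resp[t](1) == 1 else 0.0 for (x, t) in strata])

lo = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 8).fun
hi = -linprog(-c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 8).fun
print(f"bounds on P(Y_(X=1)=1): [{lo:.2f}, {hi:.2f}]")   # expected: [0.40, 0.90]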
