Changqing Zou

A General Implicit Framework for Fast NeRF Composition and Rendering

Aug 14, 2023
Xinyu Gao, Ziyi Yang, Yunlu Zhao, Yuxiang Sun, Xiaogang Jin, Changqing Zou

A variety of Neural Radiance Fields (NeRF) methods have recently achieved remarkable success in high rendering speed. However, current acceleration methods are specialized and incompatible with various implicit methods, preventing real-time composition across different types of NeRF works. Because NeRF relies on sampling along rays, it is possible to provide general guidance for acceleration. To that end, we propose a general implicit pipeline for composing NeRF objects quickly. Our method enables the casting of dynamic shadows within or between objects using analytical light sources, while allowing multiple NeRF objects to be seamlessly placed and rendered together with arbitrary rigid transformations. Mainly, our work introduces a new surface representation known as Neural Depth Fields (NeDF) that quickly determines the spatial relationship between objects by allowing direct intersection computation between rays and implicit surfaces. It leverages an intersection neural network to query NeRF for acceleration instead of depending on an explicit spatial structure. Our proposed method is the first to enable both the progressive and interactive composition of NeRF objects. Additionally, it serves as a previewing plugin for a range of existing NeRF works.
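
As a rough illustration of the intersection-network idea described above, the sketch below shows a tiny MLP that maps a ray (origin and direction) to a positive depth along that ray, and how per-object depths could be used to sort several NeRF objects front-to-back before compositing. The class and function names, the architecture, and the per-object rigid transforms are hypothetical stand-ins, not the paper's actual NeDF implementation.

```python
# Illustrative sketch only: a small network that predicts, for each ray,
# the depth at which it hits an object's implicit surface. Sorting objects
# by these depths gives the front-to-back order needed for composition.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RayDepthNet(nn.Module):
    """Hypothetical stand-in for a Neural Depth Field: ray -> surface depth."""
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus(),  # keep the predicted depth positive
        )

    def forward(self, origins: torch.Tensor, dirs: torch.Tensor) -> torch.Tensor:
        return self.mlp(torch.cat([origins, dirs], dim=-1)).squeeze(-1)

def front_to_back_order(depth_nets, transforms, origins, dirs):
    """Sort objects per ray by predicted hit depth.

    depth_nets: one RayDepthNet per object; transforms: one 4x4 world-to-object
    rigid transform per object; origins, dirs: (N, 3) rays in world space.
    """
    depths = []
    for net, T in zip(depth_nets, transforms):
        o = origins @ T[:3, :3].T + T[:3, 3]          # ray origin in object frame
        d = F.normalize(dirs @ T[:3, :3].T, dim=-1)   # ray direction in object frame
        depths.append(net(o, d))
    return torch.stack(depths, dim=-1).argsort(dim=-1)  # (N, num_objects)
```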

* 7 pages for main content 

CAP-VSTNet: Content Affinity Preserved Versatile Style Transfer

Mar 31, 2023
Linfeng Wen, Chengying Gao, Changqing Zou

Content affinity loss, including feature and pixel affinity, is a main cause of artifacts in photorealistic and video style transfer. This paper proposes a new framework named CAP-VSTNet, which consists of a new reversible residual network and an unbiased linear transform module, for versatile style transfer. The reversible residual network preserves content affinity without introducing the redundant information of traditional reversible networks, and hence facilitates better stylization. Empowered by a Matting Laplacian training loss that addresses the pixel affinity loss caused by the linear transform, the proposed framework is applicable and effective for versatile style transfer. Extensive experiments show that CAP-VSTNet produces better qualitative and quantitative results than state-of-the-art methods.
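
To make the "reversible residual network" idea concrete, here is a minimal RevNet-style additive-coupling block: the input can be reconstructed exactly from the output, which is the property that lets such a backbone carry content information through stylization without loss. This is a generic coupling block for illustration, not CAP-VSTNet's actual layer design.

```python
# Generic reversible (additive-coupling) residual block: forward splits the
# channels, mixes them with two small conv branches, and inverse() undoes it.
import torch
import torch.nn as nn

class ReversibleBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        half = channels // 2
        self.f = nn.Sequential(nn.Conv2d(half, half, 3, padding=1), nn.ReLU(),
                               nn.Conv2d(half, half, 3, padding=1))
        self.g = nn.Sequential(nn.Conv2d(half, half, 3, padding=1), nn.ReLU(),
                               nn.Conv2d(half, half, 3, padding=1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x1, x2 = x.chunk(2, dim=1)
        y1 = x1 + self.f(x2)
        y2 = x2 + self.g(y1)
        return torch.cat([y1, y2], dim=1)

    def inverse(self, y: torch.Tensor) -> torch.Tensor:
        y1, y2 = y.chunk(2, dim=1)
        x2 = y2 - self.g(y1)
        x1 = y1 - self.f(x2)
        return torch.cat([x1, x2], dim=1)

# Round-trip check: inverse(forward(x)) recovers x up to floating-point error.
block = ReversibleBlock(64).eval()
x = torch.randn(1, 64, 32, 32)
with torch.no_grad():
    assert torch.allclose(block.inverse(block(x)), x, atol=1e-5)
```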

* CVPR 2023 

MXM-CLR: A Unified Framework for Contrastive Learning of Multifold Cross-Modal Representations

Mar 21, 2023
Ye Wang, Bowei Jiang, Changqing Zou, Rui Ma

Multifold observations are common for different data modalities, e.g., a 3D shape can be represented by multi-view images and an image can be described with different captions. Existing cross-modal contrastive representation learning (XM-CLR) methods such as CLIP are not fully suitable for multifold data, as they only consider one positive pair and treat other pairs as negative when computing the contrastive loss. In this paper, we propose MXM-CLR, a unified framework for contrastive learning of multifold cross-modal representations. MXM-CLR explicitly models and learns the relationships between multifold observations of instances from different modalities for more comprehensive representation learning. The key to MXM-CLR is a novel multifold-aware hybrid loss that considers multiple positive observations when computing the hard and soft relationships for cross-modal data pairs. We conduct quantitative and qualitative comparisons with SOTA baselines for cross-modal retrieval tasks on the Text2Shape and Flickr30K datasets. We also perform extensive evaluations of the adaptability and generalizability of MXM-CLR, as well as ablation studies on the loss design and the effects of batch size. The results show the superiority of MXM-CLR in learning better representations for multifold data. The code is available at https://github.com/JLU-ICL/MXM-CLR.
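
For intuition about what "multiple positive observations" means in a contrastive loss, the snippet below sketches a generic multi-positive InfoNCE-style objective in which every observation sharing an instance id counts as a positive for the anchor. It is a simplified illustration of the idea; MXM-CLR's actual hybrid hard/soft loss is more involved, and the function and variable names here are hypothetical.

```python
# Generic multi-positive contrastive loss: all observations of the same
# instance are treated as positives, instead of a single diagonal pair.
import torch
import torch.nn.functional as F

def multi_positive_nce(img_emb, txt_emb, instance_ids, tau: float = 0.07):
    """img_emb, txt_emb: (N, D) embeddings; instance_ids: (N,) ints where rows
    with the same id are observations of the same underlying instance."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.T / tau                                  # (N, N) similarities
    pos_mask = instance_ids[:, None] == instance_ids[None, :]   # multiple positives per row
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    # Average the log-likelihood over all positives of each anchor.
    loss = -(log_prob * pos_mask).sum(1) / pos_mask.sum(1)
    return loss.mean()

# Example: 4 image/caption pairs where rows 0 and 1 describe the same shape.
ids = torch.tensor([0, 0, 1, 2])
loss = multi_positive_nce(torch.randn(4, 16), torch.randn(4, 16), ids)
```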

* 16 pages, 14 figures 

SADRNet: Self-Aligned Dual Face Regression Networks for Robust 3D Dense Face Alignment and Reconstruction

Jun 06, 2021
Zeyu Ruan, Changqing Zou, Longhai Wu, Gangshan Wu, Limin Wang

Three-dimensional dense face alignment and reconstruction in the wild is a challenging problem because partial facial information is commonly missing in occluded and large-pose face images. Large head pose variations also increase the solution space and make the modeling more difficult. Our key idea is to model occlusion and pose in order to decompose this challenging task into several relatively more manageable subtasks. To this end, we propose an end-to-end framework, termed Self-aligned Dual face Regression Network (SADRNet), which predicts a pose-dependent face and a pose-independent face. They are combined by an occlusion-aware self-alignment to generate the final 3D face. Extensive experiments on two popular benchmarks, AFLW2000-3D and Florence, demonstrate that the proposed method achieves significantly superior performance over existing state-of-the-art methods.
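
As a rough illustration of combining the two regressed faces with an occlusion-aware weight, the sketch below blends a pose-dependent position map with a pose-independent one using a per-pixel visibility mask. This is a deliberately simplified stand-in: SADRNet's actual self-alignment estimates a rigid transform from the visible regions rather than performing a plain per-pixel blend, and all names and tensor shapes here are hypothetical.

```python
# Simplified occlusion-aware fusion of two per-pixel 3D position maps.
import torch

def occlusion_aware_blend(pose_dep, pose_indep, visibility):
    """pose_dep, pose_indep: (B, 3, H, W) position maps.
    visibility: (B, 1, H, W) in [0, 1]; 1 = reliably observed, 0 = occluded.
    Trust the pose-dependent regression where the face is visible and fall
    back to the pose-independent (shape-prior) prediction elsewhere."""
    return visibility * pose_dep + (1.0 - visibility) * pose_indep

final_face = occlusion_aware_blend(
    torch.randn(2, 3, 256, 256), torch.randn(2, 3, 256, 256),
    torch.rand(2, 1, 256, 256))
```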

* To appear in IEEE Transactions on Image Processing. Code and model are available at https://github.com/MCG-NJU/SADRNet 

View-Guided Point Cloud Completion

Apr 13, 2021
Xuancheng Zhang, Yutong Feng, Siqi Li, Changqing Zou, Hai Wan, Xibin Zhao, Yandong Guo, Yue Gao

This paper presents a view-guided solution to the task of point cloud completion. Unlike most existing methods, which directly infer the missing points using shape priors, we address this task by introducing ViPC (view-guided point cloud completion), which recovers the missing crucial global structure information from an extra single-view image. By leveraging a framework that sequentially performs effective cross-modality and cross-level fusions, our method achieves significantly superior results over typical existing solutions on a new large-scale dataset we collect for the view-guided point cloud completion task.
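
The snippet below sketches the simplest possible form of the cross-modality fusion hinted at above: a global feature from the single-view image is broadcast and concatenated onto per-point features of the partial cloud before further decoding. The module, its layer sizes, and its names are hypothetical and far simpler than ViPC's sequential cross-modality and cross-level fusions.

```python
# Toy view-guided fusion: image context is injected into per-point features.
import torch
import torch.nn as nn

class SimpleViewGuidedFusion(nn.Module):
    def __init__(self, point_dim: int = 128, img_dim: int = 256):
        super().__init__()
        self.img_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, img_dim))
        self.point_encoder = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, point_dim))
        self.fuse = nn.Linear(point_dim + img_dim, point_dim)

    def forward(self, points: torch.Tensor, image: torch.Tensor) -> torch.Tensor:
        # points: (B, N, 3) partial cloud; image: (B, 3, H, W) single view.
        p = self.point_encoder(points)                    # (B, N, point_dim)
        g = self.img_encoder(image)                       # (B, img_dim) global view feature
        g = g.unsqueeze(1).expand(-1, p.shape[1], -1)     # broadcast to every point
        return self.fuse(torch.cat([p, g], dim=-1))       # fused per-point features

feats = SimpleViewGuidedFusion()(torch.randn(2, 1024, 3),
                                 torch.randn(2, 3, 128, 128))
```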

* 10 pages, 8 figures, CVPR2021 

Attention-based Multi-modal Fusion Network for Semantic Scene Completion

Apr 16, 2020
Siqi Li, Changqing Zou, Yipeng Li, Xibin Zhao, Yue Gao

This paper presents an end-to-end 3D convolutional network named the attention-based multi-modal fusion network (AMFNet) for the semantic scene completion (SSC) task of inferring the occupancy and semantic labels of a volumetric 3D scene from single-view RGB-D images. Compared with previous methods, which use only the semantic features extracted from RGB-D images, the proposed AMFNet learns to perform effective 3D scene completion and semantic segmentation simultaneously by leveraging the experience of inferring 2D semantic segmentation from RGB-D images as well as reliable depth cues in the spatial dimension. This is achieved by employing a multi-modal fusion architecture boosted from 2D semantic segmentation and a 3D semantic completion network empowered by residual attention blocks. We validate our method on both the synthetic SUNCG-RGBD dataset and the real NYUv2 dataset; the results show that our method achieves gains of 2.5% and 2.6% over the state-of-the-art method on SUNCG-RGBD and NYUv2, respectively.
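
As a small illustration of the "residual attention block" mentioned above, here is a generic 3D version in which a residual convolutional path is modulated by a learned per-voxel gate. It is a sketch of the general pattern under assumed layer choices, not AMFNet's exact block.

```python
# Generic 3D residual attention block: gated residual features over a voxel grid.
import torch
import torch.nn as nn

class ResidualAttentionBlock3D(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, channels, 3, padding=1))
        # Attention gate: per-voxel weights in (0, 1).
        self.gate = nn.Sequential(
            nn.Conv3d(channels, channels, 1), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.body(x)
        return x + self.gate(h) * h   # residual connection with gated features

out = ResidualAttentionBlock3D(16)(torch.randn(1, 16, 32, 32, 32))
```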

* Accepted by AAAI 2020 

SketchyCOCO: Image Generation from Freehand Scene Sketches

Apr 07, 2020
Chengying Gao, Qi Liu, Qi Xu, Limin Wang, Jianzhuang Liu, Changqing Zou

We introduce the first method for automatic image generation from scene-level freehand sketches. Our model allows for controllable image generation by specifying the synthesis goal via freehand sketches. The key contribution is an attribute-vector-bridged Generative Adversarial Network called EdgeGAN, which supports high-visual-quality object-level image content generation without using freehand sketches as training data. We have built a large-scale composite dataset called SketchyCOCO to support and evaluate the solution. We validate our approach on both object-level and scene-level image generation tasks on SketchyCOCO. Through quantitative and qualitative results, human evaluation, and ablation studies, we demonstrate the method's capacity to generate realistic, complex scene-level images from various freehand sketches.
