Qian He

UGC: Unified GAN Compression for Efficient Image-to-Image Translation

Sep 17, 2023
Yuxi Ren, Jie Wu, Peng Zhang, Manlin Zhang, Xuefeng Xiao, Qian He, Rui Wang, Min Zheng, Xin Pan

Recent years have witnessed remarkable progress of Generative Adversarial Networks (GANs) in image-to-image translation. However, the success of these GAN models hinges on heavy computational costs and labor-intensive training data. Current efficient GAN learning techniques typically address one of two orthogonal aspects: i) model slimming to reduce computation; ii) data/label-efficient learning with fewer training samples or labels. To combine the best of both worlds, we propose a new learning paradigm, Unified GAN Compression (UGC), with a unified optimization objective that seamlessly promotes the synergy of model-efficient and label-efficient learning. UGC sequentially sets up a semi-supervised-driven network architecture search stage and an adaptive online semi-supervised distillation stage, which together form a heterogeneous mutual learning scheme that yields an architecture-flexible, label-efficient, and high-performing model.
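
To make the unified objective concrete, here is a minimal sketch (not the authors' code; `student`, `teacher`, and `lambda_distill` are illustrative assumptions, and the NAS and stage scheduling are omitted) that combines a supervised translation loss on a few labeled pairs with online distillation from a larger teacher generator on unlabeled images:

```python
import torch
import torch.nn.functional as F

def ugc_style_step(student, teacher, labeled, unlabeled, lambda_distill=1.0):
    # Illustrative objective only; architecture search and staging are omitted.
    x_l, y_l = labeled                       # a few paired source/target images
    x_u = unlabeled                          # unlabeled source images

    # Label-efficient term: supervised reconstruction on the labeled pairs.
    loss_sup = F.l1_loss(student(x_l), y_l)

    # Model-efficient term: the slim student mimics the teacher on unlabeled data.
    with torch.no_grad():
        pseudo = teacher(x_u)
    loss_distill = F.l1_loss(student(x_u), pseudo)

    return loss_sup + lambda_distill * loss_distill

# Smoke test with stand-in 1x1-conv "generators".
student, teacher = torch.nn.Conv2d(3, 3, 1), torch.nn.Conv2d(3, 3, 1)
loss = ugc_style_step(student, teacher,
                      (torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64)),
                      torch.randn(4, 3, 64, 64))
loss.backward()
```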

GaFET: Learning Geometry-aware Facial Expression Translation from In-The-Wild Images

Aug 07, 2023
Tianxiang Ma, Bingchuan Li, Qian He, Jing Dong, Tieniu Tan

While current face animation methods can manipulate expressions individually, they suffer from several limitations. The expressions manipulated by some motion-based facial reenactment models are crude, while other approaches modeled with facial action units cannot generalize to arbitrary expressions not covered by annotations. In this paper, we introduce a novel Geometry-aware Facial Expression Translation (GaFET) framework, which is based on parametric 3D facial representations and can stably decouple expression. Within it, a Multi-level Feature Aligned Transformer is proposed to complement non-geometric facial detail features while addressing the alignment challenge of spatial features. Furthermore, we design a De-expression model based on StyleGAN to reduce the learning difficulty of GaFET on unpaired in-the-wild images. Extensive qualitative and quantitative experiments demonstrate that we achieve higher-quality and more accurate facial expression transfer than state-of-the-art methods, and show applicability to various poses and complex textures. Moreover, our method requires neither videos nor annotated training data, making it easier to use and generalize.

* Accepted by ICCV2023 
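
As a rough illustration of the feature-alignment idea described above (an assumption, not the paper's implementation), the sketch below lets geometry-derived tokens query image tokens via cross-attention at a single feature level; the actual Multi-level Feature Aligned Transformer operates across several levels:

```python
import torch
import torch.nn as nn

class FeatureAlignBlock(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, geom_tokens, img_tokens):
        # Geometry tokens (e.g. from a parametric 3D face) query image tokens,
        # so appearance details are re-sampled into the target expression layout.
        aligned, _ = self.attn(query=geom_tokens, key=img_tokens, value=img_tokens)
        return self.norm(geom_tokens + aligned)

block = FeatureAlignBlock()
geom = torch.randn(1, 196, 256)   # rendered-geometry feature tokens (assumed shape)
img = torch.randn(1, 196, 256)    # source-image feature tokens (assumed shape)
out = block(geom, img)            # (1, 196, 256)
```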

DreamIdentity: Improved Editability for Efficient Face-identity Preserved Image Generation

Jul 01, 2023
Zhuowei Chen, Shancheng Fang, Wei Liu, Qian He, Mengqi Huang, Yongdong Zhang, Zhendong Mao

While large-scale pre-trained text-to-image models can synthesize diverse and high-quality human-centric images, an intractable problem is how to preserve the face identity of conditioned face images. Existing methods either require time-consuming optimization for each face identity or learn an efficient encoder at the cost of harming the editability of the model. In this work, we present a method that is optimization-free for each face identity while keeping the editability of the text-to-image model. Specifically, we propose a novel face-identity encoder that learns an accurate representation of human faces, applying multi-scale face features followed by a multi-embedding projector to directly generate pseudo words in the text embedding space. In addition, we propose self-augmented editability learning to enhance the editability of the model, which constructs paired generated and edited face images using celebrity names, aiming to transfer the mature ability of off-the-shelf text-to-image models on celebrity faces to unseen faces. Extensive experiments show that our method can generate identity-preserved images under different scenes at a much faster speed.

* Project page: https://dreamidentity.github.io/ 
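
The sketch below shows the general shape such an encoder head could take (dimensions, module names, and the number of pseudo words are assumptions): multi-scale face features are projected and fused into a few pseudo-word vectors living in the text embedding space, which could then be spliced into a prompt:

```python
import torch
import torch.nn as nn

class MultiEmbedProjector(nn.Module):
    def __init__(self, feat_dims=(256, 512, 1024), text_dim=768, num_words=2):
        super().__init__()
        # One linear projection per feature scale (scales/dims are assumed).
        self.proj = nn.ModuleList([nn.Linear(d, text_dim) for d in feat_dims])
        self.out = nn.Linear(text_dim * len(feat_dims), text_dim * num_words)
        self.num_words, self.text_dim = num_words, text_dim

    def forward(self, feats):
        # feats: list of globally pooled face features, one tensor per scale.
        fused = torch.cat([p(f) for p, f in zip(self.proj, feats)], dim=-1)
        words = self.out(fused).view(-1, self.num_words, self.text_dim)
        return words  # pseudo-word embeddings, e.g. inserted at a placeholder token

proj = MultiEmbedProjector()
feats = [torch.randn(1, 256), torch.randn(1, 512), torch.randn(1, 1024)]
pseudo_words = proj(feats)   # (1, 2, 768)
```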

Design Booster: A Text-Guided Diffusion Model for Image Translation with Spatial Layout Preservation

Feb 05, 2023
Shiqi Sun, Shancheng Fang, Qian He, Wei Liu

Diffusion models are able to generate photorealistic images in arbitrary scenes. However, when applying diffusion models to image translation, there exists a trade-off between maintaining spatial structure and high-quality content. Moreover, existing methods are mainly based on test-time optimization or fine-tuning the model for each input image, which is extremely time-consuming for practical applications. To address these issues, we propose a new approach for flexible image translation that learns a layout-aware image condition together with a text condition. Specifically, our method co-encodes images and text into a new domain during the training phase. In the inference stage, we can choose images, text, or both as the conditions at each time step, which gives users more flexible control over layout and content. Experimental comparisons with state-of-the-art methods demonstrate that our model performs best in both style image translation and semantic image translation while requiring the shortest time.
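
The per-step condition selection can be illustrated with a simplified deterministic DDIM-style sampling loop (the `eps_model` interface, the noise schedule, and the switching rule are assumptions, not the paper's exact procedure):

```python
import torch

@torch.no_grad()
def sample(eps_model, alpha_bars, x_T, img_cond, txt_cond, img_frac=0.5):
    """alpha_bars: 1-D tensor of cumulative noise-schedule values, indexed by timestep."""
    x = x_T
    T = alpha_bars.numel()
    for i, t in enumerate(reversed(range(T))):
        # Early (noisy) steps keep the layout-aware image condition; later steps
        # may drop it so the text condition controls the content (assumed rule).
        img_c = img_cond if i < img_frac * T else None
        eps = eps_model(x, t, img_c, txt_cond)
        a_t = alpha_bars[t]
        a_prev = alpha_bars[t - 1] if t > 0 else torch.tensor(1.0)
        x0 = (x - (1 - a_t).sqrt() * eps) / a_t.sqrt()       # predicted clean latent
        x = a_prev.sqrt() * x0 + (1 - a_prev).sqrt() * eps   # deterministic DDIM step
    return x

# Example with a dummy noise predictor, just to show the call pattern.
dummy = lambda x, t, ic, tc: torch.zeros_like(x)
out = sample(dummy, torch.linspace(0.99, 0.01, 50), torch.randn(1, 4, 32, 32),
             img_cond=None, txt_cond=None)
```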

Semantic 3D-aware Portrait Synthesis and Manipulation Based on Compositional Neural Radiance Field

Feb 03, 2023
Tianxiang Ma, Bingchuan Li, Qian He, Jing Dong, Tieniu Tan

Recently, 3D-aware GAN methods based on neural radiance fields (NeRF) have developed rapidly. However, current methods model the whole image as a single overall neural radiance field, which limits the partial semantic editability of the synthesized results. Since NeRF renders an image pixel by pixel, it is possible to split NeRF in the spatial dimension. We propose a Compositional Neural Radiance Field (CNeRF) for semantic 3D-aware portrait synthesis and manipulation. CNeRF divides the image by semantic regions, learns an independent neural radiance field for each region, and finally fuses them to render the complete image. This allows us to manipulate the synthesized semantic regions independently while keeping the other parts unchanged. Furthermore, CNeRF is designed to decouple shape and texture within each semantic region. Compared to state-of-the-art 3D-aware GAN methods, our approach enables fine-grained semantic region manipulation while maintaining high-quality 3D-consistent synthesis. Ablation studies show the effectiveness of the structure and loss functions used by our method. In addition, real image inversion and cartoon portrait 3D editing experiments demonstrate the application potential of our method.

* Accepted by AAAI2023 
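
A minimal sketch of the general compositional idea (the fusion rule here is an assumption, not the paper's exact formulation): each semantic region has its own small radiance field; at every 3D sample, densities are summed and colors are fused with density-proportional weights before standard volume rendering:

```python
import torch
import torch.nn as nn

class TinyField(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(), nn.Linear(hidden, 4))

    def forward(self, pts):                       # pts: (N, 3) sample coordinates
        out = self.net(pts)
        sigma = torch.relu(out[..., :1])          # non-negative density
        rgb = torch.sigmoid(out[..., 1:])         # color in [0, 1]
        return sigma, rgb

def compose(fields, pts):
    sigmas, rgbs = zip(*[f(pts) for f in fields])
    sigma = torch.stack(sigmas).sum(0)                 # (N, 1) fused density
    weights = torch.stack(sigmas) / (sigma + 1e-8)     # per-region fusion weights
    rgb = (weights * torch.stack(rgbs)).sum(0)         # (N, 3) fused color
    return sigma, rgb

# e.g. hair / face / background regions (region split is an assumption)
fields = nn.ModuleList([TinyField() for _ in range(3)])
sigma, rgb = compose(fields, torch.randn(1024, 3))
```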

ReGANIE: Rectifying GAN Inversion Errors for Accurate Real Image Editing

Jan 31, 2023
Bingchuan Li, Tianxiang Ma, Peng Zhang, Miao Hua, Wei Liu, Qian He, Zili Yi

The StyleGAN family succeeds in high-fidelity image generation and allows flexible and plausible editing of generated images by manipulating the semantic-rich latent style space. However, projecting a real image into its latent space encounters an inherent trade-off between inversion quality and editability. Existing encoder-based or optimization-based StyleGAN inversion methods attempt to mitigate this trade-off but achieve limited performance. To fundamentally resolve this problem, we propose a novel two-phase framework that designates two separate networks to tackle editing and reconstruction respectively, instead of balancing the two. Specifically, in Phase I, a W-space-oriented StyleGAN inversion network is trained and used to perform image inversion and editing, which ensures editability but sacrifices reconstruction quality. In Phase II, a carefully designed rectifying network is utilized to rectify the inversion errors and achieve ideal reconstruction. Experimental results show that our approach yields near-perfect reconstructions without sacrificing editability, thus allowing accurate manipulation of real images. Furthermore, evaluating the rectifying network alone shows strong generalizability towards unseen manipulation types and out-of-domain images.
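
Schematically, the two-phase design might look like the sketch below (module interfaces and the residual-based rectifier input are assumptions): Phase I inverts into an editable W-space code, and Phase II rectifies the edited output using what Phase I failed to reconstruct:

```python
import torch
import torch.nn as nn

class TwoPhaseEditor(nn.Module):
    def __init__(self, encoder, generator, rectifier):
        super().__init__()
        self.encoder, self.generator, self.rectifier = encoder, generator, rectifier

    def forward(self, real_img, edit_fn=lambda w: w):
        w = self.encoder(real_img)              # Phase I: editable W-space code
        recon = self.generator(w)               # imperfect reconstruction
        edited = self.generator(edit_fn(w))     # editing happens in W space
        # Phase II: rectify using the part of the image Phase I could not recover.
        residual = real_img - recon
        return edited + self.rectifier(torch.cat([edited, residual], dim=1))

# Stand-in modules just to show the data flow; real ones would be StyleGAN-sized.
enc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 512))
gen = nn.Sequential(nn.Linear(512, 3 * 32 * 32), nn.Unflatten(1, (3, 32, 32)))
rect = nn.Conv2d(6, 3, 3, padding=1)
editor = TwoPhaseEditor(enc, gen, rect)
out = editor(torch.randn(2, 3, 32, 32))        # (2, 3, 32, 32)
```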

HRTransNet: HRFormer-Driven Two-Modality Salient Object Detection

Jan 08, 2023
Bin Tang, Zhengyi Liu, Yacheng Tan, Qian He

The High-Resolution Transformer (HRFormer) can maintain high-resolution representations and share global receptive fields, which makes it well suited to salient object detection (SOD), where the input and output have the same resolution. However, two critical problems need to be solved for two-modality SOD: fusing the two modalities and fusing the HRFormer outputs. To address the first problem, a supplementary modality is injected into the primary modality by using global optimization and an attention mechanism to select and purify the modality at the input level. To solve the second problem, a dual-direction short connection fusion module is used to optimize the output features of HRFormer, thereby enhancing the detailed representation of objects at the output level. The proposed model, named HRTransNet, first introduces an auxiliary stream to extract features from the supplementary modality. These features are then injected into the primary modality at the beginning of each multi-resolution branch. Next, HRFormer performs forward propagation. Finally, the output features at different resolutions are aggregated by intra-feature and inter-feature interactive transformers. The proposed model yields impressive improvements on two-modality SOD tasks, e.g., RGB-D, RGB-T, and light-field SOD. Code: https://github.com/liuzywen/HRTransNet

* TCSVT2022  
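
A small, simplified sketch of the input-level injection idea (the specific gating form is an assumption): the supplementary modality is weighted by channel attention and added into the primary features before the shared backbone:

```python
import torch
import torch.nn as nn

class ModalityInjection(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),          # global context of the supplementary modality
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),                     # per-channel selection weights
        )

    def forward(self, primary, supplementary):
        # Gated supplementary features are injected into the primary stream.
        return primary + self.gate(supplementary) * supplementary

inject = ModalityInjection()
rgb = torch.randn(2, 64, 56, 56)              # primary (RGB) features
depth = torch.randn(2, 64, 56, 56)            # supplementary (e.g. depth/thermal) features
fused = inject(rgb, depth)                    # same shape as the primary features
```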

HS-Diffusion: Learning a Semantic-Guided Diffusion Model for Head Swapping

Dec 13, 2022
Qinghe Wang, Lijie Liu, Miao Hua, Qian He, Pengfei Zhu, Bing Cao, Qinghua Hu

The image-based head swapping task aims to stitch a source head onto another source body flawlessly. This seldom-studied task faces two major challenges: 1) preserving the head and body from their respective sources while generating a seamless transition region; 2) the absence of a paired head swapping dataset and benchmark. In this paper, we propose an image-based head swapping framework (HS-Diffusion) which consists of a semantic-guided latent diffusion model (SG-LDM) and a semantic layout generator. We blend the semantic layouts of the source head and source body, and then inpaint the transition region with the semantic layout generator, achieving coarse-grained head swapping. Conditioned on the blended layout, SG-LDM further performs fine-grained head swapping through a progressive fusion process, while preserving the source head and body with high-quality reconstruction. To this end, we design a head-cover augmentation strategy for training and a neck alignment trick for geometric realism. Importantly, we construct a new image-based head swapping benchmark and propose two tailor-designed metrics (Mask-FID and Focal-FID). Extensive experiments demonstrate the superiority of our framework. The code will be available at https://github.com/qinghew/HS-Diffusion.
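
The layout-blending step might be sketched as follows (the label maps, the head-label set, and the seam-band construction are assumptions): head labels come from the head source, everything else from the body source, and a band around the seam is marked unknown for the layout generator to inpaint:

```python
import torch
import torch.nn.functional as F

def blend_layouts(head_seg, body_seg, head_labels, unknown_label=255, band=8):
    """head_seg/body_seg: (H, W) integer label maps from the two source images."""
    head_mask = torch.isin(head_seg, torch.tensor(head_labels))
    blended = torch.where(head_mask, head_seg, body_seg)

    # Mark a dilated band around the head/body boundary as unknown (to inpaint).
    m = head_mask.float()[None, None]
    dilated = F.max_pool2d(m, 2 * band + 1, stride=1, padding=band)
    eroded = 1 - F.max_pool2d(1 - m, 2 * band + 1, stride=1, padding=band)
    seam = (dilated - eroded).bool()[0, 0]
    blended[seam] = unknown_label
    return blended

head_seg = torch.randint(0, 5, (256, 256))
body_seg = torch.randint(0, 5, (256, 256))
layout = blend_layouts(head_seg, body_seg, head_labels=[1, 2])   # 1=face, 2=hair (assumed)
```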

An Integrated Constrained Gradient Descent (iCGD) Protocol to Correct Scan-Positional Errors for Electron Ptychography with High Accuracy and Precision

Nov 06, 2022
Shoucong Ning, Wenhui Xu, Leyi Loh, Zhen Lu, Michel Bosman, Fucai Zhang, Qian He

Correcting scan-positional errors is critical for achieving electron ptychography with both high resolution and high precision. This is a demanding and challenging task due to the sheer number of parameters that need to be optimized. For atomic-resolution ptychographic reconstructions, we found classical methods for refining scan positions unsatisfactory because of the inherent entanglement between the object and the scan positions, which can produce systematic errors in the results. Here, we propose a new protocol consisting of a series of constrained gradient descent (CGD) methods to achieve better recovery of scan positions. The central idea of these CGD methods is to utilize a priori knowledge about the nature of STEM experiments and add the constraints necessary to isolate different types of scan-positional errors during the iterative reconstruction process. Each constraint is introduced with the help of simulated 4D-STEM datasets with known positional errors. The integrated constrained gradient descent (iCGD) protocol is then demonstrated on an experimental 4D-STEM dataset of a 1H-MoS2 monolayer. We show that the iCGD protocol can effectively address scan-positional errors across the spectrum and helps to achieve electron ptychography with high accuracy and precision.
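
A numerical sketch of the constrained-update idea (the specific constraints shown, a global shift and a best-fit affine drift of the scan grid, are illustrative assumptions rather than the paper's exact set): after a plain gradient step on the scan positions, components that are degenerate with the object are projected out so only local positional errors remain:

```python
import numpy as np

def constrained_position_update(pos, grad, lr=0.1):
    """pos, grad: (N, 2) arrays of scan positions and their gradients."""
    new = pos - lr * grad

    # Constraint 1 (assumed): no global translation of the whole scan grid.
    new -= (new.mean(axis=0) - pos.mean(axis=0))

    # Constraint 2 (assumed): no global affine drift; keep only residual local errors.
    A = np.hstack([pos - pos.mean(axis=0), np.ones((len(pos), 1))])
    coef, *_ = np.linalg.lstsq(A, new - pos, rcond=None)
    affine_part = A @ coef
    return pos + (new - pos - affine_part)

pos = np.random.rand(64, 2)
grad = 0.01 * np.random.randn(64, 2)
updated = constrained_position_update(pos, grad)
```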

Part-aware Prototypical Graph Network for One-shot Skeleton-based Action Recognition

Aug 19, 2022
Tailin Chen, Desen Zhou, Jian Wang, Shidong Wang, Qian He, Chuanyang Hu, Errui Ding, Yu Guan, Xuming He

In this paper, we study the problem of one-shot skeleton-based action recognition, which poses unique challenges in learning a transferable representation from base classes to novel classes, particularly for fine-grained actions. Existing meta-learning frameworks typically rely on body-level representations in the spatial dimension, which limits their generalisation to the subtle visual differences in the fine-grained label space. To overcome this limitation, we propose a part-aware prototypical representation for one-shot skeleton-based action recognition. Our method captures skeleton motion patterns at two distinctive spatial levels: one covering global contexts among all body joints, referred to as the body level, and the other attending to local spatial regions of body parts, referred to as the part level. We also devise a class-agnostic attention mechanism to highlight the important parts for each action class. Specifically, we develop a part-aware prototypical graph network consisting of three modules: a cascaded embedding module for our dual-level modelling, an attention-based part fusion module to fuse parts and generate part-aware prototypes, and a matching module to perform classification with the part-aware representations. We demonstrate the effectiveness of our method on two public skeleton-based action recognition datasets: NTU RGB+D 120 and NW-UCLA.

* one-shot, action recognition, skeleton, part-aware, graph 
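
A compact sketch of part-aware prototype matching in the one-shot setting (the part grouping, feature dimensions, and attention form are assumptions): per-joint embeddings are pooled into per-part features, one prototype per novel class is built from the single support example, and queries are classified by attention-weighted part-wise distances:

```python
import torch
import torch.nn.functional as F

# Assumed grouping of 12 joints into 4 body parts; real skeletons differ.
PARTS = [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10, 11]]

def part_features(joint_feats):
    """joint_feats: (B, J, D) per-joint embeddings -> (B, P, D) per-part embeddings."""
    return torch.stack([joint_feats[:, idx].mean(1) for idx in PARTS], dim=1)

def one_shot_logits(support, query, part_attn):
    """support: (N, J, D), one example per novel class; query: (Q, J, D)."""
    protos = part_features(support)                     # (N, P, D) part-aware prototypes
    q = part_features(query)                            # (Q, P, D)
    d = ((q[:, None] - protos[None]) ** 2).sum(-1)      # (Q, N, P) per-part distances
    # Class-agnostic attention weights the parts, then distances are aggregated.
    return -(d * F.softmax(part_attn, dim=0)).sum(-1)   # (Q, N) similarity logits

support = torch.randn(5, 12, 64)          # 5 novel classes, one labeled skeleton each
query = torch.randn(3, 12, 64)
logits = one_shot_logits(support, query, part_attn=torch.zeros(4))
pred = logits.argmax(dim=1)               # nearest part-aware prototype
```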