Jiancheng Huang

GPT4Motion: Scripting Physical Motions in Text-to-Video Generation via Blender-Oriented GPT Planning

Nov 21, 2023
Jiaxi Lv, Yi Huang, Mingfu Yan, Jiancheng Huang, Jianzhuang Liu, Yifan Liu, Yafei Wen, Xiaoxin Chen, Shifeng Chen

Recent advances in text-to-video generation have harnessed the power of diffusion models to create visually compelling content conditioned on text prompts. However, they usually incur high computational costs and often struggle to produce videos with coherent physical motions. To tackle these issues, we propose GPT4Motion, a training-free framework that leverages the planning capability of large language models such as GPT, the physical simulation strength of Blender, and the excellent image generation ability of text-to-image diffusion models to enhance the quality of video synthesis. Specifically, GPT4Motion employs GPT-4 to generate a Blender script from a user's textual prompt, which commands Blender's built-in physics engine to craft fundamental scene components that encapsulate coherent physical motions across frames. These components are then fed into Stable Diffusion to generate a video aligned with the textual prompt. Experimental results on three basic physical motion scenarios, including rigid object drop and collision, cloth draping and swinging, and liquid flow, demonstrate that GPT4Motion can generate high-quality videos efficiently while maintaining motion coherency and entity consistency. GPT4Motion offers new insights into text-to-video research, enhancing its quality and broadening its horizon for future explorations.
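The pipeline described above lends itself to a short outline. The following is a minimal, illustrative sketch only: it assumes a hypothetical ask_gpt4() helper for the LLM call, Blender available on the command line, and a ControlNet-style conditioning stage standing in for Stable Diffusion; none of these names or defaults come from the paper.

```python
# Minimal sketch of the GPT4Motion-style pipeline (not the authors' code).
import subprocess
from pathlib import Path

def ask_gpt4(prompt: str) -> str:
    """Hypothetical helper: return a Blender Python script for the given motion prompt."""
    raise NotImplementedError("wire this to your LLM API of choice")

def simulate_with_blender(script: str, out_dir: Path) -> list[Path]:
    """Run the generated script headlessly; Blender's physics engine bakes the motion
    and renders one conditioning image (e.g. edges/depth) per frame into out_dir."""
    script_path = out_dir / "scene.py"
    script_path.write_text(script)
    subprocess.run(["blender", "--background", "--python", str(script_path)], check=True)
    return sorted(out_dir.glob("frame_*.png"))

def main(user_prompt: str) -> None:
    out_dir = Path("conditions")
    out_dir.mkdir(exist_ok=True)
    blender_script = ask_gpt4(
        f"Write a Blender Python script whose physics simulation matches: {user_prompt}. "
        f"Render per-frame edge and depth maps to {out_dir}/frame_####.png."
    )
    condition_frames = simulate_with_blender(blender_script, out_dir)
    # Each conditioning frame would then be fed, together with the text prompt, into a
    # ControlNet-conditioned Stable Diffusion model to synthesise the final video frames.
    for frame in condition_frames:
        print("condition frame ready:", frame)

if __name__ == "__main__":
    main("a basketball drops onto the floor and bounces")
```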

Bootstrap Diffusion Model Curve Estimation for High Resolution Low-Light Image Enhancement

Oct 02, 2023
Jiancheng Huang, Yifan Liu, Shifeng Chen

Learning-based methods have attracted a lot of research attention and led to significant improvements in low-light image enhancement. However, most of them still suffer from two main problems: high computational cost on high-resolution images and unsatisfactory performance in simultaneous enhancement and denoising. To address these problems, we propose BDCE, a bootstrap diffusion model that learns the distribution of the curve parameters instead of the normal-light image itself. Specifically, we adopt a curve estimation method to handle high-resolution images, where the curve parameters are estimated by our bootstrap diffusion model. In addition, a denoising module is applied in each iteration of curve adjustment to denoise the intermediate enhanced result. We evaluate BDCE on commonly used benchmark datasets, and extensive experiments show that it achieves state-of-the-art qualitative and quantitative performance.
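As a rough illustration of the curve-adjustment loop described above, here is a minimal sketch assuming a Zero-DCE-style quadratic curve x <- x + a * x * (1 - x) and a pluggable denoising step; BDCE's exact curve parameterisation and its bootstrap diffusion estimator are not reproduced here.

```python
# Iterative per-pixel curve adjustment with an optional denoising step (illustrative only).
import torch
import torch.nn.functional as F

def apply_curves(low_light: torch.Tensor, curve_params: torch.Tensor,
                 denoise=lambda x: x) -> torch.Tensor:
    """low_light: (B,3,H,W) in [0,1]; curve_params: (B, n_iters*3, H', W') per-pixel
    curve maps, predicted at low resolution and upsampled to full resolution."""
    x = low_light
    n_iters = curve_params.shape[1] // 3
    params = F.interpolate(curve_params, size=x.shape[-2:],
                           mode="bilinear", align_corners=False)
    for i in range(n_iters):
        a = params[:, 3 * i: 3 * (i + 1)]
        x = x + a * x * (1.0 - x)      # one curve-adjustment step
        x = denoise(x)                 # denoise the intermediate enhanced result
    return x.clamp(0.0, 1.0)

# Example with random tensors standing in for an image and predicted parameters.
img = torch.rand(1, 3, 1024, 1024)
params = torch.rand(1, 8 * 3, 256, 256) * 0.2   # 8 iterations of per-pixel curves
print(apply_curves(img, params).shape)
```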

Seal2Real: Prompt Prior Learning on Diffusion Model for Unsupervised Document Seal Data Generation and Realisation

Oct 01, 2023
Jiancheng Huang, Yifan Liu, Yi Huang, Shifeng Chen

In document processing, seal-related tasks have broad commercial applications, such as seal segmentation, seal authenticity discrimination, seal removal, and text recognition under seals. However, these tasks depend heavily on labelled document seal datasets, and consequently little work has been done on them. To address the lack of labelled datasets, we propose Seal2Real, a generative method that produces a large amount of labelled document seal data, and construct Seal-DB, a dataset containing 20K images with labels. In Seal2Real, we propose a prompt prior learning architecture based on a pre-trained Stable Diffusion model that migrates the prior generative power of the pre-trained model to our seal generation task with unsupervised training. The realistic seal generation capability greatly facilitates the performance of downstream seal-related tasks on real data. Experimental results on the Seal-DB dataset demonstrate the effectiveness of Seal2Real.

IFAST: Weakly Supervised Interpretable Face Anti-spoofing from Single-shot Binocular NIR Images

Sep 29, 2023
Jiancheng Huang, Donghao Zhou, Shifeng Chen

Single-shot face anti-spoofing (FAS) is a key technique for securing face recognition systems, and it requires only static images as input. However, single-shot FAS remains a challenging and under-explored problem for two main reasons: 1) on the data side, learning FAS from RGB images is largely context-dependent, and single-shot images without additional annotations contain limited semantic information; 2) on the model side, existing single-shot FAS models cannot provide proper evidence for their decisions, and FAS methods based on depth estimation require expensive per-pixel annotations. To address these issues, we construct and publish a large binocular NIR image dataset (BNI-FAS), which contains more than 300,000 real face and plane attack images, and propose an Interpretable FAS Transformer (IFAST) that requires only weak supervision to produce interpretable predictions. IFAST produces pixel-wise disparity maps via the proposed disparity estimation Transformer with a Dynamic Matching Attention (DMA) block. In addition, a well-designed confidence map generator cooperates with the proposed dual-teacher distillation module to obtain the final discriminant results. Comprehensive experiments show that IFAST achieves state-of-the-art results on BNI-FAS, proving the effectiveness of single-shot FAS based on binocular NIR images.
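For the distillation component mentioned above, a generic dual-teacher loss can be sketched as follows. This is an assumption-level illustration (simple KL terms against two teachers, optionally weighted by a confidence estimate), not IFAST's actual formulation, and the DMA block is not reproduced.

```python
# Generic dual-teacher distillation loss (illustrative sketch).
import torch
import torch.nn.functional as F

def dual_teacher_loss(student_logits, teacher1_logits, teacher2_logits,
                      labels, confidence=None, temperature=2.0, alpha=0.5):
    """student/teacher logits: (B, num_classes); labels: (B,); confidence: optional (B,)."""
    ce = F.cross_entropy(student_logits, labels, reduction="none")
    log_p = F.log_softmax(student_logits / temperature, dim=-1)
    kd1 = F.kl_div(log_p, F.softmax(teacher1_logits / temperature, dim=-1),
                   reduction="none").sum(-1)
    kd2 = F.kl_div(log_p, F.softmax(teacher2_logits / temperature, dim=-1),
                   reduction="none").sum(-1)
    loss = (1 - alpha) * ce + alpha * 0.5 * (kd1 + kd2) * temperature ** 2
    if confidence is not None:          # weight samples by an estimated confidence
        loss = loss * confidence
    return loss.mean()
```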

KV Inversion: KV Embeddings Learning for Text-Conditioned Real Image Action Editing

Sep 28, 2023
Jiancheng Huang, Yifan Liu, Jin Qin, Shifeng Chen

Text-conditioned image editing is a recently emerged and highly practical task with great potential. However, most existing methods are unable to perform action editing, i.e., they cannot produce results that conform to the action semantics of the editing prompt while preserving the content of the original image. To solve the action editing problem, we propose KV Inversion, a method that achieves satisfactory reconstruction and action editing by addressing two major requirements: 1) the edited result should match the intended action, and 2) the edited object should retain the texture and identity of the original real image. In addition, our method requires neither training the Stable Diffusion model itself nor time-consuming training on large-scale datasets.
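The core idea of reusing inversion-time key/value embeddings during editing can be sketched roughly as below. The tensor shapes, the kv_store structure, and the injection policy are illustrative assumptions, not the authors' implementation.

```python
# Attention call that injects keys/values saved from the reconstruction pass (sketch).
import torch
import torch.nn.functional as F

def attention_with_injected_kv(q, k_edit, v_edit, kv_store, layer_id, inject=True):
    """q, k_edit, v_edit: (B, heads, tokens, dim). kv_store maps layer_id -> (k, v)
    tensors recorded (or learned) during the reconstruction/inversion pass."""
    if inject and layer_id in kv_store:
        k, v = kv_store[layer_id]     # reuse source keys/values to preserve content
    else:
        k, v = k_edit, v_edit         # fall back to the editing pass's own K/V
    return F.scaled_dot_product_attention(q, k, v)
```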

FEC: Three Finetuning-free Methods to Enhance Consistency for Real Image Editing

Sep 26, 2023
Songyan Chen, Jiancheng Huang

Text-conditioned image editing is a very useful task that has recently emerged with great potential. Most current real-image editing methods first reconstruct the image and then carry out editing with various methods based on that reconstruction. Most of them use DDIM Inversion for reconstruction; however, DDIM Inversion often fails to guarantee reconstruction performance, i.e., it fails to produce results that preserve the original image content. To address this reconstruction failure, we propose FEC, which consists of three sampling methods, each designed for different editing types and settings. The three sampling methods of FEC achieve two important goals in image editing: 1) ensuring successful reconstruction, i.e., sampling results that preserve the texture and features of the original real image, and 2) pairing with many editing methods and greatly improving their performance on various editing tasks. In addition, none of our sampling methods require fine-tuning of the diffusion model or time-consuming training on large-scale datasets, so the time, memory, and computation costs are significantly reduced.
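For context, the standard DDIM inversion loop whose reconstruction failures motivate FEC can be sketched as follows. The eps_model interface, the schedule handling, and the timestep layout are illustrative assumptions, and FEC's three sampling variants themselves are not reproduced.

```python
# Standard (deterministic) DDIM inversion of a clean latent, as a baseline sketch.
import torch

@torch.no_grad()
def ddim_invert(x0, eps_model, alphas_cumprod, timesteps, prompt_emb):
    """x0: clean latent (B,C,H,W); alphas_cumprod: (T,) schedule; timesteps: increasing
    list of t values. Returns the latent pushed forward to the last timestep."""
    x = x0
    for i in range(len(timesteps) - 1):
        t, t_next = timesteps[i], timesteps[i + 1]
        a_t, a_next = alphas_cumprod[t], alphas_cumprod[t_next]
        eps = eps_model(x, t, prompt_emb)
        x0_pred = (x - (1 - a_t).sqrt() * eps) / a_t.sqrt()      # predicted clean latent
        x = a_next.sqrt() * x0_pred + (1 - a_next).sqrt() * eps  # deterministic step t -> t_next
    return x
```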

Graph Edit Distance Learning via Different Attention

Aug 26, 2023
Jiaxi Lv, Liang Zhang, Yi Huang, Jiancheng Huang, Shifeng Chen

Recently, more and more research has focused on using Graph Neural Networks (GNNs) to solve the Graph Similarity Computation (GSC) problem, i.e., computing the Graph Edit Distance (GED) between two graphs. These methods treat GSC as an end-to-end learnable task, and the core of their architectures is the feature fusion module that lets the features of the two graphs interact. Existing methods assume that graph-level embeddings struggle to capture the differences in small local structures between two graphs, and that performing fine-grained feature fusion on node-level embeddings can therefore improve accuracy, at the cost of greater time and memory consumption in the training and inference phases. In contrast, this paper proposes a novel graph-level fusion module, Different Attention (DiffAtt), and demonstrates that graph-level fusion embeddings can substantially outperform these complex node-level fusion embeddings. We posit that the relative difference structure of the two graphs plays an important role in calculating their GED values. To this end, DiffAtt uses the difference between the two graph-level embeddings as an attention mechanism to capture the structural difference of the two graphs. Based on DiffAtt, a new GSC method, Graph Edit Distance Learning via Different Attention (REDRAFT), is proposed, and experimental results demonstrate that REDRAFT achieves state-of-the-art performance on 23 of 25 metrics across five benchmark datasets. In particular, on MSE it outperforms the second best by 19.9%, 48.8%, 29.1%, 31.6%, and 2.2%, respectively. Moreover, we propose a quantitative test, the Remaining Subgraph Alignment Test (RESAT), to verify that among all graph-level fusion modules, the fusion embedding generated by DiffAtt best captures the structural differences between two graphs.
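A difference-driven graph-level fusion in the spirit of DiffAtt might look like the sketch below; the gating and scoring layers are illustrative assumptions and may differ from the paper's module.

```python
# Graph-level fusion gated by the difference of the two graph embeddings (sketch).
import torch
import torch.nn as nn

class DiffAttention(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.score = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, h1: torch.Tensor, h2: torch.Tensor) -> torch.Tensor:
        """h1, h2: (B, dim) graph-level embeddings; returns a predicted GED score."""
        att = torch.sigmoid(self.gate(h1 - h2))       # attention from the embedding difference
        fused = torch.cat([att * h1, att * h2], dim=-1)
        return self.score(fused).squeeze(-1)

ged = DiffAttention(128)(torch.randn(4, 128), torch.randn(4, 128))
print(ged.shape)
```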

Distribution-Aware Calibration for Object Detection with Noisy Bounding Boxes

Aug 23, 2023
Donghao Zhou, Jialin Li, Jinpeng Li, Jiancheng Huang, Qiang Nie, Yong Liu, Bin-Bin Gao, Qiong Wang, Pheng-Ann Heng, Guangyong Chen

Large-scale well-annotated datasets are of great importance for training an effective object detector. However, obtaining accurate bounding box annotations is laborious and demanding, and the resultant noisy bounding boxes can corrupt supervision signals and thus diminish detection performance. Motivated by the observation that the real ground-truth is usually situated in the aggregation region of the proposals assigned to a noisy ground-truth, we propose DIStribution-aware CalibratiOn (DISCO), which models the spatial distribution of proposals to calibrate supervision signals. In DISCO, spatial distribution modeling is performed to statistically extract the potential locations of objects. Based on the modeled distribution, three distribution-aware techniques, i.e., distribution-aware proposal augmentation (DA-Aug), distribution-aware box refinement (DA-Ref), and distribution-aware confidence estimation (DA-Est), are developed to improve classification, localization, and interpretability, respectively. Extensive experiments on large-scale noisy image datasets (i.e., Pascal VOC and MS-COCO) demonstrate that DISCO achieves state-of-the-art detection performance, especially at high noise levels.
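The distribution-modeling step can be illustrated with a minimal sketch that summarises the proposals assigned to one noisy ground-truth by their per-coordinate mean and variance; this is only an assumption-level rendering, and DA-Aug, DA-Ref, and DA-Est are not reproduced.

```python
# Summarise proposals assigned to one noisy ground-truth box (illustrative sketch).
import torch

def model_proposal_distribution(proposals: torch.Tensor):
    """proposals: (N, 4) boxes (x1, y1, x2, y2) assigned to a single noisy ground-truth.
    Returns the per-coordinate mean (a statistically refined box) and the variance
    (a rough signal for how concentrated the proposals are)."""
    mean_box = proposals.mean(dim=0)
    var_box = proposals.var(dim=0, unbiased=False)
    return mean_box, var_box

boxes = torch.tensor([[10., 12., 50., 60.], [12., 11., 52., 58.], [9., 13., 49., 61.]])
refined, spread = model_proposal_distribution(boxes)
print(refined, spread)
```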

* 12 pages, 9 figures 

WaveDM: Wavelet-Based Diffusion Models for Image Restoration

May 23, 2023
Yi Huang, Jiancheng Huang, Jianzhuang Liu, Yu Dong, Jiaxi Lv, Shifeng Chen

The latest diffusion-based methods for many image restoration tasks outperform traditional models, but they suffer from long inference times. To tackle this, this paper proposes a Wavelet-Based Diffusion Model (WaveDM) with an Efficient Conditional Sampling (ECS) strategy. WaveDM learns the distribution of clean images in the wavelet domain, conditioned on the wavelet spectrum of degraded images after the wavelet transform, which makes each sampling step cheaper than modeling in the spatial domain. In addition, ECS follows the same procedure as deterministic implicit sampling in the initial sampling period and then stops to predict clean images directly, which reduces the total number of sampling steps to around five. Evaluations on four benchmark datasets covering image raindrop removal, defocus deblurring, demoiréing, and denoising demonstrate that WaveDM achieves state-of-the-art performance with efficiency comparable to traditional one-pass methods and over 100 times faster than existing image restoration methods that use vanilla diffusion models.
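The wavelet-domain conditioning can be sketched as below using a single-level 2-D DWT; the choice of the Haar wavelet and the stacked sub-band layout are assumptions for illustration, and the diffusion network and the ECS schedule itself are not reproduced.

```python
# Prepare a wavelet-domain conditioning tensor from a degraded image (sketch).
import numpy as np
import pywt

def wavelet_condition(degraded: np.ndarray) -> np.ndarray:
    """degraded: (H, W) grayscale image in [0, 1]. Returns the four DWT sub-bands
    stacked as a (4, H/2, W/2) conditioning tensor; operating on these smaller
    sub-bands is what makes each diffusion sampling step cheaper."""
    cA, (cH, cV, cD) = pywt.dwt2(degraded, "haar")
    return np.stack([cA, cH, cV, cD], axis=0)

cond = wavelet_condition(np.random.rand(256, 256))
print(cond.shape)   # (4, 128, 128)
```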
