Zhengjie Xu

Perceptual Artifacts Localization for Image Synthesis Tasks

Oct 09, 2023
Lingzhi Zhang, Zhengjie Xu, Connelly Barnes, Yuqian Zhou, Qing Liu, He Zhang, Sohrab Amirghodsi, Zhe Lin, Eli Shechtman, Jianbo Shi

Recent advancements in deep generative models have facilitated the creation of photo-realistic images across various tasks. However, these generated images often exhibit perceptual artifacts in specific regions, necessitating manual correction. In this study, we present a comprehensive empirical examination of Perceptual Artifacts Localization (PAL) spanning diverse image synthesis endeavors. We introduce a novel dataset comprising 10,168 generated images, each annotated with per-pixel perceptual artifact labels across ten synthesis tasks. A segmentation model, trained on our proposed dataset, effectively localizes artifacts across a range of tasks. Additionally, we illustrate its proficiency in adapting to previously unseen models using minimal training samples. We further propose an innovative zoom-in inpainting pipeline that seamlessly rectifies perceptual artifacts in the generated images. Through our experimental analyses, we elucidate several practical downstream applications, such as automated artifact rectification, non-referential image quality evaluation, and abnormal region detection in images. The dataset and code are released.
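The following is a minimal, hypothetical sketch of how perceptual artifact localization could be framed as binary per-pixel segmentation (artifact vs. clean), in the spirit of the abstract. The backbone choice (DeepLabV3-ResNet50) and the `artifact_loader` of (image, mask) pairs are assumptions for illustration, not the authors' actual architecture or training setup.

```python
# Hypothetical training loop: per-pixel artifact localization as 2-class segmentation.
import torch
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(weights=None, num_classes=2)  # classes: clean / artifact
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# artifact_loader is assumed to yield generated images with per-pixel artifact masks:
# images: (B, 3, H, W) float tensors, masks: (B, H, W) with values in {0, 1}.
for images, masks in artifact_loader:
    logits = model(images)["out"]       # (B, 2, H, W) per-pixel class scores
    loss = criterion(logits, masks.long())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# At inference, an argmax over the class dimension gives a per-pixel artifact map,
# which can then be passed to an inpainting model to rectify the flagged regions.
```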

HashEncoding: Autoencoding with Multiscale Coordinate Hashing

Nov 29, 2022
Lukas Zhornyak, Zhengjie Xu, Haoran Tang, Jianbo Shi

We present HashEncoding, a novel autoencoding architecture that leverages a non-parametric multiscale coordinate hash function to facilitate a per-pixel decoder without convolutions. By leveraging the space-folding behaviour of hashing functions, HashEncoding allows for an inherently multiscale embedding space that remains much smaller than the original image. As a result, the decoder requires very few parameters compared with decoders in traditional autoencoders, approaching a non-parametric reconstruction of the original image and allowing for greater generalizability. Finally, by allowing backpropagation directly to the coordinate space, we show that HashEncoding can be exploited for geometric tasks such as optical flow.
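Below is a minimal, illustrative sketch of a multiscale coordinate hash encoder paired with a small convolution-free per-pixel decoder, in the spirit of the abstract. The hash function (large-prime XOR), table sizes, level count, and module names are assumptions for illustration, not the paper's implementation.

```python
# Illustrative sketch: multiscale 2D coordinate hashing + per-pixel MLP decoder.
import torch
import torch.nn as nn

class MultiscaleHashEncoder2D(nn.Module):
    def __init__(self, n_levels=8, table_size=2**14, feat_dim=2,
                 base_res=16, growth=1.5):
        super().__init__()
        self.table_size = table_size
        self.resolutions = [int(base_res * growth ** i) for i in range(n_levels)]
        # One learnable feature table per resolution level.
        self.tables = nn.ParameterList(
            [nn.Parameter(1e-4 * torch.randn(table_size, feat_dim))
             for _ in range(n_levels)]
        )

    @staticmethod
    def _hash(ix, iy, table_size):
        # Spatial hash of integer grid coordinates via a large-prime XOR scheme.
        return ((ix * 73856093) ^ (iy * 19349663)) % table_size

    def forward(self, coords):
        # coords: (N, 2) in [0, 1]. Returns per-pixel features of shape
        # (N, n_levels * feat_dim), concatenated across resolution levels.
        feats = []
        for level, res in enumerate(self.resolutions):
            xy = coords * (res - 1)
            xy0 = xy.floor().long()
            frac = xy - xy0.float()
            corner_feats = []
            for dx in (0, 1):
                for dy in (0, 1):
                    ix = (xy0[:, 0] + dx).clamp(0, res - 1)
                    iy = (xy0[:, 1] + dy).clamp(0, res - 1)
                    idx = self._hash(ix, iy, self.table_size)
                    w = ((1 - frac[:, 0]) if dx == 0 else frac[:, 0]) * \
                        ((1 - frac[:, 1]) if dy == 0 else frac[:, 1])
                    corner_feats.append(w.unsqueeze(1) * self.tables[level][idx])
            feats.append(sum(corner_feats))  # bilinear blend of 4 hashed corners
        return torch.cat(feats, dim=-1)

class PerPixelDecoder(nn.Module):
    # Tiny MLP mapping each pixel's hash features to RGB, with no convolutions.
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)
```

Because the encoder takes continuous coordinates as input, gradients can flow back into the coordinate space itself, which is the property the abstract exploits for geometric tasks such as optical flow.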
