No-reference image quality assessment (NR-IQA) is a fundamental yet challenging task in low-level computer vision: it aims to predict the perceptual quality of an image with unknown distortion, and its difficulty is particularly pronounced because no reference image is available for comparison. Various feature-extraction mechanisms, ranging from natural scene statistics to deep features, have been leveraged to boost NR-IQA performance. However, these methods treat images with different degradations identically, leaving the representations of distinct distortions under-exploited. Furthermore, identifying the distortion type should be an important part of NR-IQA, yet it is rarely addressed in previous methods. In this work, we propose domain-aware no-reference image quality assessment (DA-NR-IQA), which for the first time exploits and disentangles the distinct representations of different degradations to assess image quality. Benefiting from its domain-aware architecture, our method can simultaneously identify the distortion type of an image. With both the by-product distortion type and the quality score determined, the distortion in an image can be better characterized and the image quality more precisely assessed. Extensive experiments show that the proposed DA-NR-IQA outperforms almost all other state-of-the-art methods.
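To make the domain-aware design concrete, here is a minimal PyTorch sketch of one plausible instantiation: a shared backbone feeds both a distortion-type classifier and a bank of per-distortion quality heads, with the predicted type routing each image to a head. All layer sizes and module names are illustrative assumptions, not the actual DA-NR-IQA architecture.

```python
import torch
import torch.nn as nn

class DomainAwareIQA(nn.Module):
    """Illustrative sketch: shared features, a distortion-type classifier,
    and one quality-regression head per distortion domain (all hypothetical)."""
    def __init__(self, num_distortions: int, feat_dim: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(                       # toy feature extractor
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.distortion_head = nn.Linear(feat_dim, num_distortions)
        self.quality_heads = nn.ModuleList(
            [nn.Linear(feat_dim, 1) for _ in range(num_distortions)]
        )

    def forward(self, x):
        feat = self.backbone(x)
        logits = self.distortion_head(feat)                  # by-product: distortion type
        domains = logits.argmax(dim=1)
        # route each image through the quality head of its predicted domain
        scores = torch.stack([self.quality_heads[int(d)](f)
                              for d, f in zip(domains, feat)]).squeeze(-1)
        return scores, logits

model = DomainAwareIQA(num_distortions=5)
quality, dist_logits = model(torch.randn(2, 3, 224, 224))
```

The hard routing via argmax is only one option; a soft mixture weighted by the classifier's probabilities would keep the whole model differentiable end to end.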
Image-to-image translation has drawn great attention in recent years. It aims to translate an image from one domain into another, guided by a given reference image in the target domain. Thanks to its effectiveness and efficiency, many applications can be formulated as image-to-image translation problems. However, three main challenges remain: 1) the lack of large amounts of aligned training pairs for different tasks; 2) the ambiguity of multiple possible outputs for a single input image; and 3) the inability to train on multiple datasets from different domains simultaneously within a single network. We also found in experiments that implicit disentanglement of content and style can lead to unexpected results. In this paper, we propose a unified framework that learns to generate diverse outputs from unpaired training data and allows simultaneous training on multiple datasets from different domains with a single network. Furthermore, we investigate how to better extract domain supervision information so as to learn better-disentangled representations and achieve better image translation. Experiments show that the proposed method outperforms or is comparable with state-of-the-art methods.
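As a rough illustration of explicit content/style disentanglement with domain supervision, the following hypothetical sketch pairs a shared content encoder with a style encoder and a learned domain embedding, so a single decoder can serve several domains. Every module, name, and dimension here is an assumption for illustration, not the paper's actual framework.

```python
import torch
import torch.nn as nn

class MultiDomainTranslator(nn.Module):
    """Hypothetical sketch of content/style disentanglement with an
    explicit domain embedding; all layer sizes are illustrative."""
    def __init__(self, num_domains: int, content_ch: int = 64, style_dim: int = 8):
        super().__init__()
        self.content_enc = nn.Sequential(                    # domain-invariant content
            nn.Conv2d(3, content_ch, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.style_enc = nn.Sequential(                      # domain-specific style code
            nn.Conv2d(3, style_dim, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.domain_emb = nn.Embedding(num_domains, style_dim)
        self.decoder = nn.Sequential(                        # recombine content + style
            nn.ConvTranspose2d(content_ch + 2 * style_dim, 3,
                               4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x, style_ref, target_domain):
        c = self.content_enc(x)
        # style from a reference image plus an explicit domain label:
        # the label carries the domain supervision across datasets
        s = torch.cat([self.style_enc(style_ref),
                       self.domain_emb(target_domain)], dim=1)
        s_map = s[:, :, None, None].expand(-1, -1, c.size(2), c.size(3))
        return self.decoder(torch.cat([c, s_map], dim=1))

model = MultiDomainTranslator(num_domains=3)
x, ref = torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64)
out = model(x, ref, torch.tensor([0, 2]))   # translate two images into two domains
```

Sampling the style code from a prior instead of a reference image would yield the diverse (one-to-many) outputs the abstract mentions.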
Image generation has received increasing attention because of its wide applications in security and entertainment. Sketch-based face generation makes image generation more engaging and improves its quality through supervised interaction. However, when the input sketch is poorly aligned with the true face, existing supervised image-to-image translation methods often fail to generate acceptable photo-realistic face images. To address this problem, we propose Cali-Sketch, a method that generates photo-realistic images from poorly drawn sketches. Cali-Sketch explicitly models stroke calibration and image generation with two constituent networks: a Stroke Calibration Network (SCN), which calibrates the strokes of facial features and enriches facial details while preserving the original intent; and an Image Synthesis Network (ISN), which translates the calibrated and enriched sketches into photo-realistic face images. In this way, we decouple a difficult cross-domain translation problem into two easier steps. Extensive experiments verify that, compared with state-of-the-art methods, the face photos generated by Cali-Sketch are both photo-realistic and faithful to the input sketches.
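The two-step decoupling can be illustrated with a minimal, hypothetical sketch: a stroke-calibration stage that predicts a residual refinement of the input sketch (so the original intent is preserved), followed by a synthesis stage that maps the calibrated sketch to RGB. The layer choices below are placeholders, not the actual SCN/ISN designs.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU())

class StrokeCalibrationNet(nn.Module):
    """Stage 1 (hypothetical): refine a poorly drawn one-channel sketch
    into a cleaner, detail-enriched sketch of the same size."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(1, 32), conv_block(32, 32),
                                 nn.Conv2d(32, 1, 3, padding=1))

    def forward(self, sketch):
        # residual prediction keeps the original strokes (intent) intact
        return torch.tanh(sketch + self.net(sketch))

class ImageSynthesisNet(nn.Module):
    """Stage 2 (hypothetical): translate the calibrated sketch to RGB."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(1, 32), conv_block(32, 32),
                                 nn.Conv2d(32, 3, 3, padding=1), nn.Tanh())

    def forward(self, sketch):
        return self.net(sketch)

scn, isn = StrokeCalibrationNet(), ImageSynthesisNet()
rough = torch.randn(1, 1, 256, 256)     # stand-in for a poorly drawn sketch
face = isn(scn(rough))                  # two easier steps instead of one hard one
```

Keeping the intermediate calibrated sketch explicit is what makes the pipeline interpretable: the sketch-to-sketch and sketch-to-photo sub-problems can be supervised and inspected separately.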