One major problem of objective Image Quality Assessment (IQA) methods is the lack of linearity of their quality estimates with respect to the scores expressed by human subjects. For this reason, IQA metrics usually undergo a calibration process based on subjective quality examples. However, example-based training makes generalization problematic, hampering the comparison of results across different applications and operating conditions. In this paper, new Full Reference (FR) techniques are introduced that provide estimates linearly correlated with human scores without resorting to calibration. To reach this objective, these techniques are deeply rooted in theoretical principles and constraints. Restricting attention to the IQA of natural images, it is first shown that applying estimation theory and psychophysical principles to images degraded by Gaussian blur leads to a so-called canonical IQA method, whose estimates are not only highly linearly correlated with subjective scores, but are also straightforwardly related to the Viewing Distance (VD). It is then shown that mainstream IQA methods can be mapped onto the canonical method by applying a preliminary metric conversion based on a single specimen image. The application of this scheme is then extended to a significant class of degradations other than Gaussian blur, including noisy and compressed images. The resulting calibration-free FR IQA methods are suited to applications where comparability and interoperability across different imaging systems and different VDs are a major requirement. A comparison of their statistical performance with that of some conventional calibration-prone methods is finally provided.
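As a rough illustration of the canonical idea, the sketch below proxies the positional Fisher information of an image by its total gradient energy (an assumption made for this sketch, not the paper's exact derivation) and estimates quality as the fraction of that information retained after Gaussian blur. The function names (`canonical_quality`, `fisher_info_proxy`) and all parameters are hypothetical:

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """1-D Gaussian kernel, normalized to unit sum."""
    if radius is None:
        radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def gaussian_blur(img, sigma):
    """Separable Gaussian blur with reflect padding."""
    if sigma <= 0:
        return img.copy()
    k = gaussian_kernel(sigma)
    r = len(k) // 2
    p = np.pad(img, r, mode="reflect")
    # horizontal pass, then vertical pass
    tmp = np.apply_along_axis(lambda m: np.convolve(m, k, mode="valid"), 1, p)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="valid"), 0, tmp)

def fisher_info_proxy(img):
    """Positional Fisher information proxy: total gradient energy
    (localization accuracy in Gaussian noise scales with this quantity)."""
    gy, gx = np.gradient(img)
    return np.sum(gx ** 2 + gy ** 2)

def canonical_quality(ref, test):
    """Quality estimate in (0, 1]: fraction of positional information retained."""
    return fisher_info_proxy(test) / fisher_info_proxy(ref)
```

As expected of a canonical blur metric, the estimate decreases monotonically with the blur spread and equals 1 for an unimpaired image.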
The perception of blur due to accommodation failures, insufficient optical correction, or imperfect image reproduction is a common source of visual discomfort, usually attributed to an anomalous and annoying distribution of the image spectrum in the spatial frequency domain. In the present paper, this discomfort is instead attributed to a loss of localization accuracy of the observed patterns. It is assumed, as a starting perceptual principle, that the visual system is optimally adapted to pattern localization in a natural environment. Since the best attainable accuracy of pattern localization is bounded by the positional Fisher information, it is argued that blur discomfort is highly correlated with the loss of this information. Following this concept, a receptive field functional model, tuned to common and stable features of natural scenes, is adopted to predict the visual discomfort. It consists of a complex-valued operator that is orientation-selective both in the space domain and in the spatial frequency domain. Starting from the case of Gaussian blur, the analysis is extended to generic blur types by applying a positional Fisher information equivalence criterion. Out-of-focus blur and astigmatic blur are presented as significant examples. The validity of the proposed model is verified by comparing its predictions with subjective ratings of the quality loss of blurred natural images. The model fits the experiments reported in independent databases linearly, despite their different protocols and settings.
In this paper, a novel Full Reference method is proposed for image quality assessment, combining two separate metrics that measure the perceptually distinct impacts of detail losses and of spurious details. To this purpose, the gradient of the impaired image is locally decomposed into a predicted version of the original gradient plus a gradient residual. It is assumed that the attenuation of the predicted gradient identifies the detail loss, whereas the gradient residual describes the spurious details. It turns out that the perceptual impact of detail losses is roughly linear in the loss of positional Fisher information, while the perceptual impact of spurious details is roughly proportional to a logarithmic measure of the signal-to-residual ratio. The affine combination of these two metrics forms a new index strongly correlated with the empirical Differential Mean Opinion Score (DMOS) for a significant class of image impairments, as verified on three popular independent databases. The method allows DMOS data from these different databases to be aligned and merged onto a common DMOS scale by affine transformations. Unexpectedly, the DMOS scale can be set by analyzing a single image affected by additive noise.
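The two-metric decomposition can be sketched as follows, under assumptions of this sketch alone: the positional Fisher information is proxied by gradient energy, the prediction is a per-patch least-squares attenuation of the reference gradient, the logarithmic residual measure and the affine weights in `quality_index` are hypothetical placeholders rather than the paper's fitted values:

```python
import numpy as np

def gradient_metrics(ref, test, patch=8, eps=1e-12):
    """Decompose the gradient of the impaired image, patch by patch, into a
    predicted (attenuated) copy of the reference gradient plus a residual.
    Returns (detail_loss, spurious): loss of the positional-information
    proxy, and a logarithmic measure of residual (spurious-detail) energy."""
    ry, rx = np.gradient(ref)
    ty, tx = np.gradient(test)
    h, w = ref.shape
    ref_fi = pred_fi = res_e = 0.0
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            r = np.concatenate([rx[i:i + patch, j:j + patch].ravel(),
                                ry[i:i + patch, j:j + patch].ravel()])
            t = np.concatenate([tx[i:i + patch, j:j + patch].ravel(),
                                ty[i:i + patch, j:j + patch].ravel()])
            a = (r @ t) / (r @ r + eps)   # least-squares detail attenuation
            resid = t - a * r             # gradient residual: spurious detail
            ref_fi += r @ r
            pred_fi += (a * r) @ (a * r)
            res_e += resid @ resid
    detail_loss = max(0.0, 1.0 - pred_fi / (ref_fi + eps))
    spurious = np.log10(1.0 + res_e / (ref_fi + eps))
    return detail_loss, spurious

def quality_index(detail_loss, spurious, w=(0.0, 1.0, 1.0)):
    """Affine combination of the two metrics; the weights are hypothetical
    illustration values, not calibrated against any DMOS data."""
    return w[0] + w[1] * detail_loss + w[2] * spurious
```

Consistently with the decomposition, blur is dominated by the detail-loss term while additive noise is dominated by the spurious-detail term.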