
Troy Chinen


An Unsupervised Information-Theoretic Perceptual Quality Metric

Jun 11, 2020
Sangnie Bhardwaj, Ian Fischer, Johannes Ballé, Troy Chinen


Tractable models of human perception have proved challenging to build. Hand-designed models such as MS-SSIM remain popular predictors of human image quality judgements due to their simplicity and speed. Modern deep learning approaches can perform better, but they rely on supervised data which can be costly to gather: large sets of class labels such as ImageNet, image quality ratings, or both. We combine recent advances in information-theoretic objective functions with a computational architecture informed by the physiology of the human visual system and unsupervised training on pairs of video frames, yielding our Perceptual Information Metric (PIM). We show that PIM is competitive with supervised metrics on the recent and challenging BAPPS image quality assessment dataset. We also perform qualitative experiments using the ImageNet-C dataset, and establish that our approach is robust with respect to architectural details.
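The abstract positions PIM against hand-designed metrics such as MS-SSIM. For context, the structural-similarity computation those metrics build on can be sketched in a few lines. This is a minimal sketch only: real MS-SSIM uses local Gaussian windows averaged across multiple scales, whereas here the statistics are computed globally; the constants `c1` and `c2` follow the usual SSIM stabilizer conventions.

```python
import numpy as np

def ssim(x, y, c1=0.01**2, c2=0.03**2):
    """Single-window SSIM between two images with values in [0, 1].

    Sketch: means, variances, and covariance are taken over the whole
    image; MS-SSIM computes them in local windows at several scales.
    """
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx**2 + my**2 + c1) * (vx + vy + c2))

img = np.random.default_rng(0).random((32, 32))
assert np.isclose(ssim(img, img), 1.0)        # identical images score 1
assert ssim(img, 1.0 - img) < ssim(img, img)  # distortion lowers the score
```

Identical images score exactly 1 because the covariance equals the variance, making numerator and denominator cancel term by term.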

* Submitted to the 34th Conference on Neural Information Processing Systems (NeurIPS 2020) 

Towards a Semantic Perceptual Image Metric

Aug 01, 2018
Troy Chinen, Johannes Ballé, Chunhui Gu, Sung Jin Hwang, Sergey Ioffe, Nick Johnston, Thomas Leung, David Minnen, Sean O'Malley, Charles Rosenberg, George Toderici


We present a full-reference perceptual image metric based on VGG-16, an artificial neural network trained on object classification. We fit the metric to a new database based on 140k unique images annotated with ground truth by human raters who received minimal instruction. The resulting metric shows competitive performance on TID 2013, a database widely used to assess image quality assessment methods. More interestingly, it shows strong responses to objects potentially carrying semantic relevance such as faces and text, which we demonstrate using a visualization technique and ablation experiments. In effect, the metric appears to model a higher influence of semantic context on judgments, which we observe particularly in untrained raters. As the vast majority of users of image processing systems are unfamiliar with Image Quality Assessment (IQA) tasks, these findings may have significant impact on real-world applications of perceptual metrics.
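The metric is fit to human ratings on top of VGG-16 activations. The fitted weights are not reproduced here, but the general recipe for a VGG-feature distance (channel-normalize each feature map, take a weighted squared difference, average spatially, sum over layers) can be sketched with stand-in features. `feature_distance`, the random arrays, and the uniform layer weights below are all illustrative assumptions, not the paper's model.

```python
import numpy as np

def feature_distance(feats_a, feats_b, weights):
    """Perceptual distance from per-layer feature maps of shape (C, H, W).

    Sketch of the common VGG-feature recipe: unit-normalize each
    spatial position across channels, take a weighted squared
    difference, average spatially, and sum over layers. In the paper
    the layer weighting is fit to human ratings; here it is free.
    """
    total = 0.0
    for fa, fb, w in zip(feats_a, feats_b, weights):
        na = fa / (np.linalg.norm(fa, axis=0, keepdims=True) + 1e-10)
        nb = fb / (np.linalg.norm(fb, axis=0, keepdims=True) + 1e-10)
        total += w * ((na - nb) ** 2).mean()
    return total

rng = np.random.default_rng(1)
f_ref = [rng.standard_normal((8, 16, 16)) for _ in range(3)]  # stand-in VGG maps
f_dist = [f + 0.1 * rng.standard_normal(f.shape) for f in f_ref]
w = [1.0, 1.0, 1.0]
assert feature_distance(f_ref, f_ref, w) == 0.0   # identical features, zero distance
assert feature_distance(f_ref, f_dist, w) > 0.0   # perturbation raises the distance
```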


Spatially adaptive image compression using a tiled deep network

Feb 07, 2018
David Minnen, George Toderici, Michele Covell, Troy Chinen, Nick Johnston, Joel Shor, Sung Jin Hwang, Damien Vincent, Saurabh Singh


Deep neural networks represent a powerful class of function approximators that can learn to compress and reconstruct images. Existing image compression algorithms based on neural networks learn quantized representations with a constant spatial bit rate across each image. While entropy coding introduces some spatial variation, traditional codecs have benefited significantly by explicitly adapting the bit rate based on local image complexity and visual saliency. This paper introduces an algorithm that combines deep neural networks with quality-sensitive bit rate adaptation using a tiled network. We demonstrate the importance of spatial context prediction and show improved quantitative (PSNR) and qualitative (subjective rater assessment) results compared to a non-adaptive baseline and a recently published image compression model based on fully-convolutional neural networks.
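The core idea, spending more bits on visually complex regions, can be illustrated with a toy allocator. `allocate_bits` and its variance-based complexity signal are stand-ins for the paper's learned, tiled network, not its actual method.

```python
import numpy as np

def allocate_bits(image, tile=8, total_bits=1024):
    """Split a grayscale image into tiles and allocate bits by complexity.

    Sketch only: per-tile variance stands in for the learned
    complexity/saliency signal, and the bit budget is split
    proportionally across tiles.
    """
    h, w = image.shape
    tiles = image.reshape(h // tile, tile, w // tile, tile).swapaxes(1, 2)
    complexity = tiles.var(axis=(2, 3)) + 1e-8  # avoid an all-zero budget
    return total_bits * complexity / complexity.sum()

img = np.zeros((32, 32))
img[:16, :16] = np.random.default_rng(2).random((16, 16))  # one textured quadrant
bits = allocate_bits(img)
assert np.isclose(bits.sum(), 1024)          # the budget is conserved
assert bits[:2, :2].sum() > bits[2:, 2:].sum()  # texture draws more bits than flat area
```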

* International Conference on Image Processing 2017  

Improved Lossy Image Compression with Priming and Spatially Adaptive Bit Rates for Recurrent Networks

Mar 29, 2017
Nick Johnston, Damien Vincent, David Minnen, Michele Covell, Saurabh Singh, Troy Chinen, Sung Jin Hwang, Joel Shor, George Toderici


We propose a method for lossy image compression based on recurrent, convolutional neural networks that outperforms BPG (4:2:0), WebP, JPEG2000, and JPEG as measured by MS-SSIM. We introduce three improvements over previous research that lead to this state-of-the-art result. First, we show that training with a pixel-wise loss weighted by SSIM increases reconstruction quality according to several metrics. Second, we modify the recurrent architecture to improve spatial diffusion, which allows the network to more effectively capture and propagate image information through the network's hidden state. Finally, in addition to lossless entropy coding, we use a spatially adaptive bit allocation algorithm to more efficiently use the limited number of bits to encode visually complex image regions. We evaluate our method on the Kodak and Tecnick image sets and compare against standard codecs as well as recently published methods based on deep neural networks.
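In a recurrent codec, each encoding pass emits additional bits for the residual, so spatially adaptive allocation amounts to letting each region stop iterating once it meets a quality target. The sketch below is hypothetical: `adaptive_iterations` and the assumption that every pass halves a tile's residual error are purely illustrative stand-ins for the paper's allocator.

```python
import numpy as np

def adaptive_iterations(tile_errors, target=0.01, max_iters=8):
    """Per-tile iteration counts for an iterative (recurrent) codec.

    Sketch: each pass is assumed to halve a tile's residual error; a
    tile stops once it reaches the quality target, so complex tiles
    consume more iterations and therefore more bits.
    """
    iters = np.zeros_like(tile_errors, dtype=int)
    err = tile_errors.astype(float).copy()
    for _ in range(max_iters):
        active = err > target
        if not active.any():
            break
        err[active] *= 0.5   # one more RNN pass on unfinished tiles
        iters[active] += 1
    return iters

errors = np.array([0.005, 0.04, 0.32])  # flat, moderate, and complex tiles
its = adaptive_iterations(errors)
assert its[0] == 0          # the flat tile already meets the target
assert its[1] < its[2]      # harder tiles keep iterating longer
```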
