Novelty detection with deep generative models such as autoencoders and generative adversarial networks typically uses the image reconstruction error as the novelty score. However, image data are high dimensional and contain many features unrelated to class information, which makes it hard for such models to detect novel data. The problem becomes harder still in the multi-modal normality case. To address this challenge, we propose a new way of measuring the novelty score in multi-modal normality cases using an orthogonalized latent space. Specifically, we employ orthogonal low-rank embedding to disentangle the features in the latent space using mutual class information. With the orthogonalized latent space, the novelty score is defined by the change of each latent vector. We compared the proposed algorithm with state-of-the-art novelty detection algorithms such as RaPP and OCGAN, and experimental results show that ours outperforms them.
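As a minimal illustrative sketch of the scoring idea described above (not the paper's implementation; the function names, the QR-based orthogonalization stand-in, and the re-encoding setup are all assumptions), one could orthogonalize class embedding directions and score a sample by how much its latent vector changes after a reconstruction pass:

```python
import numpy as np

def orthogonalize(W):
    """Orthonormalize class embedding directions via QR decomposition
    (a simple stand-in for orthogonal low-rank embedding)."""
    Q, _ = np.linalg.qr(W)
    return Q

def novelty_score(z, z_reenc):
    """Novelty score as the change of each latent vector: the L2 norm of
    the difference between the latent code z of an input and the code
    z_reenc obtained by re-encoding its reconstruction. Normal samples
    are expected to change little; novel samples to change more."""
    return np.linalg.norm(z - z_reenc, axis=1)
```

In this sketch, a batch of latent codes `z` with shape `(n, d)` and their re-encoded counterparts yield one score per sample; higher scores indicate greater novelty.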
We present a novel blind single-image deblurring method that exploits information about blur kernels. Our model divides the deblurring problem into two successive tasks: (1) blur kernel estimation and (2) sharp image restoration. We first introduce a kernel estimation network that produces adaptive blur kernels from an analysis of the blurred image; the network learns the blur pattern of the input and is trained to generate image-specific blur kernel estimates. We then propose a long-term residual blending network that restores the sharp image using the estimated blur kernel. To use the kernel efficiently, we propose a blending block that encodes features from both the blurred image and the blur kernel into a low-dimensional space and then decodes them jointly to obtain an appropriately synthesized feature representation. We evaluate our model on the REDS, GOPRO, and Flickr2K datasets with various Gaussian blur kernels, and experiments show that it achieves excellent results on each dataset.
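The blending block described above can be sketched schematically as follows. This is only an illustrative toy (the layer sizes, random linear projections, and ReLU choices are assumptions, not the paper's architecture): image features and kernel features are each encoded into a shared low-dimensional space, then decoded together into a blended feature representation.

```python
import numpy as np

rng = np.random.default_rng(0)

class BlendingBlock:
    """Toy blending block: encode blurred-image features and blur-kernel
    features into a low-dimensional space, then decode them jointly."""

    def __init__(self, img_dim, ker_dim, low_dim, out_dim):
        # Random untrained weights, standing in for learned parameters.
        self.W_img = rng.standard_normal((img_dim, low_dim)) * 0.1
        self.W_ker = rng.standard_normal((ker_dim, low_dim)) * 0.1
        self.W_dec = rng.standard_normal((2 * low_dim, out_dim)) * 0.1

    def __call__(self, img_feat, ker_feat):
        a = np.maximum(img_feat @ self.W_img, 0.0)  # encode image features
        b = np.maximum(ker_feat @ self.W_ker, 0.0)  # encode kernel features
        z = np.concatenate([a, b], axis=-1)         # joint low-dim code
        return z @ self.W_dec                       # decode blended feature
```

The point of the design, as the abstract describes it, is that compressing both modalities before decoding forces the restoration network to use the kernel information jointly with the image features rather than processing them in isolation.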