"Super Resolution": models, code, and papers

LSwinSR: UAV Imagery Super-Resolution based on Linear Swin Transformer

Mar 17, 2023
Rui Li, Xiaowei Zhao

Super-resolution, which aims to reconstruct high-resolution images from low-resolution images, has drawn considerable attention and has been intensively studied in the computer vision and remote sensing communities. Super-resolution technology is especially beneficial for Unmanned Aerial Vehicles (UAVs), as the amount and resolution of images captured by UAVs are highly limited by physical constraints such as flight altitude and load capacity. Following the successful application of deep learning methods to the super-resolution task, a series of super-resolution algorithms has been developed in recent years. In this paper, for the super-resolution of UAV images, a novel network based on the state-of-the-art Swin Transformer is proposed with better efficiency and competitive accuracy. Meanwhile, as one of the essential applications of UAVs is land-cover and land-use monitoring, simple image quality assessments such as the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index Measure (SSIM) are not enough to comprehensively measure the performance of an algorithm. Therefore, we further investigate the effectiveness of super-resolution methods using the accuracy of semantic segmentation. The code will be available at https://github.com/lironui/LSwinSR.
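
As a point of reference for the PSNR/SSIM assessments the abstract argues are insufficient on their own, a minimal evaluation sketch might look like the following (assuming uint8 RGB images; the file names are placeholders, not part of the paper's release):

```python
from skimage import io
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Placeholder file names: a ground-truth HR tile and its super-resolved prediction.
hr = io.imread("hr_uav_tile.png")
sr = io.imread("sr_uav_tile.png")

# Full-reference quality metrics on uint8 RGB data (data_range = 255).
psnr = peak_signal_noise_ratio(hr, sr, data_range=255)
ssim = structural_similarity(hr, sr, channel_axis=-1, data_range=255)
print(f"PSNR: {psnr:.2f} dB  SSIM: {ssim:.4f}")
```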

Parameter-Free Channel Attention for Image Classification and Super-Resolution

Mar 20, 2023
Yuxuan Shi, Lingxiao Yang, Wangpeng An, Xiantong Zhen, Liuqing Wang

The channel attention mechanism is a useful technique widely employed in deep convolutional neural networks to boost performance on image processing tasks, e.g., image classification and image super-resolution. It is usually designed as a parameterized sub-network and embedded into the convolutional layers of the network to learn more powerful feature representations. However, current channel attention introduces additional parameters and therefore incurs higher computational costs. To deal with this issue, in this work we propose a Parameter-Free Channel Attention (PFCA) module that boosts the performance of popular image classification and image super-resolution networks while completely eliminating the parameter growth of channel attention. Experiments on CIFAR-100, ImageNet, and DIV2K validate that our PFCA module improves the performance of ResNet on image classification and of MSRResNet on image super-resolution, respectively, while adding little growth in parameters and FLOPs.
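
The abstract does not spell out the PFCA formula; the sketch below only illustrates the general idea of channel attention computed purely from feature statistics with zero learnable parameters (the choice of statistic and gating function here is an assumption):

```python
import torch
import torch.nn as nn

class ParameterFreeChannelAttention(nn.Module):
    """Illustrative parameter-free channel attention: channels are rescaled by a
    gate derived from on-the-fly statistics, so no parameters are added."""
    def forward(self, x):
        # x: (N, C, H, W). Per-channel variance over spatial positions.
        var = x.var(dim=(2, 3), keepdim=True, unbiased=False)
        # Normalize by the mean variance across channels and squash to (0, 1)
        # to obtain a channel gate with no learnable weights (assumed gating).
        gate = torch.sigmoid(var / (var.mean(dim=1, keepdim=True) + 1e-5))
        return x * gate

# Usage: drop the module after a convolution, e.g. y = ParameterFreeChannelAttention()(feat).
```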

Toward Super-Resolution for Appearance-Based Gaze Estimation

Mar 17, 2023
Galen O'Shea, Majid Komeili

Gaze tracking is a valuable tool with a broad range of applications in various fields, including medicine, psychology, virtual reality, marketing, and safety. Therefore, it is essential to have gaze tracking software that is cost-efficient and high-performing. Accurately predicting gaze remains a difficult task, particularly in real-world situations where images are affected by motion blur, video compression, and noise. Super-resolution (SR) has been shown to improve image quality from a visual perspective. This work examines the usefulness of super-resolution for improving appearance-based gaze tracking. We show that not all SR models preserve the gaze direction, and we propose a two-step framework based on the SwinIR super-resolution model. The proposed method consistently outperforms the state-of-the-art, particularly in scenarios involving low-resolution or degraded images. Furthermore, we examine the use of super-resolution through the lens of self-supervised learning for gaze prediction. Self-supervised learning aims to learn from unlabelled data to reduce the amount of labeled data required for downstream tasks. We propose a novel architecture called SuperVision that fuses an SR backbone network with a ResNet18 (plus skip connections). The proposed SuperVision method uses 5x less labeled data yet outperforms the state-of-the-art GazeTR method, which uses 100% of the training data, by 15%.
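
The exact two-step design is not given in the abstract; a generic "super-resolve, then regress gaze" pipeline could be sketched as below, where `sr_model` is a placeholder for a pretrained SR network (e.g., SwinIR) and the 2-D (pitch, yaw) output head is an assumption:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class TwoStepGazePipeline(nn.Module):
    """Illustrative two-step pipeline: super-resolve the input, then predict gaze."""
    def __init__(self, sr_model):
        super().__init__()
        self.sr_model = sr_model                  # placeholder pretrained SR network
        self.gaze_net = resnet18(num_classes=2)   # assumed (pitch, yaw) regression head

    def forward(self, lr_image):
        with torch.no_grad():                     # keep the SR stage frozen in this sketch
            hr_image = self.sr_model(lr_image)
        return self.gaze_net(hr_image)
```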

SRFormer: Permuted Self-Attention for Single Image Super-Resolution

Mar 17, 2023
Yupeng Zhou, Zhen Li, Chun-Le Guo, Song Bai, Ming-Ming Cheng, Qibin Hou

Previous works have shown that increasing the window size of Transformer-based image super-resolution models (e.g., SwinIR) can significantly improve performance, but the computational overhead is also considerable. In this paper, we present SRFormer, a simple but novel method that enjoys the benefit of large-window self-attention while introducing even less computational burden. The core of our SRFormer is permuted self-attention (PSA), which strikes an appropriate balance between channel and spatial information for self-attention. Our PSA is simple and can be easily applied to existing super-resolution networks based on window self-attention. Without any bells and whistles, we show that our SRFormer achieves a 33.86 dB PSNR score on the Urban100 dataset, which is 0.46 dB higher than that of SwinIR, while using fewer parameters and computations. We hope our simple and effective approach can serve as a useful tool for future research in super-resolution model design.
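
The abstract only says that PSA rebalances channel and spatial information inside large windows; the sketch below is a rough, assumed simplification in which key/value tokens are shrunk by folding small spatial neighborhoods into the channel dimension, so the attention map is much smaller than the full window-to-window map (window partitioning shown, window merging omitted):

```python
import torch
import torch.nn as nn

class ReducedWindowAttention(nn.Module):
    """Rough sketch of large-window attention with spatially reduced keys/values.
    The folding scheme and projections are assumptions, not the exact PSA design."""
    def __init__(self, dim, window=16, reduce=2, heads=4):
        super().__init__()
        self.dim, self.window, self.r, self.heads = dim, window, reduce, heads
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim * reduce * reduce, 2 * dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):
        # x: (B, C, H, W), with H and W divisible by the window size.
        B, C, H, W = x.shape
        w, r, h = self.window, self.r, self.heads
        # Partition into non-overlapping windows -> (B * num_windows, w*w, C).
        xw = x.unfold(2, w, w).unfold(3, w, w)
        xw = xw.permute(0, 2, 3, 4, 5, 1).reshape(-1, w * w, C)
        q = self.q(xw)
        # Fold each r x r neighborhood into channels so K/V have only (w/r)^2 tokens.
        kv_in = xw.reshape(-1, w // r, r, w // r, r, C)
        kv_in = kv_in.permute(0, 1, 3, 2, 4, 5).reshape(-1, (w // r) ** 2, r * r * C)
        k, v = self.kv(kv_in).chunk(2, dim=-1)
        # Multi-head attention: w*w queries attend to only (w/r)^2 keys/values.
        def heads_first(t):
            return t.reshape(t.shape[0], t.shape[1], h, C // h).transpose(1, 2)
        qh, kh, vh = heads_first(q), heads_first(k), heads_first(v)
        attn = (qh @ kh.transpose(-2, -1)) * (C // h) ** -0.5
        out = (attn.softmax(dim=-1) @ vh).transpose(1, 2).reshape(-1, w * w, C)
        return self.proj(out)  # window merging back to (B, C, H, W) omitted
```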

Super-Resolution Information Enhancement For Crowd Counting

Mar 13, 2023
Jiahao Xie, Wei Xu, Dingkang Liang, Zhanyu Ma, Kongming Liang, Weidong Liu, Rui Wang, Ling Jin

Crowd counting is a challenging task due to heavy occlusions and variations in scale and density. Existing methods handle these challenges effectively but ignore low-resolution (LR) circumstances. LR circumstances severely degrade counting performance for two crucial reasons: 1) limited detail information; 2) overlapping head regions accumulate in density maps and result in extreme ground-truth values. An intuitive solution is to employ a super-resolution (SR) pre-process on the input LR images. However, this complicates inference and thus limits applicability when real-time performance is required. We propose a more elegant method termed the Multi-Scale Super-Resolution Module (MSSRM). It guides the network to estimate the lost details and enhances the detail information in the feature space. Notably, the MSSRM is plug-and-play and addresses the LR problem with no inference cost. As the proposed method requires SR labels, we further propose a Super-Resolution Crowd Counting dataset (SR-Crowd). Extensive experiments on three datasets demonstrate the superiority of our method. The code will be available at https://github.com/PRIS-CV/MSSRM.git.

* Accepted by ICASSP 2023. The code will be available at https://github.com/PRIS-CV/MSSRM.git 
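
MSSRM's internals are not described in the abstract beyond being supervised by SR labels, guiding the features, and being removable at inference; as a generic illustration of that pattern, a training-only auxiliary head could look like this (the head design and loss are assumptions):

```python
import torch.nn as nn
import torch.nn.functional as F

class SRAuxiliaryHead(nn.Module):
    """Training-only head: reconstructs an HR image from backbone features and is
    supervised with SR labels, then discarded at inference (zero inference cost)."""
    def __init__(self, in_channels, scale=2):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(in_channels, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, features, hr_label):
        # L1 reconstruction loss against the super-resolution label (assumed loss).
        return F.l1_loss(self.head(features), hr_label)
```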

A New Super-Resolution Measurement of Perceptual Quality and Fidelity

Mar 10, 2023
Sheng Cheng

Super-resolution results are usually measured by full-reference image quality metrics or human rating scores. However, these evaluation methods are general image quality measurements and do not account for the nature of the super-resolution problem. In this work, we analyze the evaluation problem based on the one-to-many mapping nature of super-resolution and propose a novel distribution-based metric for super-resolution. Starting from the distribution distance, we derive the proposed metric to make it accessible and easy to compute. Through a human subject study on super-resolution, we show that the proposed metric is highly correlated with human perceptual quality and better than most existing metrics. Moreover, the proposed metric has a higher correlation with the fidelity measure than the perception-based metrics. To understand the properties of the proposed metric, we conduct an extensive evaluation of its design choices and show that the metric is robust to them. Finally, we show that the metric can be used to train super-resolution networks for better perceptual quality.

CoT-MISR: Marrying Convolution and Transformer for Multi-Image Super-Resolution

Mar 12, 2023
Mingming Xiu, Yang Nie, Qing Song, Chun Liu

As a method of image restoration, image super-resolution has been extensively studied. How to transform a low-resolution image so as to restore its high-resolution information is a problem that researchers have long explored. The high-resolution images generated by early physical transformation methods suffer from serious information loss, and edges and details cannot be well recovered. With the development of hardware technology and mathematics, deep learning methods have come to be used for image super-resolution, progressing from plain deep models, residual channel attention networks, and bi-directional suppression networks to networks with transformer modules, gradually achieving good results. In multi-image super-resolution research, thanks to the establishment of multi-image super-resolution datasets, models have evolved from convolutional to transformer-based, and super-resolution quality has continuously improved. However, we find that neither a pure convolutional nor a pure transformer network can make good use of low-resolution image information. Based on this, we propose a new end-to-end CoT-MISR network. CoT-MISR compensates for local and global information by exploiting the advantages of both convolution and transformers. Validation on the dataset under equal parameter budgets shows that our CoT-MISR network achieves the best scores.
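
The abstract does not detail how the convolution and transformer branches are combined; the block below is only a generic sketch of a local convolution branch fused with a global self-attention branch (the concatenation-based fusion is an assumption):

```python
import torch
import torch.nn as nn

class ConvTransformerBlock(nn.Module):
    """Illustrative fusion of a local convolution branch and a global attention branch."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.conv_branch = nn.Sequential(
            nn.Conv2d(dim, dim, 3, padding=1), nn.GELU(),
            nn.Conv2d(dim, dim, 3, padding=1),
        )
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.fuse = nn.Conv2d(2 * dim, dim, 1)

    def forward(self, x):
        # x: (B, C, H, W)
        B, C, H, W = x.shape
        local = self.conv_branch(x)                    # local details via convolution
        tokens = x.flatten(2).transpose(1, 2)          # (B, H*W, C) for attention
        glob, _ = self.attn(tokens, tokens, tokens)    # global context via self-attention
        glob = glob.transpose(1, 2).reshape(B, C, H, W)
        return self.fuse(torch.cat([local, glob], dim=1))
```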

Depth Super-Resolution from Explicit and Implicit High-Frequency Features

Mar 16, 2023
Xin Qiao, Chenyang Ge, Youmin Zhang, Yanhui Zhou, Fabio Tosi, Matteo Poggi, Stefano Mattoccia

We propose a novel multi-stage depth super-resolution network, which progressively reconstructs high-resolution depth maps from explicit and implicit high-frequency features. The former are extracted by an efficient transformer processing both local and global contexts, while the latter are obtained by projecting color images into the frequency domain. Both are combined with depth features by means of a fusion strategy within a multi-stage and multi-scale framework. Experiments on the main benchmarks, such as NYUv2, Middlebury, DIML, and RGBDD, show that our approach outperforms existing methods by a large margin (~20% on NYUv2 and DIML against the contemporary work DADA, with 16x upsampling), establishing a new state-of-the-art in the guided depth super-resolution task.
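
As one common way to obtain such implicit high-frequency features from a color image (the cutoff radius and radial mask here are assumptions, not the paper's exact projection):

```python
import torch

def highfreq_features(rgb, cutoff=0.1):
    """Illustrative high-pass in the Fourier domain: suppress low frequencies of an
    RGB image and return the spatial-domain residual. rgb: (B, 3, H, W) float tensor."""
    _, _, H, W = rgb.shape
    spec = torch.fft.fftshift(torch.fft.fft2(rgb), dim=(-2, -1))
    # Radial low-frequency mask centered at the spectrum origin (assumed shape).
    yy, xx = torch.meshgrid(
        torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij")
    lowpass = (yy ** 2 + xx ** 2).sqrt() < cutoff
    spec = torch.where(lowpass, torch.zeros_like(spec), spec)
    return torch.fft.ifft2(torch.fft.ifftshift(spec, dim=(-2, -1))).real
```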

Learning Data-Driven Vector-Quantized Degradation Model for Animation Video Super-Resolution

Mar 17, 2023
Zixi Tuo, Huan Yang, Jianlong Fu, Yujie Dun, Xueming Qian

Existing real-world video super-resolution (VSR) methods focus on designing a general degradation pipeline for open-domain videos while ignoring the intrinsic characteristics of the data, which strongly limits their performance when applied to specific domains (e.g., animation videos). In this paper, we thoroughly explore the characteristics of animation videos and leverage the rich priors in real-world animation data to build a more practical animation VSR model. In particular, we propose a multi-scale Vector-Quantized Degradation model for animation video Super-Resolution (VQD-SR) that decomposes local details from global structures and transfers the degradation priors in real-world animation videos to a learned vector-quantized codebook for degradation modeling. A rich-content Real Animation Low-quality (RAL) video dataset is collected for extracting the priors. We further propose a data enhancement strategy for high-resolution (HR) training videos based on our observation that existing HR videos are mostly collected from the Web and contain conspicuous compression artifacts. The proposed strategy effectively lifts the upper bound of animation VSR performance, regardless of the specific VSR model. Experimental results demonstrate the superiority of the proposed VQD-SR over state-of-the-art methods, through extensive quantitative and qualitative evaluations on the latest animation video super-resolution benchmark.
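
The multi-scale VQD-SR design is more involved than the abstract can convey; the snippet below only illustrates the underlying vector-quantization idea of replacing each feature vector with its nearest entry in a learned codebook:

```python
import torch
import torch.nn as nn

class DegradationCodebook(nn.Module):
    """Generic vector-quantization lookup: snap each feature vector to its nearest
    codebook entry. An illustration of the codebook idea only, not the VQD-SR model."""
    def __init__(self, num_codes=512, dim=64):
        super().__init__()
        self.codes = nn.Parameter(torch.randn(num_codes, dim))

    def forward(self, z):
        # z: (N, dim) feature vectors -> (quantized vectors, code indices).
        dists = torch.cdist(z, self.codes)   # pairwise distances to all codes
        idx = dists.argmin(dim=1)
        return self.codes[idx], idx
```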

QuickSRNet: Plain Single-Image Super-Resolution Architecture for Faster Inference on Mobile Platforms

Mar 08, 2023
Guillaume Berger, Manik Dhingra, Antoine Mercier, Yashesh Savani, Sunny Panchal, Fatih Porikli

In this work, we present QuickSRNet, an efficient super-resolution architecture for real-time applications on mobile platforms. Super-resolution clarifies, sharpens, and upscales an image to a higher resolution. Applications such as gaming and video playback, along with the ever-improving display capabilities of TVs, smartphones, and VR headsets, are driving the need for efficient upscaling solutions. While existing deep learning-based super-resolution approaches achieve impressive results in terms of visual quality, enabling real-time DL-based super-resolution on mobile devices with compute, thermal, and power constraints is challenging. To address these challenges, we propose QuickSRNet, a simple yet effective architecture that provides better accuracy-to-latency trade-offs than existing neural architectures for single-image super-resolution. We present training tricks to speed up existing residual-based super-resolution architectures while maintaining robustness to quantization. Our proposed architecture produces 1080p outputs via 2x upscaling in 2.2 ms on a modern smartphone, making it ideal for high-fps real-time applications.

* 16 pages
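
QuickSRNet's exact layer counts, widths, and tricks are given in the paper rather than the abstract; the sketch below only conveys the "plain" recipe described here, i.e. a stack of convolutions followed by a pixel-shuffle upsampler with a nearest-neighbor global skip (all hyperparameters are placeholders):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PlainSRNet(nn.Module):
    """Minimal plain SR network: conv + ReLU stack, pixel-shuffle upsampling, and a
    nearest-neighbor skip so the stack predicts a residual. Widths/depths are arbitrary."""
    def __init__(self, channels=32, num_layers=6, scale=2):
        super().__init__()
        layers = [nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(num_layers - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 3 * scale * scale, 3, padding=1)]
        self.body = nn.Sequential(*layers)
        self.shuffle = nn.PixelShuffle(scale)
        self.scale = scale

    def forward(self, x):
        base = F.interpolate(x, scale_factor=self.scale, mode="nearest")
        return self.shuffle(self.body(x)) + base

# e.g. PlainSRNet()(torch.randn(1, 3, 540, 960)) -> (1, 3, 1080, 1920) for 2x upscaling.
```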