Magauiya Zhussip

AIM 2022 Challenge on Super-Resolution of Compressed Image and Video: Dataset, Methods and Results

Aug 25, 2022
Ren Yang, Radu Timofte, Xin Li, Qi Zhang, Lin Zhang, Fanglong Liu, Dongliang He, Fu li, He Zheng, Weihang Yuan, Pavel Ostyakov, Dmitry Vyal, Magauiya Zhussip, Xueyi Zou, Youliang Yan, Lei Li, Jingzhu Tang, Ming Chen, Shijie Zhao, Yu Zhu, Xiaoran Qin, Chenghua Li, Cong Leng, Jian Cheng, Claudio Rota, Marco Buzzelli, Simone Bianco, Raimondo Schettini, Dafeng Zhang, Feiyu Huang, Shizhuo Liu, Xiaobing Wang, Zhezhu Jin, Bingchen Li, Xin Li, Mingxi Li, Ding Liu, Wenbin Zou, Peijie Dong, Tian Ye, Yunchen Zhang, Ming Tan, Xin Niu, Mustafa Ayazoglu, Marcos Conde, Ui-Jin Choi, Zhuang Jia, Tianyu Xu, Yijian Zhang, Mao Ye, Dengyan Luo, Xiaofeng Pan, Liuhan Peng


This paper reviews the Challenge on Super-Resolution of Compressed Image and Video at AIM 2022. The challenge includes two tracks: Track 1 targets the super-resolution of compressed images, and Track 2 targets the super-resolution of compressed video. In Track 1, we use the popular DIV2K dataset as the training, validation and test sets. In Track 2, we propose the LDV 3.0 dataset, which contains 365 videos, including the 335 videos of the LDV 2.0 dataset and 30 additional videos. In this challenge, 12 teams and 2 teams submitted final results to Track 1 and Track 2, respectively. The proposed methods and solutions gauge the state-of-the-art of super-resolution on compressed image and video. The proposed LDV 3.0 dataset is available at https://github.com/RenYang-home/LDV_dataset. The homepage of this challenge is at https://github.com/RenYang-home/AIM22_CompressSR.

* Camera-ready version 

NTIRE 2021 Challenge on Burst Super-Resolution: Methods and Results

Jun 07, 2021
Goutam Bhat, Martin Danelljan, Radu Timofte, Kazutoshi Akita, Wooyeong Cho, Haoqiang Fan, Lanpeng Jia, Daeshik Kim, Bruno Lecouat, Youwei Li, Shuaicheng Liu, Ziluan Liu, Ziwei Luo, Takahiro Maeda, Julien Mairal, Christian Micheloni, Xuan Mo, Takeru Oba, Pavel Ostyakov, Jean Ponce, Sanghyeok Son, Jian Sun, Norimichi Ukita, Rao Muhammad Umer, Youliang Yan, Lei Yu, Magauiya Zhussip, Xueyi Zou


This paper reviews the NTIRE 2021 challenge on burst super-resolution. Given a noisy RAW burst as input, the task in the challenge was to generate a clean RGB image with 4 times higher resolution. The challenge contained two tracks: Track 1 evaluated on synthetically generated data, and Track 2 used real-world bursts from a mobile camera. In the final testing phase, 6 teams submitted results using a diverse set of solutions. The top-performing methods set a new state-of-the-art for the burst super-resolution task.

* NTIRE 2021 Burst Super-Resolution challenge report 

AIM 2020 Challenge on Learned Image Signal Processing Pipeline

Nov 10, 2020
Andrey Ignatov, Radu Timofte, Zhilu Zhang, Ming Liu, Haolin Wang, Wangmeng Zuo, Jiawei Zhang, Ruimao Zhang, Zhanglin Peng, Sijie Ren, Linhui Dai, Xiaohong Liu, Chengqi Li, Jun Chen, Yuichi Ito, Bhavya Vasudeva, Puneesh Deora, Umapada Pal, Zhenyu Guo, Yu Zhu, Tian Liang, Chenghua Li, Cong Leng, Zhihong Pan, Baopu Li, Byung-Hoon Kim, Joonyoung Song, Jong Chul Ye, JaeHyun Baek, Magauiya Zhussip, Yeskendir Koishekenov, Hwechul Cho Ye, Xin Liu, Xueying Hu, Jun Jiang, Jinwei Gu, Kai Li, Pengliang Tan, Bingxin Hou


This paper reviews the second AIM learned ISP challenge and describes the proposed solutions and results. The participating teams solved a real-world RAW-to-RGB mapping problem, where the goal was to map the original low-quality RAW images captured by the Huawei P20 device to the same photos obtained with a Canon 5D DSLR camera. The task encompassed a number of complex computer vision subtasks, such as image demosaicing, denoising, white balancing, color and contrast correction, demoireing, etc. The target metric used in this challenge combined fidelity scores (PSNR and SSIM) with the solutions' perceptual quality measured in a user study. The proposed solutions significantly improved the baseline results, defining the state-of-the-art for practical image signal processing pipeline modeling.

* Published in ECCV 2020 Workshops (Advances in Image Manipulation), https://data.vision.ee.ethz.ch/cvl/aim20/ 

NTIRE 2020 Challenge on Real Image Denoising: Dataset, Methods and Results

May 08, 2020
Abdelrahman Abdelhamed, Mahmoud Afifi, Radu Timofte, Michael S. Brown, Yue Cao, Zhilu Zhang, Wangmeng Zuo, Xiaoling Zhang, Jiye Liu, Wendong Chen, Changyuan Wen, Meng Liu, Shuailin Lv, Yunchao Zhang, Zhihong Pan, Baopu Li, Teng Xi, Yanwen Fan, Xiyu Yu, Gang Zhang, Jingtuo Liu, Junyu Han, Errui Ding, Songhyun Yu, Bumjun Park, Jechang Jeong, Shuai Liu, Ziyao Zong, Nan Nan, Chenghua Li, Zengli Yang, Long Bao, Shuangquan Wang, Dongwoon Bai, Jungwon Lee, Youngjung Kim, Kyeongha Rho, Changyeop Shin, Sungho Kim, Pengliang Tang, Yiyun Zhao, Yuqian Zhou, Yuchen Fan, Thomas Huang, Zhihao Li, Nisarg A. Shah, Wei Liu, Qiong Yan, Yuzhi Zhao, Marcin Możejko, Tomasz Latkowski, Lukasz Treszczotko, Michał Szafraniuk, Krzysztof Trojanowski, Yanhong Wu, Pablo Navarrete Michelini, Fengshuo Hu, Yunhua Lu, Sujin Kim, Wonjin Kim, Jaayeon Lee, Jang-Hwan Choi, Magauiya Zhussip, Azamat Khassenov, Jong Hyun Kim, Hwechul Cho, Priya Kansal, Sabari Nathan, Zhangyu Ye, Xiwen Lu, Yaqi Wu, Jiangxin Yang, Yanlong Cao, Siliang Tang, Yanpeng Cao, Matteo Maggioni, Ioannis Marras, Thomas Tanay, Gregory Slabaugh, Youliang Yan, Myungjoo Kang, Han-Soo Choi, Kyungmin Song, Shusong Xu, Xiaomu Lu, Tingniao Wang, Chunxia Lei, Bin Liu, Rajat Gupta, Vineet Kumar


This paper reviews the NTIRE 2020 challenge on real image denoising, with a focus on the newly introduced dataset, the proposed methods and their results. The challenge is a new version of the previous NTIRE 2019 challenge on real image denoising, which was based on the SIDD benchmark. This challenge is based on newly collected validation and testing image datasets, and is hence named SIDD+. The challenge has two tracks for quantitatively evaluating image denoising performance in (1) the Bayer-pattern rawRGB and (2) the standard RGB (sRGB) color spaces. Each track had about 250 registered participants. A total of 22 teams, proposing 24 methods, competed in the final phase of the challenge. The methods proposed by the participating teams represent the current state-of-the-art performance in image denoising targeting real noisy images. The newly collected SIDD+ datasets are publicly available at: https://bit.ly/siddplus_data.


Theoretical analysis on Noise2Noise using Stein's Unbiased Risk Estimator for Gaussian denoising: Towards unsupervised training with clipped noisy images

Feb 07, 2019
Magauiya Zhussip, Shakarim Soltanayev, Se Young Chun


Recently, Noise2Noise has been proposed for unsupervised training of deep neural networks in image restoration problems, including Gaussian denoising. However, it does not work well for truncated noise with non-zero mean. Here, we perform a theoretical analysis of Noise2Noise for the limited case of Gaussian noise removal using Stein's Unbiased Risk Estimator (SURE). We extend SURE to deal with a pair of noise realizations so that it can be compared directly with Noise2Noise. We then show that Noise2Noise with Gaussian noise is a special case of our newly extended SURE with a pair of uncorrelated noise realizations. Lastly, we propose a compensation method for clipped Gaussian noise so that it approximately follows a normal distribution, and show how this compensation can be used for SURE-based unsupervised denoiser training. We also show that our theoretical analysis provides insights into how to use Noise2Noise for clipped Gaussian noise.
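
For context, the standard Monte Carlo SURE objective for Gaussian denoising can be sketched in a few lines of PyTorch. This is a minimal illustration of the kind of unsupervised loss this work builds on, not the paper's extended SURE with paired noise realizations; the function name and PyTorch usage are illustrative assumptions.

```python
import torch

def mc_sure_loss(denoiser, y, sigma, eps=1e-3):
    """Monte Carlo SURE loss for a Gaussian denoiser f applied to noisy input y.
    Sketch only; the paper's pair-of-realizations extension is not shown here."""
    n = y.numel()
    out = denoiser(y)
    # Data-fidelity term ||f(y) - y||^2
    fidelity = torch.sum((out - y) ** 2)
    # Monte Carlo estimate of the divergence of f at y using a random probe b
    b = torch.randn_like(y)
    divergence = torch.sum(b * (denoiser(y + eps * b) - out)) / eps
    # SURE = ||f(y) - y||^2 - n*sigma^2 + 2*sigma^2 * div f(y), averaged per pixel
    return (fidelity - n * sigma ** 2 + 2.0 * sigma ** 2 * divergence) / n
```

Minimizing this quantity over a network's parameters approximates minimizing the mean squared error to the unknown clean image, which is what makes training without ground truth possible.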

* 9 pages, 5 figures 

Simultaneous compressive image recovery and deep denoiser learning from undersampled measurements

Jun 04, 2018
Magauiya Zhussip, Se Young Chun


Compressive image recovery utilizes sparse image priors such as the wavelet l1 norm, the total-variation (TV) norm, or self-similarity to reconstruct good-quality images from highly compressive samples. Recently, there have been attempts to exploit data-driven image priors learned from massive amounts of clean images, such as the LDAMP algorithm. By utilizing large-scale noiseless images to train deep neural network denoisers, LDAMP outperformed other conventional compressive image reconstruction methods. However, one drawback of LDAMP is that large-scale noiseless images must be acquired to train the deep learning based denoisers. In this article, we propose a method for simultaneous compressive image recovery and deep denoiser learning from undersampled measurements, which enables compressive image recovery methods to use data-driven image priors when only large-scale compressive samples are available, without ground truth images. By utilizing the structure of LDAMP and a Stein's Unbiased Risk Estimator (SURE) based deep neural network denoiser, we show that our proposed methods achieve better performance than conventional BM3D-AMP and LDAMP methods trained on the results of BM3D-AMP for the training and/or testing data, in all cases with i.i.d. Gaussian random and coded diffraction measurement matrices at various compression ratios. We also investigate accurate noise level estimation in LDAMP for the coded diffraction measurement matrix to train deep denoiser networks for high performance.
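
For readers unfamiliar with the D-AMP family of methods that LDAMP unrolls, the following is a minimal NumPy sketch of a denoising-based AMP iteration with a plug-in Gaussian denoiser. The names (`damp_recover`, `denoise`) and hyperparameters are illustrative assumptions; it is not the authors' LDAMP implementation, which unrolls a fixed number of such iterations and learns the denoisers.

```python
import numpy as np

def damp_recover(y, A, denoise, n_iters=10, eps=1e-3):
    """D-AMP-style recovery of x from compressive measurements y = A x.
    A is an m x n measurement matrix and denoise(r, sigma) is any Gaussian
    denoiser (e.g. BM3D or a SURE-trained network). Illustrative sketch only."""
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(n_iters):
        # Effective noise level of the pseudo-data r = x + A^T z
        sigma = np.linalg.norm(z) / np.sqrt(m)
        r = x + A.T @ z
        x_new = denoise(r, sigma)
        # Monte Carlo divergence of the denoiser for the Onsager correction term
        b = np.random.randn(n)
        div = b @ (denoise(r + eps * b, sigma) - x_new) / eps
        z = y - A @ x_new + (z / m) * div
        x = x_new
    return x
```

The Onsager correction term keeps the effective noise in the pseudo-data approximately Gaussian across iterations, which is what allows an off-the-shelf or learned Gaussian denoiser to be plugged in at each step.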

* 10 pages, 3 figures, 2 tables 