Xueyang Fu

Revisiting Single Image Reflection Removal In the Wild

Nov 29, 2023
Yurui Zhu, Xueyang Fu, Peng-Tao Jiang, Hao Zhang, Qibin Sun, Jinwei Chen, Zheng-Jun Zha, Bo Li

This research focuses on single-image reflection removal (SIRR) in real-world conditions, examining it from two angles: the collection pipeline of real reflection pairs and the perception of real reflection locations. We devise an advanced reflection collection pipeline that is highly adaptable to a wide range of real-world reflection scenarios and reduces the cost of collecting large-scale aligned reflection pairs. Using this pipeline, we construct a large-scale, high-quality reflection dataset named Reflection Removal in the Wild (RRW). RRW contains over 14,950 high-resolution real-world reflection pairs, forty-five times larger than its predecessors. Regarding the perception of reflection locations, we observe that many virtual reflection objects visible in reflection images are absent from the corresponding ground-truth images. This observation, drawn from the aligned pairs, leads us to propose the Maximum Reflection Filter (MaxRF), which can accurately and explicitly characterize reflection locations from pairs of images. Building upon this, we design a reflection location-aware cascaded framework specifically tailored for SIRR. Powered by these techniques, our solution outperforms current leading methods across multiple real-world benchmarks. Code and datasets will be made publicly available.
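
The abstract does not detail how the MaxRF is computed; as a rough illustration of the idea only, the sketch below derives a reflection-location map from an aligned pair by max-filtering the positive residual between the reflection-contaminated image and its ground truth. The function name, window size, and filtering choice are our assumptions, not the paper's implementation.

```python
import torch.nn.functional as F

def max_reflection_filter(reflected, clean, window=7):
    """Hypothetical sketch of a MaxRF-style reflection locator.

    reflected, clean: aligned (B, 3, H, W) tensors in [0, 1].
    Reflections add light on top of the transmitted scene, so the
    positive residual highlights them; a local maximum filter then
    turns it into an explicit location map.
    """
    residual = (reflected - clean).clamp(min=0)          # additive reflection energy
    residual = residual.max(dim=1, keepdim=True).values  # strongest channel response
    # Stride-1 max pooling acts as a local maximum filter at full resolution.
    return F.max_pool2d(residual, window, stride=1, padding=window // 2)
```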

Unfolding Taylor's Approximations for Image Restoration

Sep 08, 2021
Man Zhou, Zeyu Xiao, Xueyang Fu, Aiping Liu, Gang Yang, Zhiwei Xiong

Deep learning provides a new avenue for image restoration, which demands a delicate balance between fine-grained details and high-level contextualized information while recovering the latent clear image. In practice, however, existing methods empirically construct encapsulated end-to-end mapping networks without examining the underlying rationale, and neglect the intrinsic prior knowledge of the restoration task. To address these problems, inspired by Taylor's Approximations, we unfold Taylor's Formula to construct a novel framework for image restoration. We find that the main part and the derivative part of Taylor's Approximations play the same roles as the two competing goals of image restoration: high-level contextualized information and spatial details, respectively. Specifically, our framework consists of two steps, responsible for the mapping function and the derivative function respectively. The former learns the high-level contextualized information, and the latter combines it with the degraded input to progressively recover local high-order spatial details. Our proposed framework is orthogonal to existing methods and thus can be easily integrated with them for further improvement, and extensive experiments demonstrate its effectiveness and scalability.
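
To make the correspondence concrete, one way to write the unfolding (our notation, not necessarily the paper's) expands the restoration mapping $f$ around a reference point $y_0$ of the degraded input $y$:

```latex
% x = f(y): latent clean image as a function of the degraded input.
% The main (zeroth-order) term carries high-level contextualized
% information; the truncated derivative terms progressively add
% local high-order spatial details.
f(y) \;=\; \underbrace{f(y_0)}_{\text{main part: context}}
\;+\; \underbrace{\sum_{k=1}^{K} \frac{f^{(k)}(y_0)}{k!}\,(y - y_0)^{k}}_{\text{derivative part: spatial details}}
```

The two-step framework then assigns one subnetwork to the mapping term and another to the accumulated derivative terms.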

Twice Mixing: A Rank Learning based Quality Assessment Approach for Underwater Image Enhancement

Feb 01, 2021
Zhenqi Fu, Xueyang Fu, Yue Huang, Xinghao Ding

To improve the quality of underwater images, various underwater image enhancement (UIE) operators have been proposed over the past few years. However, the lack of effective objective evaluation methods limits the further development of UIE techniques. In this paper, we propose a novel rank-learning-guided no-reference quality assessment method for UIE. Our approach, termed Twice Mixing, is motivated by the observation that a mid-quality image can be generated by mixing a high-quality image with its low-quality version. Typical mixup algorithms linearly interpolate a given pair of inputs; however, the human visual system is non-uniform and non-linear in processing images. Therefore, instead of directly training a deep neural network on the mixed images and their absolute scores calculated by linear combination, we propose to train a Siamese network to learn their quality rankings. Twice Mixing is trained with an elaborately formulated self-supervision mechanism. Specifically, before each iteration, we randomly generate two mixing ratios, which are employed both for generating virtual images and for guiding the network training. In the test phase, a single branch of the network is extracted to predict the quality rankings of different UIE outputs. We conduct extensive experiments on both synthetic and real-world datasets, and the results demonstrate that our approach significantly outperforms previous methods.
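
The ranking supervision is simple to sketch. Below is a minimal PyTorch rendering of one Twice Mixing training step, assuming a scalar-scoring backbone; the margin value and loss choice are illustrative, not the paper's exact settings.

```python
import torch
import torch.nn as nn

def twice_mixing_step(backbone, high_q, low_q):
    """One self-supervised Twice Mixing iteration (illustrative sketch,
    not the authors' code). backbone maps an image to a quality score.
    """
    # Two random mixing ratios define two virtual images of different quality.
    r1, r2 = sorted(torch.rand(2).tolist(), reverse=True)   # r1 > r2
    mix1 = r1 * high_q + (1 - r1) * low_q                   # closer to high quality
    mix2 = r2 * high_q + (1 - r2) * low_q                   # closer to low quality
    # Siamese: the same backbone scores both branches.
    s1, s2 = backbone(mix1), backbone(mix2)
    # The ranking, not the absolute score, supervises training: s1 > s2.
    target = torch.ones_like(s1)
    return nn.MarginRankingLoss(margin=0.1)(s1, s2, target)
```

Because r1 > r2, the network is only asked to rank the first virtual image higher, never to regress an absolute quality value.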

Real-world Person Re-Identification via Degradation Invariance Learning

Apr 10, 2020
Yukun Huang, Zheng-Jun Zha, Xueyang Fu, Richang Hong, Liang Li

Person re-identification (Re-ID) in real-world scenarios usually suffers from various degradation factors, e.g., low resolution, weak illumination, blurring, and adverse weather. On the one hand, these degradations cause severe loss of discriminative information, which significantly obstructs identity representation learning; on the other hand, the feature mismatch caused by low-level visual variations greatly reduces retrieval performance. An intuitive solution is to apply low-level image restoration methods to improve image quality. However, existing restoration methods cannot be directly applied to real-world Re-ID due to various limitations, e.g., the requirement of reference samples, the domain gap between synthesis and reality, and the incompatibility between low-level and high-level methods. In this paper, we propose a degradation invariance learning framework for real-world person Re-ID. By introducing a self-supervised disentangled representation learning strategy, our method simultaneously extracts identity-related robust features and removes real-world degradations without extra supervision. We use low-resolution images as the main demonstration, and experiments show that our approach achieves state-of-the-art performance on several Re-ID benchmarks. In addition, our framework can be easily extended to other real-world degradation factors, such as weak illumination, with only a few modifications.
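
The abstract leaves the disentanglement machinery abstract; purely as a hedged sketch of the general pattern (an encoder split plus swap-and-reconstruct self-supervision, with all module shapes and losses assumed by us), it might look like this:

```python
import torch.nn as nn

class DisentangledReID(nn.Module):
    """Illustrative sketch of degradation-invariant disentanglement;
    not the paper's architecture."""

    def __init__(self, id_enc, deg_enc, decoder):
        super().__init__()
        self.id_enc = id_enc    # identity-related, degradation-invariant features
        self.deg_enc = deg_enc  # degradation-related features (resolution, lighting, ...)
        self.decoder = decoder  # renders an image from (identity, degradation) codes

    def forward(self, img_a, img_b):
        f_id = self.id_enc(img_a)
        # Self-supervision: swapping in another image's degradation code
        # should re-render the same identity under a different degradation,
        # forcing f_id to carry no degradation information.
        swapped = self.decoder(f_id, self.deg_enc(img_b))
        recon = self.decoder(f_id, self.deg_enc(img_a))
        return f_id, recon, swapped
```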

* To appear in CVPR 2020 
Noise2Blur: Online Noise Extraction and Denoising

Dec 03, 2019
Huangxing Lin, Weihong Zeng, Xinghao Ding, Xueyang Fu, Yue Huang, John Paisley

We propose a new framework called Noise2Blur (N2B) for training robust image denoising models without pre-collected paired noisy/clean images. Training requires only a few (or even a single) noisy images, some random unpaired clean images, and noise-free but blurred labels obtained by predefined filtering of the noisy images. The N2B model consists of two parts: a denoising network and a noise extraction network. First, the noise extraction network learns to output a noise map using the noise information from the denoising network under the guidance of the blurred labels. The noise map is then added to a clean image to generate a new "noisy/clean" image pair. Using the new pair, the denoising network learns to generate clean, high-quality images from noisy observations. The two networks are trained simultaneously and mutually aid each other in learning the noise-to-clean and noise-to-blur mappings. Experiments on several denoising tasks show that the denoising performance of N2B is close to that of other denoising CNNs trained with pre-collected paired data.
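
As a rough sketch of how the two networks could interlock during one training step (the losses and detach points are our assumptions, not the authors' code):

```python
import torch.nn.functional as F

def n2b_step(denoiser, extractor, noisy, clean_unpaired, blurred_label):
    """One illustrative Noise2Blur-style training step."""
    # Noise extraction: estimate a noise map from what the denoiser removes,
    # guided by the blurred (noise-free) label so the map holds noise, not structure.
    denoised = denoiser(noisy)
    noise_map = extractor(noisy - denoised.detach())
    loss_blur = F.l1_loss(noisy - noise_map, blurred_label)
    # Pseudo-pair synthesis: inject the extracted noise into an unpaired
    # clean image, giving the denoiser a supervised "noisy/clean" target.
    pseudo_noisy = clean_unpaired + noise_map.detach()
    loss_denoise = F.l1_loss(denoiser(pseudo_noisy), clean_unpaired)
    return loss_blur + loss_denoise
```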

A^2Net: Adjacent Aggregation Networks for Image Raindrop Removal

Nov 24, 2018
Huangxing Lin, Xueyang Fu, Changxing Jing, Xinghao Ding, Yue Huang

Existing methods for single-image raindrop removal either lack robustness or suffer from heavy parameter burdens. In this paper, we propose a new Adjacent Aggregation Network (A^2Net) with a lightweight architecture to remove raindrops from single images. Instead of directly cascading convolutional layers, we design an adjacent aggregation architecture that better fuses features to generate rich representations, leading to high-quality image reconstruction. To further simplify the learning process, we utilize problem-specific knowledge to force the network to focus on the luminance channel in the YUV color space instead of all RGB channels. By combining the adjacent aggregation operation with the color space transformation, the proposed A^2Net achieves state-of-the-art performance on raindrop removal with a significant reduction in parameters.
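
The color-space trick is easy to make concrete. A minimal sketch, using the standard BT.601 RGB-to-YUV conversion so the raindrop-removal network sees only the 1-channel luminance (the inverse conversion is omitted for brevity):

```python
import torch

def rgb_to_yuv(rgb):
    """BT.601 RGB -> YUV split for a (B, 3, H, W) tensor."""
    r, g, b = rgb.unbind(dim=1)
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.147 * r - 0.289 * g + 0.436 * b
    v = 0.615 * r - 0.515 * g - 0.100 * b
    return y.unsqueeze(1), torch.stack([u, v], dim=1)

def remove_raindrops(net, rgb):
    # Restore luminance only; chrominance is left untouched, so the
    # network faces a 1-channel rather than 3-channel learning problem.
    y, uv = rgb_to_yuv(rgb)
    return net(y), uv
```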

A Deep Tree-Structured Fusion Model for Single Image Deraining

Nov 21, 2018
Xueyang Fu, Qi Qi, Yue Huang, Xinghao Ding, Feng Wu, John Paisley

We propose a simple yet effective deep tree-structured fusion model based on feature aggregation for the deraining problem. We argue that by effectively aggregating features, a relatively simple network can still handle tough image deraining problems well. First, to capture the spatial structure of rain, we use dilated convolutions as our basic network block. We then design a tree-structured fusion architecture that is deployed within each block (spatial information) and across all blocks (content information). Our method is based on the assumption that adjacent features contain redundant information. This redundancy obstructs the generation of new representations and can be reduced by hierarchically fusing adjacent features. The proposed model is thus more compact and makes effective use of spatial and content information. Experiments on synthetic and real-world datasets show that our network achieves better deraining results with fewer parameters.
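
A minimal sketch of the fusion pattern the abstract describes, with adjacent features merged pairwise up a binary tree; the 1x1-convolution fusers and leaf count are our assumptions:

```python
import torch
import torch.nn as nn

class TreeFusion(nn.Module):
    """Sketch of tree-structured fusion: adjacent feature maps are merged
    pairwise, level by level, until one representation remains."""

    def __init__(self, channels, n_leaves=4):
        super().__init__()
        levels = []
        while n_leaves > 1:
            levels.append(nn.ModuleList(
                nn.Conv2d(2 * channels, channels, 1) for _ in range(n_leaves // 2)))
            n_leaves //= 2
        self.levels = nn.ModuleList(levels)

    def forward(self, feats):  # feats: list of adjacent feature maps
        for fuses in self.levels:
            # Fusing each adjacent pair reduces their redundant information.
            feats = [fuse(torch.cat(pair, dim=1))
                     for fuse, pair in zip(fuses, zip(feats[0::2], feats[1::2]))]
        return feats[0]
```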

Lightweight Pyramid Networks for Image Deraining

May 16, 2018
Xueyang Fu, Borong Liang, Yue Huang, Xinghao Ding, John Paisley

Existing deep convolutional neural networks have found major success in image deraining, but at the expense of an enormous number of parameters. This limits their potential application, for example on mobile devices. In this paper, we propose a lightweight pyramid of networks (LPNet) for single-image deraining. Instead of designing complex network structures, we use domain-specific knowledge to simplify the learning process. Specifically, we find that by introducing the mature Gaussian-Laplacian image pyramid decomposition technique into the neural network, the learning problem at each pyramid level is greatly simplified and can be handled by a relatively shallow network with few parameters. We adopt recursive and residual network structures to build the proposed LPNet, which has fewer than 8K parameters while still achieving state-of-the-art performance on rain removal. We also discuss the potential value of LPNet for other low- and high-level vision tasks.
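
For reference, the classic Gaussian-Laplacian decomposition the abstract leans on can be sketched in a few lines; the 5-tap binomial filter below is the standard Burt-Adelson choice, not necessarily LPNet's exact kernel:

```python
import torch
import torch.nn.functional as F

# Fixed 5-tap binomial (approximately Gaussian) filter.
_taps = torch.tensor([1., 4., 6., 4., 1.]) / 16.
KERNEL = (_taps[:, None] @ _taps[None, :]).view(1, 1, 5, 5)

def laplacian_pyramid(img, levels=5):
    """Gaussian-Laplacian decomposition: each band-pass level is a
    simple enough learning problem for a shallow subnetwork."""
    c = img.shape[1]
    pyr = []
    for _ in range(levels - 1):
        blurred = F.conv2d(img, KERNEL.expand(c, 1, 5, 5), padding=2, groups=c)
        pyr.append(img - blurred)        # Laplacian (detail) level
        img = F.avg_pool2d(blurred, 2)   # Gaussian level, next octave
    pyr.append(img)                      # coarsest low-frequency residual
    return pyr
```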

* submitted to IEEE Transactions on Neural Networks and Learning Systems 
Residual-Guide Feature Fusion Network for Single Image Deraining

Apr 20, 2018
Zhiwen Fan, Huafeng Wu, Xueyang Fu, Yue Huang, Xinghao Ding

Single-image rain streak removal is extremely important since rainy images adversely affect many computer vision systems. Deep learning based methods have found great success in image deraining tasks. In this paper, we propose a novel residual-guide feature fusion network, called ResGuideNet, for single image deraining that progressively predicts high-quality reconstructions. Specifically, we propose a cascaded network and adopt residuals generated from shallower blocks to guide deeper blocks. With this strategy, we obtain a coarse-to-fine estimation of the negative residual as the blocks go deeper. The outputs of different blocks are merged into the final reconstruction. We adopt recursive convolution to build each block and apply supervision to all intermediate results, which enables our model to achieve promising performance on synthetic and real-world data while using fewer parameters than previously required. ResGuideNet is detachable, allowing it to adapt to different rainy conditions: for images with light rain streaks, or when computational resources are limited at test time, decent performance can be obtained with only a few building blocks. Experiments validate that ResGuideNet can also benefit other low- and high-level vision tasks.
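
One plausible reading of the residual-guided cascade, sketched below with the block factory, shapes, and merge rule all assumed by us:

```python
import torch
import torch.nn as nn

class ResGuideCascade(nn.Module):
    """Sketch of residual guidance: each block sees the rainy input plus
    all shallower residual estimates, and every intermediate residual
    can be supervised (coarse-to-fine negative residuals)."""

    def __init__(self, make_block, n_blocks=4):
        super().__init__()
        # Block i consumes the image concatenated with i earlier residuals;
        # make_block(i + 1) builds a block for that many input tensors.
        self.blocks = nn.ModuleList(make_block(i + 1) for i in range(n_blocks))

    def forward(self, rainy):
        residuals = []
        for block in self.blocks:
            guide = torch.cat([rainy, *residuals], dim=1)
            residuals.append(block(guide))
        # Train with a loss on every residuals[i] against (clean - rainy);
        # at test time the cascade can be truncated after any block.
        return rainy + residuals[-1], residuals
```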

Clearing the Skies: A deep network architecture for single-image rain removal

Feb 06, 2017
Xueyang Fu, Jiabin Huang, Xinghao Ding, Yinghao Liao, John Paisley

We introduce a deep network architecture called DerainNet for removing rain streaks from an image. Based on a deep convolutional neural network (CNN), we directly learn the mapping relationship between rainy and clean image detail layers from data. Because ground truth corresponding to real-world rainy images is unavailable, we synthesize images with rain for training. In contrast to other common strategies that increase the depth or breadth of the network, we use image processing domain knowledge to modify the objective function and improve deraining with a modestly sized CNN. Specifically, we train DerainNet on the detail (high-pass) layer rather than in the image domain. Although DerainNet is trained on synthetic data, we find that the learned network translates very effectively to real-world images for testing. Moreover, we augment the CNN framework with image enhancement to improve the visual results. Compared with state-of-the-art single-image deraining methods, our method achieves improved rain removal and much faster computation after network training.
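
The detail-layer idea is straightforward to sketch. Here a box filter stands in for the paper's low-pass filter of choice, so treat this only as an illustration of the base/detail split:

```python
import torch.nn.functional as F

def split_detail(img, radius=7):
    """Base/detail decomposition: a low-pass filter gives the base layer,
    and the high-pass remainder is what the CNN operates on."""
    k = 2 * radius + 1
    base = F.avg_pool2d(img, k, stride=1, padding=radius)  # low-pass base layer
    return base, img - base                                # high-pass detail layer

def derain(cnn, rainy):
    base, detail = split_detail(rainy)
    # Rain streaks live almost entirely in the detail layer, so learning
    # the rainy-to-clean mapping there is an easier problem than in the
    # full image domain.
    return base + cnn(detail)
```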
