Xiaoyu Xiang

Learning Neural Duplex Radiance Fields for Real-Time View Synthesis

Apr 20, 2023
Ziyu Wan, Christian Richardt, Aljaž Božič, Chao Li, Vijay Rengarajan, Seonghyeon Nam, Xiaoyu Xiang, Tuotuo Li, Bo Zhu, Rakesh Ranjan, Jing Liao

Neural radiance fields (NeRFs) enable novel view synthesis with unprecedented visual quality. However, to render photorealistic images, NeRFs require hundreds of deep multilayer perceptron (MLP) evaluations for each pixel. This is prohibitively expensive and makes real-time rendering infeasible, even on powerful modern GPUs. In this paper, we propose a novel approach to distill and bake NeRFs into highly efficient mesh-based neural representations that are fully compatible with the massively parallel graphics rendering pipeline. We represent scenes as neural radiance features encoded on a two-layer duplex mesh, which effectively overcomes the inherent inaccuracies in 3D surface reconstruction by learning the aggregated radiance information from a reliable interval of ray-surface intersections. To exploit local geometric relationships of nearby pixels, we leverage screen-space convolutions instead of the MLPs used in NeRFs to achieve high-quality appearance. Finally, the performance of the whole framework is further boosted by a novel multi-view distillation optimization strategy. We demonstrate the effectiveness and superiority of our approach via extensive experiments on a range of standard datasets.
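
As a rough illustration of the baking idea described above, the following sketch decodes per-pixel features rasterized from the two mesh layers with a small screen-space CNN instead of a per-sample MLP. The module name, feature dimensions, and layer sizes are assumptions for illustration, not the paper's actual architecture.

```python
# Minimal sketch: decode duplex-mesh features with a screen-space CNN.
# All shapes and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class DuplexScreenSpaceDecoder(nn.Module):
    """Decodes per-pixel neural features rasterized from two mesh layers into RGB."""
    def __init__(self, feat_dim=8, hidden=32):
        super().__init__()
        # Features from the inner and outer mesh layers are concatenated per pixel.
        self.net = nn.Sequential(
            nn.Conv2d(2 * feat_dim, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, inner_feat, outer_feat):
        # inner_feat, outer_feat: (B, feat_dim, H, W) rasterized neural features
        x = torch.cat([inner_feat, outer_feat], dim=1)
        return self.net(x)  # (B, 3, H, W) RGB image

if __name__ == "__main__":
    decoder = DuplexScreenSpaceDecoder()
    inner, outer = torch.randn(1, 8, 128, 128), torch.randn(1, 8, 128, 128)
    print(decoder(inner, outer).shape)  # torch.Size([1, 3, 128, 128])
```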

* CVPR 2023. Project page: http://raywzy.com/NDRF 

Efficient and Explicit Modelling of Image Hierarchies for Image Restoration

Mar 01, 2023
Yawei Li, Yuchen Fan, Xiaoyu Xiang, Denis Demandolx, Rakesh Ranjan, Radu Timofte, Luc Van Gool

The aim of this paper is to propose a mechanism to efficiently and explicitly model image hierarchies in the global, regional, and local range for image restoration. To achieve that, we start by analyzing two important properties of natural images: cross-scale similarity and anisotropic image features. Inspired by these properties, we propose anchored stripe self-attention, which achieves a good balance between the space and time complexity of self-attention and the modelling capacity beyond the regional range. Then we propose a new network architecture dubbed GRL to explicitly model image hierarchies in the Global, Regional, and Local range via anchored stripe self-attention, window self-attention, and channel attention enhanced convolution. Finally, the proposed network is applied to 7 image restoration types, covering both real and synthetic settings. The proposed method sets the new state-of-the-art for several of those. Code will be available at https://github.com/ofsoundof/GRL-Image-Restoration.git.
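
As a hedged sketch of how anchoring reduces the quadratic cost of stripe-wise self-attention, the snippet below factorizes token-to-token attention through a small set of mean-pooled anchor tokens. This is a simplification for illustration (the function name, stripe size, and anchor count are assumptions), not the official GRL implementation.

```python
# Simplified anchored stripe attention: attention is routed through a few
# anchor tokens per stripe, giving O(N*M) cost instead of O(N^2).
import torch

def anchored_stripe_attention(x, stripe_size=8, num_anchors=4):
    # x: (B, H, W, C); rows are grouped into horizontal stripes of `stripe_size`.
    B, H, W, C = x.shape
    S = H // stripe_size
    tokens = x.view(B, S, stripe_size * W, C)                       # (B, S, N, C)
    N = tokens.shape[2]
    # Anchors: mean-pooled summaries of each stripe (N must be divisible by num_anchors).
    anchors = tokens.view(B, S, num_anchors, N // num_anchors, C).mean(dim=3)
    scale = C ** -0.5
    attn_qa = torch.softmax(tokens @ anchors.transpose(-2, -1) * scale, dim=-1)  # (B,S,N,M)
    attn_ak = torch.softmax(anchors @ tokens.transpose(-2, -1) * scale, dim=-1)  # (B,S,M,N)
    out = attn_qa @ (attn_ak @ tokens)                              # (B, S, N, C)
    return out.view(B, H, W, C)

if __name__ == "__main__":
    print(anchored_stripe_attention(torch.randn(1, 32, 32, 16)).shape)
```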

* Accepted by CVPR 2023. 12 pages, 7 figures, 11 tables 

FSID: Fully Synthetic Image Denoising via Procedural Scene Generation

Dec 07, 2022
Gyeongmin Choe, Beibei Du, Seonghyeon Nam, Xiaoyu Xiang, Bo Zhu, Rakesh Ranjan

For low-level computer vision and image processing ML tasks, training on large datasets is critical for generalization. However, the standard practice of relying on real-world images primarily from the Internet comes with image quality, scalability, and privacy issues, especially in commercial contexts. To address this, we have developed a procedural synthetic data generation pipeline and dataset tailored to low-level vision tasks. Our Unreal engine-based synthetic data pipeline populates large scenes algorithmically with a combination of random 3D objects, materials, and geometric transformations. Then, we calibrate the camera noise profiles to synthesize the noisy images. From this pipeline, we generated a fully synthetic image denoising dataset (FSID) which consists of 175,000 noisy/clean image pairs. We then trained and validated a CNN-based denoising model, and demonstrated that the model trained on this synthetic data alone can achieve competitive denoising results when evaluated on real-world noisy images captured with smartphone cameras.
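
The camera-noise calibration step lends itself to a small illustration: below is a hedged sketch of synthesizing a noisy counterpart of a clean rendering with a generic Poisson-Gaussian (shot plus read) noise model. The parameter names and values are placeholders; the paper's calibrated per-camera profiles would supply the real ones.

```python
# Sketch of noisy/clean pair synthesis with a generic shot + read noise model.
# shot_gain and read_sigma are placeholder values, not calibrated camera profiles.
import numpy as np

def add_camera_noise(clean, shot_gain=0.01, read_sigma=0.002, rng=None):
    """clean: float32 array in [0, 1], shape (H, W, C), linear RAW-like intensities."""
    rng = np.random.default_rng() if rng is None else rng
    # Shot noise: variance proportional to the signal; read noise: signal-independent.
    shot = rng.poisson(clean / shot_gain) * shot_gain
    read = rng.normal(0.0, read_sigma, size=clean.shape)
    return np.clip(shot + read, 0.0, 1.0).astype(np.float32)

if __name__ == "__main__":
    clean = np.random.rand(64, 64, 3).astype(np.float32)
    noisy = add_camera_noise(clean)
    print(noisy.shape, noisy.dtype)  # (64, 64, 3) float32
```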

Recurrent Video Restoration Transformer with Guided Deformable Attention

Jun 05, 2022
Jingyun Liang, Yuchen Fan, Xiaoyu Xiang, Rakesh Ranjan, Eddy Ilg, Simon Green, Jiezhang Cao, Kai Zhang, Radu Timofte, Luc Van Gool

Video restoration aims at restoring multiple high-quality frames from multiple low-quality frames. Existing video restoration methods generally fall into two extreme cases, i.e., they either restore all frames in parallel or restore the video frame by frame in a recurrent way, each with its own merits and drawbacks. Typically, the former has the advantage of temporal information fusion but suffers from a large model size and intensive memory consumption; the latter has a relatively small model size as it shares parameters across frames, but it lacks long-range dependency modeling ability and parallelizability. In this paper, we attempt to integrate the advantages of the two cases by proposing a recurrent video restoration transformer, namely RVRT. RVRT processes local neighboring frames in parallel within a globally recurrent framework, which achieves a good trade-off between model size, effectiveness, and efficiency. Specifically, RVRT divides the video into multiple clips and uses the previously inferred clip feature to estimate the subsequent clip feature. Within each clip, different frame features are jointly updated with implicit feature aggregation. Across different clips, guided deformable attention is designed for clip-to-clip alignment: it predicts multiple relevant locations from the whole inferred clip and aggregates their features with an attention mechanism. Extensive experiments on video super-resolution, deblurring, and denoising show that the proposed RVRT achieves state-of-the-art performance on benchmark datasets with a balanced model size, testing memory, and runtime.
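
The clip-recurrent pattern described above can be sketched as follows: the video is split into short clips, frames within a clip are refined jointly, and the previously inferred clip feature conditions the next clip. The convolutional blocks below are simple stand-ins for RVRT's transformer and guided deformable attention modules, so this is a structural sketch only.

```python
# Structural sketch of clip-recurrent restoration (stand-in modules only).
import torch
import torch.nn as nn

class ClipRecurrentRestorer(nn.Module):
    """Processes a video clip by clip; each clip is updated jointly and
    conditioned on the feature of the previously inferred clip."""
    def __init__(self, channels=16, clip_len=2):
        super().__init__()
        self.clip_len = clip_len
        self.embed = nn.Conv2d(3, channels, 3, padding=1)
        # Stand-in for intra-clip joint refinement (the paper uses transformer blocks).
        self.intra = nn.Conv2d(channels * clip_len, channels * clip_len, 3, padding=1)
        # Stand-in for clip-to-clip alignment (the paper uses guided deformable attention).
        self.cross = nn.Conv2d(channels * clip_len * 2, channels * clip_len, 3, padding=1)
        self.to_rgb = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, video):
        # video: (B, T, 3, H, W) with T divisible by clip_len in this sketch
        B, T, _, H, W = video.shape
        prev, outputs = None, []
        for t in range(0, T, self.clip_len):
            clip = video[:, t:t + self.clip_len]                       # (B, L, 3, H, W)
            feat = self.embed(clip.flatten(0, 1)).view(B, -1, H, W)    # (B, L*C, H, W)
            feat = self.intra(feat)
            if prev is not None:
                feat = self.cross(torch.cat([feat, prev], dim=1))      # condition on last clip
            prev = feat
            frames = feat.view(B * self.clip_len, -1, H, W)
            outputs.append(self.to_rgb(frames).view(B, self.clip_len, 3, H, W))
        return torch.cat(outputs, dim=1)                               # (B, T, 3, H, W)

if __name__ == "__main__":
    model = ClipRecurrentRestorer()
    print(model(torch.randn(1, 6, 3, 32, 32)).shape)  # torch.Size([1, 6, 3, 32, 32])
```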

* Code: https://github.com/JingyunLiang/RVRT 

HIME: Efficient Headshot Image Super-Resolution with Multiple Exemplars

Mar 28, 2022
Xiaoyu Xiang, Jon Morton, Fitsum A Reda, Lucas Young, Federico Perazzi, Rakesh Ranjan, Amit Kumar, Andrea Colaco, Jan Allebach

A promising direction for recovering the lost information in low-resolution headshot images is utilizing a set of high-resolution exemplars from the same identity. Complementary images in the reference set can improve the generated headshot quality across many different views and poses. However, it is challenging to make the best use of multiple exemplars: the quality and alignment of each exemplar cannot be guaranteed. Using low-quality and mismatched images as references will impair the output results. To overcome these issues, we propose an efficient Headshot Image Super-Resolution with Multiple Exemplars (HIME) network. Compared with previous methods, our network can effectively handle the misalignment between the input and the reference without requiring facial priors, and it learns the aggregated reference set representation in an end-to-end manner. Furthermore, to reconstruct more detailed facial features, we propose a correlation loss that provides a rich representation of the local texture in a controllable spatial range. Experimental results demonstrate that the proposed framework not only has a significantly lower computation cost than recent exemplar-guided methods but also achieves better qualitative and quantitative performance.
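
As a hedged sketch of a correlation-style texture loss, the snippet below computes channel correlation matrices inside local windows and matches them between the prediction and the ground truth. The window size and exact formulation are assumptions; HIME's actual correlation loss may differ.

```python
# Sketch: match local (window-wise) channel correlations of predicted and GT features.
import torch
import torch.nn.functional as F

def local_correlation(feat, window=8):
    # feat: (B, C, H, W) -> per-window channel correlation (Gram-like) matrices.
    B, C, H, W = feat.shape
    patches = F.unfold(feat, kernel_size=window, stride=window)   # (B, C*w*w, L)
    patches = patches.view(B, C, window * window, -1)             # (B, C, w*w, L)
    patches = patches - patches.mean(dim=2, keepdim=True)
    patches = F.normalize(patches, dim=2)
    # Channel-by-channel correlation within each local window.
    return torch.einsum('bcnl,bdnl->bcdl', patches, patches)      # (B, C, C, L)

def correlation_loss(pred_feat, gt_feat, window=8):
    return F.l1_loss(local_correlation(pred_feat, window),
                     local_correlation(gt_feat, window))

if __name__ == "__main__":
    pred, gt = torch.randn(1, 16, 32, 32), torch.randn(1, 16, 32, 32)
    print(correlation_loss(pred, gt).item())
```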

* Technical Report 

Learning Spatio-Temporal Downsampling for Effective Video Upscaling

Mar 15, 2022
Xiaoyu Xiang, Yapeng Tian, Vijay Rengarajan, Lucas Young, Bo Zhu, Rakesh Ranjan

Downsampling is one of the most basic image processing operations. Improper spatio-temporal downsampling applied on videos can cause aliasing issues such as moiré patterns in space and the wagon-wheel effect in time. Consequently, the inverse task of upscaling a low-resolution, low frame-rate video in space and time becomes a challenging ill-posed problem due to information loss and aliasing artifacts. In this paper, we aim to solve the space-time aliasing problem by learning a spatio-temporal downsampler. Towards this goal, we propose a neural network framework that jointly learns spatio-temporal downsampling and upsampling. It enables the downsampler to retain the key patterns of the original video and maximizes the reconstruction performance of the upsampler. To make the downsampling results compatible with popular image and video storage formats, the downsampling results are encoded to uint8 with a differentiable quantization layer. To fully utilize the space-time correspondences, we propose two novel modules for explicit temporal propagation and space-time feature rearrangement. Experimental results show that our proposed method significantly boosts the space-time reconstruction quality by preserving spatial textures and motion patterns in both downsampling and upscaling. Moreover, our framework enables a variety of applications, including arbitrary video resampling, blurry frame reconstruction, and efficient video storage.
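
The differentiable uint8 quantization step can be illustrated with a straight-through estimator, as in the sketch below; this is a common construction and only an assumed stand-in for the paper's layer.

```python
# Sketch: uint8 quantization with a straight-through gradient estimator.
import torch

class DifferentiableQuantize(torch.nn.Module):
    def forward(self, x):
        # x: downsampled video in [0, 1]; quantize to 256 levels (uint8 storage).
        q = torch.round(x.clamp(0, 1) * 255.0) / 255.0
        # Straight-through: forward uses the quantized value, backward is identity.
        return x + (q - x).detach()

if __name__ == "__main__":
    x = torch.rand(2, 3, 4, 4, requires_grad=True)
    y = DifferentiableQuantize()(x)
    y.sum().backward()
    print(x.grad.abs().sum().item() > 0)  # gradients flow through the quantizer
```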

* Main paper: 13 pages, 8 figures; appendix: 8 pages, 10 figures 

STDAN: Deformable Attention Network for Space-Time Video Super-Resolution

Mar 14, 2022
Hai Wang, Xiaoyu Xiang, Yapeng Tian, Wenming Yang, Qingmin Liao

The target of space-time video super-resolution (STVSR) is to increase the spatial-temporal resolution of low-resolution (LR) and low frame rate (LFR) videos. Recent approaches based on deep learning have made significant improvements, but most of them only use two adjacent frames, that is, short-term features, to synthesize the missing frame embedding, which fails to fully explore the information flow of consecutive input LR frames. In addition, existing STVSR models hardly exploit the temporal contexts explicitly to assist high-resolution (HR) frame reconstruction. To address these issues, in this paper, we propose a deformable attention network called STDAN for STVSR. First, we devise a long-short term feature interpolation (LSTFI) module, which is capable of excavating abundant content from more neighboring input frames for the interpolation process through a bidirectional RNN structure. Second, we put forward a spatial-temporal deformable feature aggregation (STDFA) module, in which spatial and temporal contexts in dynamic video frames are adaptively captured and aggregated to enhance SR reconstruction. Experimental results on several datasets demonstrate that our approach outperforms state-of-the-art STVSR methods.
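
To illustrate the flavor of deformable feature aggregation, the sketch below predicts a per-pixel offset and attention weight, samples the neighbor-frame feature at the offset location with grid_sample, and fuses it with the reference feature. It is a simplified single-offset version with assumed layer sizes, not the STDFA module itself.

```python
# Simplified single-offset deformable aggregation between a reference and a neighbor frame.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformableAggregation(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        # Predict a 2D offset and an attention weight per pixel from the two frames.
        self.offset_attn = nn.Conv2d(channels * 2, 3, 3, padding=1)

    def forward(self, ref, neighbor):
        B, C, H, W = ref.shape
        pred = self.offset_attn(torch.cat([ref, neighbor], dim=1))
        offset, attn = pred[:, :2], torch.sigmoid(pred[:, 2:])
        # Build a sampling grid in [-1, 1] and shift it by the predicted offsets.
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W),
                                indexing='ij')
        base = torch.stack([xs, ys], dim=-1).to(ref).expand(B, H, W, 2)
        grid = base + offset.permute(0, 2, 3, 1)
        sampled = F.grid_sample(neighbor, grid, align_corners=True)
        return ref + attn * sampled   # attention-weighted fusion with the reference

if __name__ == "__main__":
    agg = DeformableAggregation()
    print(agg(torch.randn(1, 16, 32, 32), torch.randn(1, 16, 32, 32)).shape)
```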

Zooming SlowMo: An Efficient One-Stage Framework for Space-Time Video Super-Resolution

Apr 15, 2021
Xiaoyu Xiang, Yapeng Tian, Yulun Zhang, Yun Fu, Jan P. Allebach, Chenliang Xu

In this paper, we address space-time video super-resolution, which aims at generating a high-resolution (HR) slow-motion video from a low-resolution (LR) and low frame rate (LFR) video sequence. A naïve method is to decompose it into two sub-tasks: video frame interpolation (VFI) and video super-resolution (VSR). Nevertheless, temporal interpolation and spatial upscaling are intra-related in this problem, and two-stage approaches cannot fully make use of this natural property. Besides, state-of-the-art VFI or VSR deep networks usually have a large frame reconstruction module in order to obtain high-quality photo-realistic video frames, which makes two-stage approaches have large models and thus be relatively time-consuming. To overcome these issues, we present a one-stage space-time video super-resolution framework, which can directly reconstruct an HR slow-motion video sequence from an input LR and LFR video. Instead of reconstructing missing LR intermediate frames as VFI models do, we temporally interpolate the features of the missing LR frames with a feature temporal interpolation module that captures local temporal contexts. Extensive experiments on widely used benchmarks demonstrate that the proposed framework not only achieves better qualitative and quantitative performance on both clean and noisy LR frames but also is several times faster than recent state-of-the-art two-stage networks. The source code is released at https://github.com/Mukosame/Zooming-Slow-Mo-CVPR-2020 .
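
The feature temporal interpolation idea can be sketched as blending the features of the two existing LR frames with learned per-pixel weights, rather than first synthesizing an LR intermediate frame. The real module also uses deformable sampling; the stand-in below only conveys the feature-space interpolation step, with assumed channel sizes.

```python
# Sketch: interpolate the missing frame's features from its two LR neighbors.
import torch
import torch.nn as nn

class FeatureTemporalInterpolation(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        self.blend = nn.Sequential(
            nn.Conv2d(channels * 2, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, feat_prev, feat_next):
        # feat_prev, feat_next: (B, C, H, W) features of the two existing LR frames.
        w = self.blend(torch.cat([feat_prev, feat_next], dim=1))  # per-pixel weight
        return w * feat_prev + (1 - w) * feat_next                # interpolated feature

if __name__ == "__main__":
    interp = FeatureTemporalInterpolation()
    print(interp(torch.randn(1, 16, 32, 32), torch.randn(1, 16, 32, 32)).shape)
```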

* Journal version of "Zooming Slow-Mo: Fast and Accurate One-Stage Space-Time Video Super-Resolution" (CVPR 2020). 14 pages, 14 figures 

Adversarial Open Domain Adaption for Sketch-to-Photo Synthesis

Apr 12, 2021
Xiaoyu Xiang, Ding Liu, Xiao Yang, Yiheng Zhu, Xiaohui Shen, Jan P. Allebach

In this paper, we explore open-domain sketch-to-photo translation, which aims to synthesize a realistic photo from a freehand sketch with its class label, even if sketches of that class are missing in the training data. The task is challenging due to the lack of training supervision and the large geometric distortion between the freehand sketch and photo domains. To synthesize the absent freehand sketches from photos, we propose a framework that jointly learns sketch-to-photo and photo-to-sketch generation. However, the generator trained on fake sketches might produce unsatisfying results when dealing with sketches of missing classes, due to the domain gap between synthesized sketches and real ones. To alleviate this issue, we further propose a simple yet effective open-domain sampling and optimization strategy to "fool" the generator into treating fake sketches as real ones. Our method takes advantage of the learned sketch-to-photo and photo-to-sketch mappings of in-domain data and generalizes them to the open-domain classes. We validate our method on the Scribble and SketchyCOCO datasets. Compared with recent competing methods, our approach shows impressive results in synthesizing realistic color and texture and in maintaining the geometric composition for various categories of open-domain sketches.
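
A schematic sketch of the open-domain training step: photos whose classes lack real sketches are first translated into synthetic sketches, which are then fed to the sketch-to-photo generator as if they were real. The generators, mask, and loss below are placeholders (adversarial terms are omitted), not the paper's actual networks or objective.

```python
# Sketch of one simplified joint training step for open-domain sketch-to-photo synthesis.
import torch
import torch.nn as nn

def open_domain_step(G_s2p, G_p2s, photo, sketch, has_real_sketch, l1=nn.L1Loss()):
    """photo, sketch: (B, 3, H, W); has_real_sketch: (B,) bool mask per sample."""
    fake_sketch = G_p2s(photo)
    # For open-domain classes, substitute the synthesized sketch for the missing real one.
    mixed_sketch = torch.where(has_real_sketch[:, None, None, None], sketch, fake_sketch)
    recon_photo = G_s2p(mixed_sketch)
    # Reconstruction objective only (adversarial terms omitted in this sketch).
    return l1(recon_photo, photo)

if __name__ == "__main__":
    G_s2p = nn.Conv2d(3, 3, 3, padding=1)   # placeholder generators
    G_p2s = nn.Conv2d(3, 3, 3, padding=1)
    photo, sketch = torch.randn(4, 3, 64, 64), torch.randn(4, 3, 64, 64)
    mask = torch.tensor([True, False, True, False])
    print(open_domain_step(G_s2p, G_p2s, photo, sketch, mask).item())
```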

* 19 pages, 17 figures 

Feature-Align Network with Knowledge Distillation for Efficient Denoising

Mar 18, 2021
Lucas D. Young, Fitsum A. Reda, Rakesh Ranjan, Jon Morton, Jun Hu, Yazhu Ling, Xiaoyu Xiang, David Liu, Vikas Chandra

We propose an efficient neural network for RAW image denoising. Although neural network-based denoising has been extensively studied for image restoration, little attention has been given to efficient denoising for compute-limited and power-sensitive devices, such as smartphones and smartwatches. In this paper, we present a novel architecture and a suite of training techniques for high-quality denoising on mobile devices. Our work is distinguished by three main contributions. (1) A Feature-Align layer that modulates the activations of an encoder-decoder architecture with the input noisy image. The auto-modulation layer enforces attention to spatially varying noise that tends to be "washed away" by successive applications of convolutions and non-linearities. (2) A novel Feature Matching Loss that allows knowledge distillation from large denoising networks in the form of a perceptual content loss. (3) An empirical analysis of our efficient model trained to specialize on different noise subranges. This opens an additional avenue for model size reduction by sacrificing memory for compute. Extensive experimental validation shows that our efficient model produces high-quality denoising results that compete with state-of-the-art large networks, while using significantly fewer parameters and MACs. On the Darmstadt Noise Dataset benchmark, we achieve a PSNR of 48.28 dB while using 263 times fewer MACs and 17.6 times fewer parameters than the state-of-the-art network, which achieves 49.12 dB.
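
A hedged sketch of a Feature-Align-style modulation layer is given below: scale and shift maps are predicted from the resized noisy input and applied to intermediate activations, which keeps the network attentive to spatially varying noise. The 4-channel RAW input and layer sizes are assumptions, not the paper's exact design.

```python
# Sketch: modulate decoder activations with maps predicted from the noisy input.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureAlign(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.to_scale = nn.Conv2d(4, channels, 3, padding=1)   # 4-channel RAW input assumed
        self.to_shift = nn.Conv2d(4, channels, 3, padding=1)

    def forward(self, feat, noisy_input):
        # feat: (B, C, h, w) intermediate activations; noisy_input: (B, 4, H, W) RAW image.
        guide = F.interpolate(noisy_input, size=feat.shape[-2:], mode='bilinear',
                              align_corners=False)
        return feat * (1 + self.to_scale(guide)) + self.to_shift(guide)

if __name__ == "__main__":
    layer = FeatureAlign()
    print(layer(torch.randn(1, 32, 16, 16), torch.randn(1, 4, 64, 64)).shape)
```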
