Xiaoyu Xian

NegVSR: Augmenting Negatives for Generalized Noise Modeling in Real-World Video Super-Resolution

May 24, 2023
Yexing Song, Meilin Wang, Xiaoyu Xian, Zhijing Yang, Yuming Fan, Yukai Shi

The capability of video super-resolution (VSR) to synthesize high-resolution (HR) video from ideal datasets has been demonstrated in many works. However, applying VSR models to real-world video with unknown and complex degradation remains a challenging task. First, the degradation models used in most VSR methods cannot effectively simulate real-world noise and blur; instead, simple combinations of classical degradations are used for real-world noise modeling, which leads to the VSR model often being violated by out-of-distribution noise. Second, many SR models focus on noise simulation and transfer, yet the sampled noise is monotonous and limited. To address these problems, we propose a negative augmentation strategy for generalized noise modeling in the video super-resolution (NegVSR) task. Specifically, we first propose sequential noise generation on real-world data to extract practical noise sequences. Then, the degradation domain is widely expanded by negative augmentation to build up varied yet challenging real-world noise sets. We further propose an augmented negative guidance loss to effectively learn robust features from the augmented negatives. Extensive experiments on real-world datasets (e.g., VideoLQ and FLIR) show that our method outperforms state-of-the-art methods by clear margins, especially in visual quality.
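The abstract describes a two-step recipe: extract noise sequences from real-world frames, then widen the degradation domain through negative augmentation. Below is a minimal, illustrative PyTorch sketch of that general idea, not the authors' implementation; the patch size, variance threshold, augmentation set, and all function names are assumptions made for this example.

```python
# Illustrative sketch only: build an augmented "negative" noise bank from real
# frames, then inject it into clean frames to synthesize degraded training inputs.
import torch

def extract_noise_patches(frames: torch.Tensor, patch: int = 32) -> torch.Tensor:
    """Crop roughly flat regions from real frames and keep their zero-mean
    residuals as an approximate real-world noise sequence.
    frames: (T, C, H, W) tensor in [0, 1]."""
    T, C, H, W = frames.shape
    patches = []
    for t in range(T):
        for y in range(0, H - patch + 1, patch):
            for x in range(0, W - patch + 1, patch):
                p = frames[t, :, y:y + patch, x:x + patch]
                if p.var() < 1e-3:              # low variance -> mostly noise
                    patches.append(p - p.mean())
    return torch.stack(patches) if patches else torch.zeros(0, C, patch, patch)

def augment_negatives(noise: torch.Tensor) -> torch.Tensor:
    """Expand the extracted noise set with rotations, flips, and amplitude
    scaling to widen the degradation domain."""
    rots = [torch.rot90(noise, k, dims=(-2, -1)) for k in range(4)]
    flips = [torch.flip(n, dims=(-1,)) for n in rots]
    scaled = [n * s for n in rots + flips for s in (0.5, 1.0, 2.0)]
    return torch.cat(scaled, dim=0)

def degrade(clean: torch.Tensor, noise_bank: torch.Tensor) -> torch.Tensor:
    """Tile a randomly chosen augmented noise patch over a clean frame (C, H, W)."""
    n = noise_bank[torch.randint(0, noise_bank.shape[0], (1,)).item()]
    reps = (clean.shape[-2] // n.shape[-2] + 1, clean.shape[-1] // n.shape[-1] + 1)
    tiled = n.repeat(1, *reps)[:, :clean.shape[-2], :clean.shape[-1]]
    return (clean + tiled).clamp(0, 1)
```

In a pipeline of this kind, the augmented noise bank would be sampled during training to turn clean HR frames into realistically degraded low-resolution inputs.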

Open-World Pose Transfer via Sequential Test-Time Adaption

Mar 20, 2023
Junyang Chen, Xiaoyu Xian, Zhijing Yang, Tianshui Chen, Yongyi Lu, Yukai Shi, Jinshan Pan, Liang Lin

Pose transfer, which aims to transfer a given person into a specified posture, has recently attracted considerable attention. A typical pose transfer framework employs representative datasets to train a discriminative model, which is often violated by out-of-distribution (OOD) instances. Recently, test-time adaption (TTA) has offered a feasible solution for OOD data by using a pre-trained model that learns essential features with self-supervision. However, these methods implicitly assume that all test distributions share a unified signal that can be learned directly. In open-world conditions, the pose transfer task raises various independent signals, OOD appearance and skeleton, which need to be extracted and distributed separately. To address this point, we develop SEquential Test-time Adaption (SETA). In the test-time phase, SETA extracts and distributes external appearance texture by augmenting OOD data for self-supervised training. To make the non-Euclidean similarity among different postures explicit, SETA uses image representations derived from a person re-identification (Re-ID) model for similarity computation. By addressing implicit posture representations sequentially at test time, SETA greatly improves the generalization performance of current pose transfer models. In our experiments, we first show that pose transfer can be applied to open-world applications, including TikTok reenactment and celebrity motion synthesis.

* We call for a solid pose transfer model that can handle open-world instances beyond a specific dataset.
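As a rough illustration of the test-time adaption idea referenced in the abstract, the sketch below runs a few self-supervised gradient steps on a copy of a pre-trained pose-transfer model before producing the final output. It is a generic TTA loop, not SETA's sequential scheme; the model call signature, reconstruction loss, and hyperparameters are all hypothetical.

```python
# Illustrative sketch only: adapt a copy of a pre-trained pose-transfer model
# to one OOD test instance with a self-supervised objective, then transfer.
import copy
import torch
import torch.nn.functional as F

def test_time_adapt(model: torch.nn.Module,
                    source_img: torch.Tensor,    # (1, C, H, W) source appearance
                    source_pose: torch.Tensor,   # pose/skeleton of the source
                    target_pose: torch.Tensor,   # pose to transfer to
                    steps: int = 5,
                    lr: float = 1e-4) -> torch.Tensor:
    adapted = copy.deepcopy(model)       # keep the deployed weights untouched
    adapted.train()
    opt = torch.optim.Adam(adapted.parameters(), lr=lr)
    for _ in range(steps):
        # Self-supervision: reconstruct the source image from its own pose so
        # the model absorbs the unseen appearance/texture at test time.
        recon = adapted(source_img, source_pose)
        loss = F.l1_loss(recon, source_img)
        opt.zero_grad()
        loss.backward()
        opt.step()
    adapted.eval()
    with torch.no_grad():
        return adapted(source_img, target_pose)
```

Adapting a per-instance copy keeps the deployed model unchanged, which is the usual safeguard when each open-world test instance carries its own distribution shift.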