Abstract: Implicit Neural Representations (INRs) have garnered significant attention for their ability to model complex signals across a variety of domains. Recently, INR-based approaches have emerged as promising frameworks for neural video compression. While conventional methods primarily focus on embedding video content into compact neural networks for efficient representation, they often struggle to reconstruct high-frequency details under stringent model size constraints, which are critical in practical compression scenarios. To address this limitation, we propose an INR-based video representation method that integrates a general-purpose super-resolution (SR) network. Motivated by the observation that high-frequency components exhibit low temporal redundancy across frames, our method entrusts the reconstruction of fine details to the SR network. Experimental results demonstrate that the proposed method outperforms conventional INR-based baselines in terms of reconstruction quality, while maintaining comparable model sizes.
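The abstract does not give the architecture, so the following is only a minimal PyTorch sketch of the division of labour it describes, assuming a frame-index-conditioned INR that outputs a coarse frame and any pre-trained, general-purpose SR model (a bilinear upsampler stands in for one here). All module names, sizes, and the x4 scale are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): a frame-index-conditioned INR emits a
# coarse, low-frequency frame, and a separate general-purpose SR model restores
# the fine detail. The SR model is assumed to be pre-trained and shared across
# videos, so it does not add to the per-video model size.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoarseINR(nn.Module):
    """Toy stand-in for an INR mapping a frame index to a low-resolution frame."""
    def __init__(self, num_frames, ch=3, h=64, w=64):
        super().__init__()
        self.shape = (ch, h, w)
        self.embed = nn.Embedding(num_frames, 128)
        self.mlp = nn.Sequential(
            nn.Linear(128, 512), nn.GELU(),
            nn.Linear(512, ch * h * w))

    def forward(self, t):                     # t: (B,) long tensor of frame indices
        return self.mlp(self.embed(t)).view(-1, *self.shape)

def reconstruct(inr, sr_net, t):
    """Coarse INR output followed by SR; sr_net can be any pre-trained SR model."""
    coarse = inr(t)                           # low-frequency content from the INR
    return sr_net(coarse)                     # SR network adds high-frequency detail

# Usage with a bilinear upsampler standing in for a real SR network:
inr = CoarseINR(num_frames=100)
sr_net = lambda x: F.interpolate(x, scale_factor=4, mode="bilinear", align_corners=False)
frame = reconstruct(inr, sr_net, torch.tensor([0]))
print(frame.shape)                            # torch.Size([1, 3, 256, 256])
```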
Abstract: Implicit neural representations (INRs) embed various signals into networks. They have gained attention in recent years because of their versatility in handling diverse signal types. For video, INRs enable compression by embedding the video signal into a network and then compressing the network itself. Conventional methods feed the network either an index expressing the time of each frame or features extracted from each frame. The latter offers greater expressive capability because the input is specific to each video. However, features extracted from frames often contain redundancy, which runs counter to the goal of video compression. Moreover, because frame time information is not explicitly provided to the network, learning the relationships between frames is challenging. To address these issues, we propose a video representation method that reduces feature redundancy by extracting features from the high-frequency components of each frame and that uses feature differences between adjacent frames so the network can learn inter-frame relationships smoothly. Experimental results show that our method outperforms the existing HNeRV method on 90 percent of the videos.
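As a rough illustration only, the sketch below shows the two ingredients this abstract names: features derived from the high-frequency part of each frame, and differences of those features between adjacent frames. The high-pass filter (a box-blur residual), the toy convolutional encoder, and all shapes are assumptions; the paper's actual feature extractor is not specified here.

```python
# Rough sketch with assumed components: high-frequency content is taken as the
# residual of a box blur, a small convolutional encoder turns it into per-frame
# features, and differences between adjacent frames' features expose the
# temporal relationship to the network. Kernel size and encoder are not taken
# from the paper.
import torch
import torch.nn.functional as F

def high_frequency(frames, ksize=5):
    """Frame minus its low-pass (box-blurred) version."""
    pad = ksize // 2
    low = F.avg_pool2d(F.pad(frames, [pad] * 4, mode="reflect"), ksize, stride=1)
    return frames - low

def frame_features(frames, encoder):
    """Features of the high-frequency component, plus adjacent-frame differences."""
    feats = encoder(high_frequency(frames))                         # (T, C', H', W')
    diffs = feats[1:] - feats[:-1]                                   # difference to previous frame
    diffs = torch.cat([torch.zeros_like(feats[:1]), diffs], dim=0)   # first frame has no predecessor
    return feats, diffs

# Usage with a toy encoder:
frames = torch.rand(8, 3, 128, 128)            # (T, C, H, W) video clip
encoder = torch.nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1)
feats, diffs = frame_features(frames, encoder)
print(feats.shape, diffs.shape)                # both torch.Size([8, 16, 64, 64])
```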