Yueyu Hu

Learning Neural Volumetric Field for Point Cloud Geometry Compression

Dec 11, 2022
Yueyu Hu, Yao Wang

Due to the diverse sparsity, high dimensionality, and large temporal variation of dynamic point clouds, designing an efficient compression method for them remains a challenge. We propose to code the geometry of a given point cloud by learning a neural volumetric field. Instead of representing the entire point cloud with a single overfit network, we divide the space into small cubes and represent each non-empty cube by a neural network and an input latent code. The network is shared among all the cubes in a single frame or across multiple frames, to exploit spatial and temporal redundancy. The neural field representation of the point cloud comprises the network parameters and all the latent codes, which are obtained by back-propagation with respect to both the network parameters and the input latent codes. By including in the loss function the entropy of the network parameters and the latent codes as well as the distortion between the original and reconstructed cubes, we derive a rate-distortion (R-D) optimal representation. Experimental results show that the proposed coding scheme achieves superior R-D performance compared to the octree-based G-PCC, especially when applied to multiple frames of a point cloud video. The code is available at https://github.com/huzi96/NVFPCC/.

* In Proceedings of 2022 Picture Coding Symposium (PCS) 
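
To make the R-D objective above concrete, here is a minimal PyTorch sketch of fitting per-cube latent codes and a shared decoder to occupancy grids. All names (CubeDecoder, rd_loss, the rate surrogate) are illustrative assumptions, not the released NVFPCC code; in particular, the paper uses a learned entropy model over the quantized latents and network parameters rather than the simple L2 rate proxy shown here.

```python
import torch
import torch.nn as nn

class CubeDecoder(nn.Module):
    """Shared network: maps a latent code to occupancy logits for one cube."""
    def __init__(self, latent_dim=128, cube=32):
        super().__init__()
        self.cube = cube
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, cube ** 3),
        )

    def forward(self, z):
        return self.net(z).view(-1, self.cube, self.cube, self.cube)

decoder = CubeDecoder()
latents = nn.Parameter(torch.randn(64, 128))   # one latent per non-empty cube
opt = torch.optim.Adam([*decoder.parameters(), latents], lr=1e-3)

def rd_loss(logits, occupancy, z, lam=0.01):
    # Distortion: cross-entropy between reconstructed and original occupancy.
    distortion = nn.functional.binary_cross_entropy_with_logits(logits, occupancy)
    # Rate: a stand-in surrogate; the paper entropy-codes quantized latents
    # and network parameters with a learned entropy model.
    rate = z.pow(2).mean()
    return distortion + lam * rate

occupancy = (torch.rand(64, 32, 32, 32) > 0.99).float()   # toy target cubes
loss = rd_loss(decoder(latents), occupancy, latents)
loss.backward()
opt.step()
```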

Learning to Predict on Octree for Scalable Point Cloud Geometry Coding

Sep 06, 2022
Yixiang Mao, Yueyu Hu, Yao Wang

Octree-based point cloud representation and compression have been adopted by the MPEG G-PCC standard. However, the standard uses only handcrafted methods to predict the probability that a leaf node is non-empty, a probability that is then used for entropy coding. We propose a novel approach to predicting these probabilities for geometry coding, which applies a denoising neural network to a "noisy" context cube that includes both neighboring decoded voxels and uncoded voxels. We further propose a convolution-based model to upsample the coarse-resolution decoded point cloud on the decoder side. Integrating the two approaches significantly improves the rate-distortion performance of geometry coding for dense point clouds, compared to the original G-PCC standard and other baseline methods. The proposed octree-based entropy coding approach is naturally scalable, which is desirable for dynamic rate adaptation in point cloud streaming systems.

* Accepted and presented at IEEE MIPR conference 
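
As a rough illustration of the entropy-coding idea above, the sketch below (PyTorch) predicts the occupancy probability of voxels from a context cube and converts it to a code length. The network shape, context size, and the placeholder value for uncoded voxels are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class OccupancyPredictor(nn.Module):
    """3D CNN mapping a 'noisy' context cube (decoded neighbors plus uncoded
    voxels filled with a placeholder value) to occupancy probabilities."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, 3, padding=1),
        )

    def forward(self, context):               # context: (N, 1, D, D, D)
        return torch.sigmoid(self.net(context))

# Assumed convention: uncoded voxels marked 0.5, decoded ones 0 or 1.
context = torch.full((1, 1, 9, 9, 9), 0.5)
p = OccupancyPredictor()(context)
occ = torch.zeros_like(p)                     # toy ground-truth occupancy
# Cross-entropy in bits = the code length an arithmetic coder would spend.
bits = -(occ * torch.log2(p) + (1 - occ) * torch.log2(1 - p)).sum()
```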

Neural Data-Dependent Transform for Learned Image Compression

Mar 30, 2022
Dezhao Wang, Wenhan Yang, Yueyu Hu, Jiaying Liu

Learned image compression has achieved great success thanks to its excellent modeling capacity, but it rarely considers Rate-Distortion Optimization (RDO) for each individual input image. To explore this potential in learned codecs, we make the first attempt to build a neural data-dependent transform and introduce a continuous online mode decision mechanism that jointly optimizes coding efficiency for each input image. Specifically, apart from the image content stream, we employ an additional model stream to generate the transform parameters on the decoder side. The model stream enables our model to learn a more abstract neural-syntax, which helps cluster the latent representations of images more compactly. Beyond the transform stage, we also adopt neural-syntax-based post-processing for scenarios that require higher-quality reconstructions regardless of the extra decoding overhead. Moreover, the model stream makes it possible to optimize both the representation and the decoder online, i.e., to perform RDO at test time. This is equivalent to a continuous online mode decision, analogous to the coding modes of traditional codecs, that improves coding efficiency based on the individual input image. Experimental results show the effectiveness of the proposed neural-syntax design and the continuous online mode decision mechanism, demonstrating the superiority of our method in coding efficiency over the latest conventional standard, Versatile Video Coding (VVC), and other state-of-the-art learning-based methods.

* Accepted by CVPR 2022. Project page: https://dezhao-wang.github.io/Neural-Syntax-Website/ 
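
The online RDO idea can be pictured as test-time gradient descent on the R-D loss of a single image, as in the hedged sketch below. `encoder`, `decoder`, and `entropy_model` are placeholders for the codec's components, and the joint update of the transmitted model-stream parameters is omitted for brevity.

```python
import torch

def online_rdo(encoder, decoder, entropy_model, image, steps=100, lam=0.01):
    """Refine the latent for one image at encoding time (a hedged sketch)."""
    y = encoder(image).detach().requires_grad_(True)   # start from encoder output
    opt = torch.optim.Adam([y], lr=1e-3)
    for _ in range(steps):
        rate = entropy_model(y)                        # estimated bits for y
        distortion = torch.mean((decoder(y) - image) ** 2)
        loss = distortion + lam * rate
        opt.zero_grad()
        loss.backward()
        opt.step()
    return y.detach()
```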

Towards Low Light Enhancement with RAW Images

Dec 28, 2021
Haofeng Huang, Wenhan Yang, Yueyu Hu, Jiaying Liu, Ling-Yu Duan

In this paper, we make the first benchmark effort to elaborate on the superiority of using RAW images for low-light enhancement, and we develop a novel alternative route to utilize RAW images in a more flexible and practical way. Motivated by a full consideration of the typical image processing pipeline, we develop a new evaluation framework, the Factorized Enhancement Model (FEM), which decomposes the properties of RAW images into measurable factors and provides a tool for empirically exploring how these properties affect enhancement performance. The benchmark results show that the linearity of the data and the exposure time recorded in the metadata play the most critical roles, bringing distinct performance gains in various measures over approaches that take sRGB images as input. With these insights in mind, we develop a RAW-guiding Exposure Enhancement Network (REENet), which trades off the advantages and inaccessibility of RAW images in real applications by using RAW images only in the training phase. REENet projects sRGB images into linear RAW domains and applies constraints against the corresponding RAW images to reduce the difficulty of training. In the testing phase, REENet does not rely on RAW images. Experimental results demonstrate not only the superiority of REENet over state-of-the-art sRGB-based methods but also the effectiveness of the RAW guidance and of each component.
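
The projection of sRGB images into a linear domain can be illustrated with the standard inverse sRGB transfer function below; how REENet parameterizes this projection, and how exposure time enters, are details of the paper rather than of this sketch.

```python
import numpy as np

def srgb_to_linear(srgb):
    """Invert the standard sRGB transfer function; values in [0, 1]."""
    srgb = np.asarray(srgb, dtype=np.float64)
    return np.where(srgb <= 0.04045,
                    srgb / 12.92,
                    ((srgb + 0.055) / 1.055) ** 2.4)
```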

Video Coding for Machine: Compact Visual Representation Compression for Intelligent Collaborative Analytics

Oct 18, 2021
Wenhan Yang, Haofeng Huang, Yueyu Hu, Ling-Yu Duan, Jiaying Liu

Video Coding for Machines (VCM) aims to bridge the largely separate research tracks of video/image compression and feature compression, and to jointly optimize compactness and efficiency from a unified perspective of high-accuracy machine vision and full-fidelity human vision. In this paper, we summarize the VCM methodology and philosophy based on existing academic and industrial efforts. The development of VCM follows a general rate-distortion optimization framework, and we establish a categorization of its key modules and techniques. Previous works demonstrate that, although existing methods attempt to reveal the nature of scalable bit representations for machine and human vision tasks, studies on the generality of low-bit-rate representations, and accordingly on how to support a variety of visual analytics tasks, remain rare. Therefore, we investigate a novel visual information compression problem for the analytics taxonomy, to strengthen the capability of compact visual representations extracted from multiple tasks for visual analytics. We revisit the relationship between tasks and compression from a new perspective. Keeping in mind the transferability among different machine vision tasks (e.g., high-level semantic and mid-level geometry-related tasks), we aim to support multiple tasks jointly at low bit rates. In particular, to narrow the dimensionality gap between features extracted from pixels by neural networks and the variety of machine vision features/labels (e.g., scene classes, segmentation labels), we design a codebook hyperprior to compress the neural-network-generated features. As demonstrated in our experiments, this new hyperprior model improves feature compression efficiency by estimating the signal entropy more accurately, which enables further investigation of the granularity of abstracting compact features across different tasks.

* The first three authors had equal contribution. arXiv admin note: text overlap with arXiv:2106.08512 
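
The joint multi-task objective described above can be sketched as follows: one compressed feature feeds several task heads, and the estimated rate is traded off against the sum of task losses. `feature_codec`, the heads, and the weighting are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def vcm_loss(feature_codec, task_heads, criteria, image, targets, lam=0.01):
    """Rate + multi-task distortion for one training step (a hedged sketch)."""
    y, rate = feature_codec(image)      # compact feature and its estimated bits
    loss = lam * rate
    for head, criterion, target in zip(task_heads, criteria, targets):
        loss = loss + criterion(head(y), target)
    return loss
```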

Revisit Visual Representation in Analytics Taxonomy: A Compression Perspective

Jun 16, 2021
Yueyu Hu, Wenhan Yang, Haofeng Huang, Jiaying Liu

Visual analytics have played an increasingly critical role in the Internet of Things, where massive visual signals have to be compressed and fed into machines. However, in the face of such big data and constrained bandwidth, existing image/video compression methods yield very low-quality representations, while existing feature compression techniques fail to support diversified visual analytics applications/tasks with low-bit-rate representations. In this paper, we raise and study the novel problem of supporting multiple machine vision analytics tasks with a single compressed visual representation, namely, the information compression problem in analytics taxonomy. By utilizing the intrinsic transferability among different tasks, our framework constructs compact and expressive representations at low bit rates that support a diversified set of machine vision tasks, including both high-level semantic tasks and mid-level geometry analytics tasks. To impose compactness on the representations, we propose a codebook-based hyperprior, which helps map the representation onto a low-dimensional manifold. As it fits the signal structure of deep visual features well, it facilitates more accurate entropy estimation and results in higher compression efficiency. With the proposed framework and the codebook-based hyperprior, we further investigate the relationships among task features with different levels of abstraction granularity. Experimental results demonstrate that with the proposed scheme, a diversified set of tasks can be supported at a significantly lower bit rate than with existing compression schemes.
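
A codebook-based hyperprior can be pictured as snapping the hyper-latent to its nearest entry in a learned codebook, which constrains it to a low-dimensional manifold. The shapes, the straight-through gradient trick, and the module name in the sketch below are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class CodebookHyperprior(nn.Module):
    def __init__(self, num_codes=256, dim=64):
        super().__init__()
        self.codebook = nn.Parameter(torch.randn(num_codes, dim))

    def forward(self, h):                        # h: (N, dim) hyper-latents
        dists = torch.cdist(h, self.codebook)    # (N, num_codes) distances
        idx = dists.argmin(dim=1)                # indices to be entropy-coded
        q = self.codebook[idx]                   # quantized hyper-latents
        # Straight-through estimator: forward uses q, gradients flow to h.
        return h + (q - h).detach(), idx
```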

Learning End-to-End Lossy Image Compression: A Benchmark

Feb 19, 2020
Yueyu Hu, Wenhan Yang, Zhan Ma, Jiaying Liu

Image compression is one of the most fundamental and widely used techniques in the image and video processing field. Earlier methods built a well-designed pipeline, and efforts were made to improve each of its modules by handcrafted tuning. Later, tremendous contributions were made, especially when data-driven methods revitalized the domain with their excellent modeling capacity and flexibility in incorporating newly designed modules and constraints. Despite this great progress, a systematic benchmark and comprehensive analysis of end-to-end learned image compression methods have been lacking. In this paper, we first conduct a comprehensive literature survey of learned image compression methods. The literature is organized by the aspects used to jointly optimize rate-distortion performance with a neural network, i.e., network architecture, entropy model, and rate control. We describe milestones in cutting-edge learned image compression methods, review a broad range of existing works, and provide insights into their historical development routes. With this survey, we reveal the main challenges of image compression methods, along with opportunities to address the related issues with recent advanced learning methods. This analysis provides an opportunity to take a further step towards higher-efficiency image compression. By introducing a coarse-to-fine hyperprior model for entropy estimation and signal reconstruction, we achieve improved rate-distortion performance, especially on high-resolution images. Extensive benchmark experiments demonstrate the superiority of our model in coding efficiency and its potential for acceleration on large-scale parallel computing devices.

* https://huzi96.github.io/compression-bench.html 
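
The coarse-to-fine hyperprior can be pictured as a stack of hyper-latents, each conditioning the entropy parameters of the level below. The layer shapes in this PyTorch sketch are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class CoarseToFineEntropy(nn.Module):
    """Two hyper levels: z2 conditions z1's entropy model, z1 conditions y's."""
    def __init__(self, c=128):
        super().__init__()
        self.enc1 = nn.Conv2d(c, c, 3, stride=2, padding=1)              # y  -> z1
        self.enc2 = nn.Conv2d(c, c, 3, stride=2, padding=1)              # z1 -> z2
        self.dec2 = nn.ConvTranspose2d(c, 2 * c, 4, stride=2, padding=1)
        self.dec1 = nn.ConvTranspose2d(c, 2 * c, 4, stride=2, padding=1)

    def forward(self, y):
        z1 = self.enc1(y)
        z2 = self.enc2(z1)
        mu1, scale1 = self.dec2(z2).chunk(2, dim=1)    # entropy params for z1
        mu_y, scale_y = self.dec1(z1).chunk(2, dim=1)  # entropy params for y
        return (mu_y, scale_y), (mu1, scale1)
```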

Towards Coding for Human and Machine Vision: A Scalable Image Coding Approach

Jan 10, 2020
Yueyu Hu, Shuai Yang, Wenhan Yang, Ling-Yu Duan, Jiaying Liu

The past decades have witnessed the rapid development of image and video coding techniques in the era of big data. However, the signal-fidelity-driven design of existing image/video coding frameworks limits their ability to serve both machine and human vision. In this paper, we propose a novel image coding framework that leverages both compressive and generative models to support machine vision and human perception tasks jointly. Given an input image, feature analysis is first applied, and a generative model is then employed to reconstruct the image from the features and additional reference pixels; compact edge maps are extracted in this work to connect the two kinds of vision in a scalable way. The compact edge map serves as the base layer for machine vision tasks, and the reference pixels act as an enhancement layer that guarantees signal fidelity for human vision. By introducing advanced generative models, we train a flexible network to reconstruct images from the compact feature representation and the reference pixels. Experimental results demonstrate the superiority of our framework in both human visual quality and facial landmark detection, providing useful evidence for the emerging MPEG VCM (Video Coding for Machines) standardization efforts.

* Project page: https://williamyang1991.github.io/projects/VCM-Face/ 
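
The scalable decode path described above can be sketched as follows: decode the base layer (the edge map) for analysis tasks and, when pixels are wanted, feed the edges plus sparse reference pixels to a generative network. The function and argument names are illustrative, not the paper's code.

```python
import torch

def scalable_decode(edge_map, ref_pixels, generator, want_pixels=True):
    """Base layer alone serves machine vision; the enhancement layer adds
    reference pixels so a generator can reconstruct a full image."""
    if not want_pixels:
        return edge_map        # machine vision consumes the edge map directly
    # Human vision: synthesize appearance conditioned on structure (edges)
    # and sparse color anchors (reference pixels).
    return generator(torch.cat([edge_map, ref_pixels], dim=1))
```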