Predicting the dynamic behavior of particles in suspension subject to hydrodynamic interaction (HI) and external drive can be critical for many applications. By harnessing advanced deep learning techniques, the present work introduces a new framework, the hydrodynamic interaction graph neural network (HIGNN), for inferring and predicting particle dynamics in Stokes suspensions. It overcomes the limitations of traditional approaches in computational efficiency, accuracy, and/or transferability. In particular, by uniting a graph data structure with neural networks of learnable parameters, the HIGNN constructs a surrogate model of the particles' mobility tensor, which is the key to predicting the dynamics of particles subject to HI and external forces. To account for the many-body nature of HI, we generalize the state-of-the-art GNN by introducing higher-order connectivity into the graph and the corresponding convolutional operation. Training the HIGNN requires data for only a small number of particles in the domain of interest, so the training cost remains low. Once constructed, the HIGNN permits fast prediction of the particles' velocities and is transferable to suspensions with different numbers/concentrations of particles in the same domain and to any external forcing. It accurately captures both the long-range HI and short-range lubrication effects. We demonstrate the accuracy, efficiency, and transferability of the proposed HIGNN framework on a variety of systems. The computing-resource requirement is minimal: most simulations require only a desktop with one GPU; simulations of a large suspension of 100,000 particles call for up to 6 GPUs.
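The mobility-tensor viewpoint above can be illustrated with a toy sketch (not the authors' HIGNN): particle velocities in a Stokes suspension follow V = M(X) F, where M depends on the particle configuration. A learned surrogate would replace the hand-coded pairwise kernel below with GNN messages on two-body edges and higher-order (e.g., three-body) connections; the 1/r far-field decay and all names here are illustrative assumptions only.

```python
import math

def pairwise_mobility(positions, self_mobility=1.0):
    """Toy scalar mobility matrix with 1/r far-field decay (placeholder
    for a learned GNN kernel)."""
    n = len(positions)
    M = [[0.0] * n for _ in range(n)]
    for i in range(n):
        M[i][i] = self_mobility
        for j in range(n):
            if i != j:
                r = math.dist(positions[i], positions[j])
                M[i][j] = 1.0 / r  # illustrative far-field kernel
    return M

def velocities(positions, forces):
    """V = M(X) F: velocities from configuration-dependent mobility."""
    M = pairwise_mobility(positions)
    return [sum(M[i][j] * forces[j] for j in range(len(forces)))
            for i in range(len(positions))]
```

Because M is rebuilt from the current positions at every call, a force on one particle moves all the others, which is the many-body coupling the HIGNN is trained to capture.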
Pursuing lossy image coding (LIC) with superior efficiency in both compression performance and computational throughput is challenging. The key underlying factor is how to intelligently exploit Adaptive Neighborhood Information Aggregation (ANIA) in the transform and entropy coding modules. To this end, an Integrated Convolution and Self-Attention (ICSA) unit is first proposed to form a content-adaptive transform that dynamically characterizes and embeds neighborhood information conditioned on the input. Then a Multistage Context Model (MCM) is developed to execute context prediction stage by stage, using only the necessary neighborhood elements for accurate and parallel entropy probability estimation. Both ICSA and MCM are stacked under a Variational Auto-Encoder (VAE) architecture to derive a rate-distortion-optimized compact representation of the input image via end-to-end training. Our method reports superior compression performance, surpassing VVC Intra with $\approx$15% BD-rate improvement averaged across the Kodak, CLIC, and Tecnick datasets, and also demonstrates $\approx$10$\times$ faster image decoding compared with other notable learned LIC approaches. All materials are made publicly accessible at https://njuvision.github.io/TinyLIC for reproducible research.
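The idea behind stagewise context prediction can be sketched in miniature (this is an illustrative simplification, not the paper's exact MCM): latent positions are partitioned into decoding stages, and all elements in stage k are entropy-coded in parallel, each conditioned only on elements from stages < k. The two-stage checkerboard split below is a common such partition and is assumed here for illustration.

```python
def stage_of(y, x):
    """Checkerboard partition: 'anchor' cells decode first, the rest second."""
    return 0 if (y + x) % 2 == 0 else 1

def decode_order(h, w):
    """Group latent positions by decoding stage; within a stage, all
    positions can be processed in parallel."""
    stages = {0: [], 1: []}
    for y in range(h):
        for x in range(w):
            stages[stage_of(y, x)].append((y, x))
    return stages
```

A fully autoregressive model needs h*w sequential steps; a k-stage partition needs only k, which is where the decoding speedup comes from.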
Image signal processing (ISP) is crucial for camera imaging, and neural network (NN) solutions are extensively deployed for daytime scenes. The lack of a sufficient nighttime image dataset and of insights into nighttime illumination characteristics poses a great challenge for high-quality rendering with existing NN ISPs. To tackle this, we first build a high-resolution nighttime RAW-RGB (NR2R) dataset with white balance and tone mapping annotated by expert professionals. Meanwhile, to best capture the characteristics of nighttime illumination sources, we develop CBUnet, a two-stage NN ISP that cascades the compensation of color and brightness attributes. Experiments show that our method achieves better visual quality than the traditional ISP pipeline, and it ranked second in both tracks of the NTIRE 2022 Night Photography Rendering Challenge by the People's and the Professional Photographer's choices, respectively. The code and relevant materials are available on our website: https://njuvision.github.io/CBUnet.
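The color-then-brightness cascade can be illustrated with a minimal per-pixel sketch. In CBUnet both stages are learned networks; the fixed per-channel gains and gamma curve below are hypothetical stand-ins used only to show the two-stage structure.

```python
def white_balance(pixel, gains):
    """Stage 1 (color): per-channel gain correction, clipped to [0, 1]."""
    return tuple(min(1.0, c * g) for c, g in zip(pixel, gains))

def tone_map(pixel, gamma=1 / 2.2):
    """Stage 2 (brightness): simple gamma tone curve."""
    return tuple(c ** gamma for c in pixel)

def render(pixel, gains=(2.0, 1.0, 1.5)):
    """Cascade: correct color first, then brightness."""
    return tone_map(white_balance(pixel, gains))
```

Separating the two stages lets each one specialize: the first handles the color cast of nighttime light sources, the second the brightness distribution.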
The event camera is a bio-inspired vision sensor with high dynamic range, high response speed, and low power consumption, and it has recently attracted extensive attention for a vast range of vision tasks. Unlike conventional cameras that output intensity frames at fixed time intervals, an event camera records pixel brightness changes (a.k.a. events) asynchronously (in time) and sparsely (in space). Existing methods often aggregate the events occurring within a predefined temporal window for downstream tasks, which overlooks the varying behaviors of fine-grained temporal events. This work proposes the Event Transformer to process the event sequence directly in its native vectorized tensor format. It cascades a Local Transformer (LXformer) for exploiting local temporal correlation, a Sparse Conformer (SCformer) for embedding local spatial similarity, and a Global Transformer (GXformer) for further aggregating global information, in a serial manner, to effectively characterize the time and space correlations of the raw input events and generate effective spatiotemporal features for downstream tasks. Experimental studies have been conducted extensively in comparison with fourteen existing algorithms on five datasets widely used for classification. Quantitative results report the state-of-the-art classification accuracy and the lowest computational resource requirements of the Event Transformer, making it practically attractive for event-based vision tasks.
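The contrast between fixed-window aggregation and the native event sequence can be sketched as follows. The `(t, x, y, polarity)` tuple format is the standard event representation; the windowing function below illustrates what prior methods do and what information it discards (everything here is illustrative, not the paper's code).

```python
def aggregate_fixed_window(events, window):
    """Bucket events into fixed temporal windows, as prior frame-based
    methods do; fine-grained timing inside each window is lost."""
    frames = {}
    for t, x, y, p in events:
        frames.setdefault(int(t // window), []).append((x, y, p))
    return frames

# Native sequence: each event keeps its exact timestamp, which is what
# the Event Transformer consumes directly.
events = [(0.1, 1, 1, 1), (0.4, 1, 2, -1), (1.2, 0, 0, 1)]
```

Note that after windowing, the 0.3 s gap between the first two events is indistinguishable from simultaneity, which is precisely the fine-grained temporal behavior the sequence-based approach preserves.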
Recently, numerous learning-based compression methods have been developed with outstanding performance for coding the geometry information of point clouds. In contrast, limited exploration has been devoted to point cloud attribute compression (PCAC). This study therefore focuses on PCAC, applying sparse convolution because of its superior efficiency in representing the geometry of unorganized points. The proposed method simply stacks sparse convolutions to construct a variational autoencoder (VAE) framework that compresses the color attributes of a given point cloud. To better encode the latent elements at the bottleneck, we apply an adaptive entropy model that jointly utilizes a hyper prior and autoregressive neighbors to accurately estimate the bit rate. In qualitative measurements, the proposed method already rivals the latest G-PCC (TMC13) version 14 at similar bit rates; quantitatively, it shows clear improvements over G-PCC version 6 and largely outperforms existing learning-based methods, suggesting encouraging potential for learned PCAC.
End-to-end learned lossy image coders, as opposed to hand-crafted image codecs, have shown increasing superiority in rate-distortion performance. However, they are mainly treated as black-box systems, and their interpretability has not been well studied. In this paper, we investigate learned image coders from the perspective of linear transform coding by measuring their channel response and linearity. For different learned image coder designs, we show that their end-to-end learned non-linear transforms share similar properties with linear orthogonal transformations. Our analysis provides insights into how learned image coders work and could benefit future design and development.
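A linearity measurement of the kind described can be sketched minimally: if a transform f were linear, f(a·x) would equal a·f(x) for any scale a, so the gap between the two quantifies deviation from linearity. The toy transforms below are illustrative; a real study would probe a trained encoder.

```python
def linearity_gap(f, x, scale):
    """Max elementwise deviation between f(scale*x) and scale*f(x);
    zero iff f is homogeneous of degree 1 on this input."""
    fx_scaled = f([scale * v for v in x])
    scaled_fx = [scale * v for v in f(x)]
    return max(abs(a - b) for a, b in zip(fx_scaled, scaled_fx))

linear_f = lambda x: [2 * v for v in x]   # exactly linear: gap is zero
nonlin_f = lambda x: [v * v for v in x]   # quadratic: gap grows with scale
```

Applying such a probe channel by channel is one way to measure how close a learned non-linear transform behaves to a linear one.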
Deep neural network based image compression has been extensively studied, but model robustness, though crucial for enabling services, is largely overlooked. We perform adversarial attacks by injecting a small amount of noise perturbation into original source images, and then encode these adversarial examples using prevailing learned image compression models. Experiments report severe distortion in the reconstruction of adversarial examples, revealing the general vulnerability of existing methods regardless of the settings of the underlying compression model (e.g., network architecture, loss function, quality scale) and the optimization strategy used for injecting the perturbation (e.g., noise threshold, signal distance measurement). We then apply iterative adversarial finetuning to refine the pretrained models: in each iteration, random source images and adversarial examples are mixed to update the underlying model. Results show the effectiveness of the proposed finetuning strategy, which substantially improves compression model robustness. Overall, our methodology is simple, effective, and generalizable, making it attractive for developing robust learned image compression solutions. All materials have been made publicly accessible at https://njuvision.github.io/RobustNIC for reproducible research.
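The iterative finetuning recipe can be sketched on a toy problem: in each round, craft a worst-case perturbed input against the current model, then update on a mix of clean and adversarial samples. The scalar least-squares "model" below stands in for a compression network, and every hyperparameter is an illustrative assumption; only the mix-and-update loop structure mirrors the described strategy.

```python
def finetune(data, w=0.0, eps=0.1, lr=0.05, rounds=50):
    """Toy adversarial finetuning of a scalar model y ~ w*x."""
    for _ in range(rounds):
        batch = []
        for x, y in data:
            err = w * x - y
            # Worst-case nudge of the input (sign of the input-gradient):
            x_adv = x + eps * (1 if err * w >= 0 else -1)
            batch += [(x, y), (x_adv, y)]          # mix clean + adversarial
        for xb, yb in batch:
            w -= lr * (w * xb - yb) * xb           # gradient step on MSE
    return w
```

Training on the mixed batch keeps performance on clean inputs while flattening the loss around each sample, which is the mechanism behind the robustness gain.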
This study develops a unified Point Cloud Geometry (PCG) compression method, dubbed SparsePCGC, through Sparse Tensor Processing (STP) based multiscale representation of voxelized PCG. Applying STP significantly reduces complexity because convolutions are performed only at Most-Probable Positively-Occupied Voxels (MP-POVs), and the multiscale representation lets us compress MP-POVs progressively, scale by scale. The overall compression efficiency depends strongly on the accuracy with which the occupancy probability of each MP-POV is approximated. We therefore design Sparse Convolution based Neural Networks (SparseCNN), consisting of sparse convolutions and voxel re-sampling, to extensively exploit priors. We then develop the SparseCNN based Occupancy Probability Approximation (SOPA) model to estimate occupancy probability, either in a single stage using only the cross-scale prior or in multiple stages by step-wise use of autoregressive neighbors. In addition, we propose SparseCNN based Local Neighborhood Embedding (SLNE) to characterize local spatial variations as feature attributes that improve the SOPA. Our unified approach shows state-of-the-art performance in both lossless and lossy compression across a variety of datasets, including dense PCGs (8iVFB, Owlii) and sparse LiDAR PCGs (KITTI, Ford), when compared with MPEG G-PCC and other popular learning-based compression schemes. Furthermore, the proposed method offers lightweight complexity due to point-wise computation and a tiny storage footprint because the model is shared across all scales. We make all materials publicly accessible at https://github.com/NJUVISION/SparsePCGC for reproducible research.
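The multiscale mechanics can be shown in a few lines (an illustrative sketch, not the paper's networks): occupied voxel coordinates are coarsened by integer halving, and decoding proceeds coarse-to-fine, where at each scale the model must predict which of a parent voxel's 8 child positions are occupied.

```python
def downscale(voxels):
    """One coarser scale: integer-halve occupied coordinates, deduplicate."""
    return sorted({(x // 2, y // 2, z // 2) for x, y, z in voxels})

def children(parent):
    """The 8 candidate child positions whose occupancy a decoder must
    estimate when refining one occupied parent voxel."""
    px, py, pz = parent
    return [(2 * px + dx, 2 * py + dy, 2 * pz + dz)
            for dx in (0, 1) for dy in (0, 1) for dz in (0, 1)]
```

Since only children of occupied parents are candidates, the work at each scale is proportional to the number of occupied voxels, which is the point-wise complexity the abstract refers to.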
A Transformer-based Image Compression (TIC) approach is developed that reuses the canonical variational autoencoder (VAE) architecture with paired main and hyper encoder-decoders. Both the main and hyper encoders comprise a sequence of neural transformation units (NTUs) that analyse and aggregate important information for a more compact representation of the input image, while the decoders mirror the encoder-side operations to reconstruct the pixel-domain image from the compressed bitstream. Each NTU consists of a Swin Transformer Block (STB) and a convolutional layer (Conv) to best embed both long-range and short-range information; in the meantime, a causal attention module (CAM) is devised for adaptive context modeling of latent features, utilizing both hyper and autoregressive priors. TIC rivals state-of-the-art approaches, including deep convolutional neural network (CNN) based learned image coding (LIC) methods and the handcrafted rules-based intra profile of the recently approved Versatile Video Coding (VVC) standard, while requiring far fewer model parameters, e.g., up to 45% reduction relative to the leading-performance LIC.
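The causal constraint underlying the CAM can be illustrated with a plain attention mask (the CAM itself is a learned module; this only shows the masking principle): position i may attend only to positions j ≤ i, so a latent element is never conditioned on elements that have not yet been decoded.

```python
def causal_mask(n):
    """Lower-triangular attention mask: 1 = may attend, 0 = masked out."""
    return [[1 if j <= i else 0 for j in range(n)] for i in range(n)]
```

Applying such a mask inside attention is what makes the autoregressive prior usable at decode time, since the decoder can reproduce exactly the context the encoder used.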
Point cloud compression (PCC) has made remarkable achievements in recent years. In the meantime, point cloud quality assessment (PCQA) has also seen gratifying development, and some recently emerged metrics show robust performance on public point cloud assessment databases. However, these metrics have not been evaluated specifically for PCC, to verify whether their judgments are consistent with subjective perception. In this paper, we first establish a new dataset for compression evaluation, which contains 175 compressed point clouds in total, derived from 7 compression algorithms at 5 compression levels. Then, leveraging the proposed dataset, we evaluate the performance of existing PCQA metrics across different compression types. The results reveal some deficiencies of existing metrics in compression evaluation.
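Evaluations of this kind typically score a metric by how well it rank-correlates with subjective scores, e.g. via the Spearman rank-order correlation coefficient (SROCC). A minimal pure-Python version without tie handling, shown for illustration of the protocol rather than as the paper's exact procedure:

```python
def ranks(values):
    """Rank of each value in ascending order (no tie handling)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

def srocc(metric_scores, subjective_scores):
    """Spearman rank-order correlation between metric and subjective scores."""
    rx, ry = ranks(metric_scores), ranks(subjective_scores)
    n = len(rx)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))
```

An SROCC near 1 means the metric orders the compressed point clouds the same way human viewers do; computing it per compression type is how type-specific deficiencies show up.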