Conventional cameras capture image irradiance on a sensor and convert it to RGB images using an image signal processor (ISP). The images can then be used for photography or visual computing tasks in a variety of applications, such as public safety surveillance and autonomous driving. One can argue that since RAW images contain all the captured information, the conversion of RAW to RGB using an ISP is not necessary for visual computing. In this paper, we propose a novel $\rho$-Vision framework to perform high-level semantic understanding and low-level compression using RAW images, without the ISP subsystem that has been used for decades. Considering the scarcity of available RAW image datasets, we first develop an unpaired CycleR2R network based on unsupervised CycleGAN to train modular unrolled ISP and inverse ISP (invISP) models using unpaired RAW and RGB images. We can then flexibly generate simulated RAW images (simRAW) from any existing RGB image dataset and finetune models originally trained for the RGB domain to process real-world camera RAW images. We demonstrate object detection and image compression in the RAW domain using a RAW-domain YOLOv3 and a RAW image compressor (RIC) on snapshots from various cameras. Quantitative results reveal that RAW-domain task inference provides better detection accuracy and compression efficiency than RGB-domain processing. Furthermore, the proposed $\rho$-Vision generalizes across various camera sensors and different task-specific models. By eliminating the ISP, $\rho$-Vision also offers potential reductions in computation and processing time.
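A minimal sketch of the unpaired RAW-to-RGB cycle idea behind CycleR2R, assuming tiny stand-in conv nets for the unrolled ISP and invISP; the actual modular architecture, adversarial discriminators, and loss weights from the paper are omitted here.

```python
# Cycle-consistency training on unpaired RAW and RGB batches (illustrative).
# TinyMapper is a hypothetical stand-in for the paper's unrolled ISP/invISP.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU())

class TinyMapper(nn.Module):
    """Stand-in for the modular unrolled ISP (RAW->RGB) or invISP (RGB->RAW)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(3, 16), conv_block(16, 16),
                                 nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid())
    def forward(self, x):
        return self.net(x)

isp, inv_isp = TinyMapper(), TinyMapper()   # RAW->RGB and RGB->RAW
opt = torch.optim.Adam(list(isp.parameters()) + list(inv_isp.parameters()), lr=1e-4)
l1 = nn.L1Loss()

raw = torch.rand(4, 3, 64, 64)  # unpaired RAW batch (demosaiced to 3 channels here)
rgb = torch.rand(4, 3, 64, 64)  # unpaired RGB batch

# Cycle consistency: RAW -> simRGB -> RAW and RGB -> simRAW -> RGB.
# A full CycleGAN setup would add adversarial losses on both domains.
loss = l1(inv_isp(isp(raw)), raw) + l1(isp(inv_isp(rgb)), rgb)
opt.zero_grad(); loss.backward(); opt.step()
```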
Top-down methods dominate the field of 3D human pose and shape estimation, because they are decoupled from human detection and allow researchers to focus on the core problem. However, cropping, their first step, discards the location information from the very beginning, which makes them unable to accurately predict the global rotation in the original camera coordinate system. To address this problem, we propose to Carry Location Information in Full Frames (CLIFF) into this task. Specifically, we feed more holistic features to CLIFF by concatenating the cropped-image feature with its bounding box information. We calculate the 2D reprojection loss with a broader view of the full frame, taking a projection process similar to that of the person projected in the image. Fed and supervised by global-location-aware information, CLIFF directly predicts the global rotation along with more accurate articulated poses. In addition, we propose a pseudo-ground-truth annotator based on CLIFF, which provides high-quality 3D annotations for in-the-wild 2D datasets and offers crucial full supervision for regression-based methods. Extensive experiments on popular benchmarks show that CLIFF outperforms prior arts by a significant margin and ranks first on the AGORA leaderboard (the SMPL-Algorithms track). The code and data are available at https://github.com/huawei-noah/noah-research/tree/master/CLIFF.
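A minimal sketch of CLIFF's key conditioning step, assuming a hypothetical regressor head and output layout; the three-value bounding-box encoding (crop center and size normalized by focal length) follows the spirit of the paper, while layer sizes are illustrative assumptions.

```python
# Concatenate the cropped-image feature with bounding-box information so the
# regressor can recover the global rotation in the full-frame camera.
import torch
import torch.nn as nn

class CLIFFHead(nn.Module):
    def __init__(self, feat_dim=512, n_out=24 * 6 + 10 + 3):  # pose+shape+cam (assumed layout)
        super().__init__()
        self.regressor = nn.Sequential(
            nn.Linear(feat_dim + 3, 256), nn.ReLU(), nn.Linear(256, n_out))
    def forward(self, crop_feat, bbox_info):
        return self.regressor(torch.cat([crop_feat, bbox_info], dim=-1))

head = CLIFFHead()
crop_feat = torch.randn(2, 512)                # pooled feature of the cropped person
cx, cy, b, focal = 120.0, 80.0, 200.0, 1000.0  # crop center/size w.r.t. the full image
bbox_info = torch.tensor([[cx / focal, cy / focal, b / focal]]).repeat(2, 1)
params = head(crop_feat, bbox_info)            # global-location-aware SMPL parameters
print(params.shape)
```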
Considering the prevalence of the power-law distribution in user-item networks, hyperbolic space has attracted considerable attention and achieved impressive performance in recommender systems recently. The advantage of hyperbolic recommendation lies in the fact that its exponentially increasing capacity is well-suited to describing power-law distributed user-item networks, whereas the Euclidean equivalent is deficient. Nonetheless, it remains unclear which kinds of items can be effectively recommended by a hyperbolic model and which cannot. To address this concern, we take the most basic recommendation technique, collaborative filtering, as a medium to investigate the behaviors of hyperbolic and Euclidean recommendation models. The results reveal that (1) tail items receive more emphasis in hyperbolic space than in Euclidean space, but there is still ample room for improvement; (2) head items receive only modest attention in hyperbolic space, which could be considerably improved; and (3) nonetheless, hyperbolic models show more competitive performance than Euclidean models. Driven by these observations, we design a novel learning method, named hyperbolic informative collaborative filtering (HICF), which aims to improve the recommendation effectiveness for head items while at the same time improving performance on tail items. The main idea is to adapt hyperbolic margin ranking learning, making its pull and push procedures geometry-aware and providing informative guidance for the learning of both head and tail items. Extensive experiments back up the analytic findings and demonstrate the effectiveness of the proposed method. The work is valuable for personalized recommendation since it reveals that hyperbolic space facilitates modeling tail items, which often represent user-customized preferences or new products.
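A minimal sketch of hyperbolic margin ranking of the kind HICF adapts, assuming the Poincaré ball with curvature -1 and a fixed margin; HICF's actual geometry-aware pull/push adapts the margin and sampling, which this toy omits.

```python
# Pull a positive item toward the user and push a negative away, measured
# with the Poincare distance. Margin and scaling here are assumptions.
import torch

def poincare_dist(x, y, eps=1e-6):
    # Distance in the Poincare ball model (curvature -1).
    sq = torch.sum((x - y) ** 2, dim=-1)
    nx = torch.clamp(1 - torch.sum(x ** 2, dim=-1), min=eps)
    ny = torch.clamp(1 - torch.sum(y ** 2, dim=-1), min=eps)
    return torch.acosh(1 + 2 * sq / (nx * ny))

def margin_rank_loss(user, pos_item, neg_item, margin=0.5):
    d_pos = poincare_dist(user, pos_item)
    d_neg = poincare_dist(user, neg_item)
    return torch.clamp(d_pos - d_neg + margin, min=0).mean()

# Embeddings must stay inside the unit ball; scaling is a crude projection here.
u, p, n = [0.1 * torch.rand(8, 16) for _ in range(3)]
print(margin_rank_loss(u, p, n))
```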
Image signal processing (ISP) is crucial for camera imaging, and neural network (NN) solutions are extensively deployed for daytime scenes. The lack of sufficient nighttime image datasets and of insights into nighttime illumination characteristics poses a great challenge for high-quality rendering using existing NN ISPs. To tackle this, we first built a high-resolution nighttime RAW-RGB (NR2R) dataset with white balance and tone mapping annotated by expert professionals. Then, to best capture the characteristics of nighttime illumination sources, we developed CBUnet, a two-stage NN ISP that cascades the compensation of color and brightness attributes. Experiments show that our method achieves better visual quality than traditional ISP pipelines, and it ranked second in both tracks of the NTIRE 2022 Night Photography Rendering Challenge, as judged by the People's Choice and the Professional Photographer's Choice respectively. The code and relevant materials are available on our website: https://njuvision.github.io/CBUnet.
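A minimal sketch of the two-stage cascade in the spirit of CBUnet, with one stage compensating color (e.g., white balance) and the next compensating brightness (e.g., tone mapping); the tiny stand-in stages below are illustrative assumptions, not the paper's U-Net architecture.

```python
import torch
import torch.nn as nn

class TinyStage(nn.Module):
    """Hypothetical stand-in for one U-Net stage of CBUnet."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid())
    def forward(self, x):
        return self.net(x)

class TwoStageISP(nn.Module):
    def __init__(self):
        super().__init__()
        self.color_stage = TinyStage()       # stage 1: color correction
        self.brightness_stage = TinyStage()  # stage 2: brightness/tone mapping

    def forward(self, raw):
        color_corrected = self.color_stage(raw)
        return self.brightness_stage(color_corrected)

isp = TwoStageISP()
raw = torch.rand(1, 3, 128, 128)  # demosaiced nighttime RAW, normalized
rgb = isp(raw)
print(rgb.shape)
```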
The event camera is a bio-inspired vision sensor with high dynamic range, high response speed, and low power consumption, recently attracting extensive attention across a wide range of vision tasks. Unlike conventional cameras that output intensity frames at a fixed time interval, an event camera records pixel brightness changes (a.k.a. events) asynchronously (in time) and sparsely (in space). Existing methods often aggregate the events occurring within a predefined temporal duration for downstream tasks, which overlooks the varying behaviors of fine-grained temporal events. This work proposes the Event Transformer to directly process the event sequence in its native vectorized tensor format. It cascades a Local Transformer (LXformer) for exploiting local temporal correlation, a Sparse Conformer (SCformer) for embedding local spatial similarity, and a Global Transformer (GXformer) for aggregating global information in series, effectively characterizing the time and space correlations of the raw input events to generate spatiotemporal features for downstream tasks. Experimental studies have been conducted extensively in comparison with fourteen existing algorithms on five datasets widely used for classification. Quantitative results show that the Event Transformer achieves state-of-the-art classification accuracy with the lowest computational resource requirements, making it practically attractive for event-based vision tasks.
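A minimal sketch of a serial local-then-global attention cascade over raw events, loosely following the Event Transformer's staging; the real LXformer/SCformer/GXformer blocks differ, and the layer sizes and fixed windowing here are assumptions.

```python
import torch
import torch.nn as nn

events = torch.rand(1, 1024, 4)  # (batch, N events, [x, y, t, polarity])
embed = nn.Linear(4, 64)
local_attn = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
global_attn = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)

tokens = embed(events)  # (1, 1024, 64)

# "Local" stage: attend within short temporal windows of 128 events each.
windows = tokens.view(1 * 8, 128, 64)
local_out = local_attn(windows).view(1, 1024, 64)

# "Global" stage: attend across the whole sequence for long-range context.
features = global_attn(local_out)  # spatiotemporal features for the task head
print(features.shape)
```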
Graph neural networks generalize conventional neural networks to graph-structured data and have received widespread attention owing to their impressive representation ability. In spite of these remarkable achievements, the performance of Euclidean models on graph-related learning is still bounded by the representational capacity of Euclidean geometry, especially for datasets with a highly non-Euclidean latent geometry. Recently, hyperbolic space has gained increasing popularity for processing graph data with tree-like structure and power-law distribution, owing to its exponential growth property. In this survey, we comprehensively revisit the technical details of current hyperbolic graph neural networks (HGNNs), unifying them into a general framework and summarizing the variants of each component. More importantly, we present various HGNN-related applications. Finally, we identify several challenges that may serve as guidelines for further flourishing of graph learning in hyperbolic spaces.
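A minimal sketch of the general HGNN recipe such a framework unifies: map node features to the tangent space at the origin, aggregate there with a standard GNN step, then map back to the manifold. The Poincaré ball with curvature -1 and the toy adjacency are assumptions.

```python
import torch

def expmap0(v, eps=1e-6):
    # Tangent space at the origin -> Poincare ball.
    n = torch.clamp(v.norm(dim=-1, keepdim=True), min=eps)
    return torch.tanh(n) * v / n

def logmap0(x, eps=1e-6):
    # Poincare ball -> tangent space at the origin.
    n = torch.clamp(x.norm(dim=-1, keepdim=True), min=eps, max=1 - eps)
    return torch.atanh(n) * x / n

def hgnn_layer(x_hyp, adj, weight):
    h = logmap0(x_hyp) @ weight  # feature transform in tangent space
    h = adj @ h                  # neighborhood aggregation (Euclidean step)
    return expmap0(h)            # project back onto the manifold

x = expmap0(0.1 * torch.randn(5, 8))          # 5 nodes on the manifold
adj = torch.eye(5) + torch.rand(5, 5).round() # toy adjacency with self-loops
adj = adj / adj.sum(dim=1, keepdim=True)      # row-normalize
w = 0.1 * torch.randn(8, 8)
print(hgnn_layer(x, adj, w).shape)
```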
Instance segmentation in 3D scenes is fundamental to many applications of scene understanding. Yet it remains challenging due to the compound factors of data irregularity and uncertainty in the number of instances. State-of-the-art methods largely rely on a general pipeline that first learns point-wise features discriminative at the semantic and instance levels, followed by a separate step of point grouping to propose object instances. While promising, they have two shortcomings: (1) the second step is not supervised by the main objective of instance segmentation, and (2) their point-wise feature learning and grouping are less effective at dealing with data irregularities, possibly resulting in fragmented segmentations. To address these issues, we propose in this work an end-to-end solution, the Semantic Superpoint Tree Network (SSTNet), for proposing object instances from scene points. Key to SSTNet is an intermediate semantic superpoint tree (SST), which is constructed based on the learned semantic features of superpoints and which is traversed and split at intermediate tree nodes to propose object instances. We also design in SSTNet a refinement module, termed CliqueNet, to prune superpoints that may be wrongly grouped into instance proposals. Experiments on the ScanNet and S3DIS benchmarks show the efficacy of our proposed method. At the time of submission, SSTNet ranks first on the ScanNet (V2) leaderboard, with an mAP 2% higher than that of the second-best method. The source code in PyTorch is available at https://github.com/Gorilla-Lab-SCUT/SSTNet.
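A minimal sketch of the SST idea: agglomeratively merge superpoints by semantic feature similarity into a binary tree, then cut the tree to obtain instance proposals. The fixed distance threshold and clustering criterion are assumptions; SSTNet learns where to traverse and split with a network rather than using a fixed cut.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

superpoint_feats = np.random.rand(20, 16)  # learned semantic features per superpoint

# Build the tree bottom-up: the most semantically similar pairs merge first.
tree = linkage(superpoint_feats, method="average", metric="euclidean")

# Traverse/split: cutting at a distance threshold yields instance proposals,
# each a group of superpoints hanging under one subtree.
proposals = fcluster(tree, t=1.0, criterion="distance")
print("instance proposal per superpoint:", proposals)
```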
In recent years, the robotics community has made substantial progress in robotic manipulation using deep reinforcement learning (RL). Effective learning of long-horizon tasks, however, remains a challenging topic. Typical RL-based methods approximate long-horizon tasks as Markov decision processes and consider only the current observation (images or other sensor information) as the input state. However, such an approximation ignores the fact that the skill sequence also plays a crucial role in long-horizon tasks. In this paper, we take both the observation and the skill sequence into account and propose a skill-sequence-dependent hierarchical policy for solving a typical long-horizon task. The proposed policy consists of a high-level skill policy (utilizing skill sequences) and a low-level parameter policy (responding to the observation), with corresponding training methods that make learning much more sample-efficient. Experiments in simulation demonstrate that our approach successfully solves a long-horizon task and learns significantly faster than Proximal Policy Optimization (PPO) and task schema methods.
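A minimal sketch of the skill-sequence-dependent hierarchy: a high-level policy picks the next skill from the executed skill sequence, and a low-level policy outputs that skill's continuous parameters from the current observation. Network sizes, the skill set, and the start token are illustrative assumptions.

```python
import torch
import torch.nn as nn

N_SKILLS, OBS_DIM, PARAM_DIM = 4, 32, 6

class HighLevelSkillPolicy(nn.Module):
    """Consumes the skill sequence so far; outputs a distribution over next skills."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(N_SKILLS + 1, 32)  # +1 for a start token
        self.rnn = nn.GRU(32, 64, batch_first=True)
        self.head = nn.Linear(64, N_SKILLS)
    def forward(self, skill_seq):
        h, _ = self.rnn(self.embed(skill_seq))
        return torch.distributions.Categorical(logits=self.head(h[:, -1]))

class LowLevelParamPolicy(nn.Module):
    """Consumes the observation and chosen skill; outputs skill parameters."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM + N_SKILLS, 64), nn.ReLU(),
                                 nn.Linear(64, PARAM_DIM))
    def forward(self, obs, skill_id):
        one_hot = nn.functional.one_hot(skill_id, N_SKILLS).float()
        return self.net(torch.cat([obs, one_hot], dim=-1))

high, low = HighLevelSkillPolicy(), LowLevelParamPolicy()
seq = torch.tensor([[N_SKILLS, 0, 2]])        # start token + executed skills
skill = high(seq).sample()                    # next skill to execute
params = low(torch.randn(1, OBS_DIM), skill)  # its continuous parameters
print(skill.item(), params.shape)
```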
Category-level 6D object pose and size estimation predicts the 9-degrees-of-freedom (9DoF) pose configuration of rotation, translation, and size for object instances observed in single, arbitrary views of cluttered scenes. It extends previous related tasks by learning the two additional rotation angles. This seemingly small difference poses technical challenges due to learning and prediction in the full rotation space of SO(3). In this paper, we propose a new Dual Pose Network with refined learning of pose consistency for this task, shortened as DualPoseNet. DualPoseNet stacks two parallel pose decoders on top of a shared pose encoder, where the implicit decoder predicts object poses with a working mechanism different from that of the explicit one; they thus impose complementary supervision on the training of the pose encoder. We construct the encoder based on spherical convolutions and design a Spherical Fusion module for a better embedding of pose-sensitive features from the appearance and shape observations. Given no testing CAD models, it is the novel introduction of the implicit decoder that enables refined pose prediction during testing, by enforcing predicted pose consistency between the two decoders using a self-adaptive loss term. Thorough experiments on the benchmark 9DoF object pose datasets CAMERA25 and REAL275 confirm the efficacy of our designs. DualPoseNet outperforms existing methods by a large margin in the regime of high precision.
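A minimal sketch of DualPoseNet's dual-decoder structure: a shared encoder feeds an explicit decoder (direct pose regression) and an implicit decoder (canonical point reconstruction), whose agreement yields a consistency loss usable for test-time refinement. For brevity this toy predicts only translation and scale; rotation and the spherical-convolution encoder are omitted, and all layer sizes are assumptions.

```python
import torch
import torch.nn as nn

N = 256  # points per observation

class DualPoseSketch(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(N * 3, feat_dim), nn.ReLU())
        self.explicit = nn.Linear(feat_dim, 3 + 1)  # translation (3) + scale (1)
        self.implicit = nn.Linear(feat_dim, N * 3)  # canonical point reconstruction
    def forward(self, pts):
        f = self.encoder(pts.reshape(-1, N * 3))
        pose = self.explicit(f)
        canonical = self.implicit(f).reshape(-1, N, 3)
        return pose[:, :3], pose[:, 3:].exp(), canonical  # t, s > 0, canonical pts

model = DualPoseSketch()
obs = torch.randn(2, N, 3)
t, s, canonical = model(obs)

# Consistency: the explicit pose should map the observation onto the implicit
# decoder's canonical reconstruction; minimizing this refines the pose at test time.
consistency = nn.functional.mse_loss((obs - t[:, None]) / s[:, None], canonical)
print(t.shape, s.shape, consistency.item())
```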