Zhen Peng

Long-term Visual Localization with Mobile Sensors

Apr 16, 2023
Shen Yan, Yu Liu, Long Wang, Zehong Shen, Zhen Peng, Haomin Liu, Maojun Zhang, Guofeng Zhang, Xiaowei Zhou

Despite the remarkable advances in image matching and pose estimation, image-based localization of a camera in a temporally varying outdoor environment remains a challenging problem due to the large appearance disparity between query and reference images caused by illumination, seasonal, and structural changes. In this work, we propose to leverage additional sensors on a mobile phone, namely the GPS, compass, and gravity sensor, to solve this problem. We show that these mobile sensors provide decent initial poses and effective constraints that reduce the search space in image matching and final pose estimation. With the initial pose, we are also able to devise a direct 2D-3D matching network that efficiently establishes 2D-3D correspondences, avoiding the tedious 2D-2D matching used in existing systems. As no public dataset exists for the studied problem, we collect a new dataset that provides a variety of mobile sensor data and significant scene appearance variations, and we develop a system to acquire ground-truth poses for the query images. We benchmark our method as well as several state-of-the-art baselines and demonstrate the effectiveness of the proposed approach. The code and dataset will be released publicly.
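
To make the sensor-prior idea concrete, here is a minimal sketch, not the paper's implementation: it assembles a rough world-from-camera rotation from the compass heading and the gravity vector, plus a position prior from GPS coordinates already projected into a local east-north-up frame. The function name, axis conventions, and Euler-angle order are my assumptions.

```python
import numpy as np

def initial_pose_from_sensors(gps_enu, heading_deg, gravity_dev):
    """Rough camera pose prior from phone sensors (hypothetical helper).

    gps_enu     : (3,) position in a local east-north-up frame, metres
    heading_deg : compass heading, degrees clockwise from north
    gravity_dev : (3,) gravity vector measured in the device frame, m/s^2
    """
    # Yaw from the compass: heading is clockwise from north, while ENU yaw
    # is counter-clockwise from east.
    yaw = np.deg2rad(90.0 - heading_deg)
    # Roll and pitch from gravity: align measured gravity with world "down".
    # The exact formulas depend on the device's axis conventions (assumed here).
    g = np.asarray(gravity_dev, dtype=float)
    g /= np.linalg.norm(g)
    pitch = np.arcsin(-g[0])
    roll = np.arctan2(g[1], g[2])
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx, np.asarray(gps_enu, dtype=float)

# Example: phone held upright, facing roughly south-east.
R0, t0 = initial_pose_from_sensors([10.0, 5.0, 1.5], 120.0, [0.0, 0.0, 9.81])
```

Such a prior is only accurate to a few metres and degrees, which is exactly why the abstract treats it as a constraint on the search space rather than as a final pose.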

Self-Supervised Graph Representation Learning via Global Context Prediction

Mar 03, 2020
Zhen Peng, Yixiang Dong, Minnan Luo, Xiao-Ming Wu, Qinghua Zheng

To take full advantage of fast-growing unlabeled networked data, this paper introduces a novel self-supervised strategy for graph representation learning that exploits the natural supervision provided by the data itself. Inspired by human social behavior, we assume that the global context of each node is composed of all nodes in the graph, since two arbitrary entities in a connected network can interact with each other via paths of varying length. Based on this, we investigate whether the global context can serve as a source of free and effective supervisory signals for learning useful node representations. Specifically, we randomly select pairs of nodes in a graph and train a neural network to predict the contextual position of one node relative to the other. Our underlying hypothesis is that representations learned from such within-graph context capture the global topology of the graph and finely characterize the similarity and differences between nodes, which benefits various downstream learning tasks. Extensive benchmark experiments on node classification, clustering, and link prediction demonstrate that our approach outperforms many state-of-the-art unsupervised methods and sometimes even exceeds the performance of supervised counterparts.
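
A minimal sketch of such a pretext task, under simplifications of my own: the "contextual position" is reduced to a clipped shortest-path-distance class, and the trainable encoder is a plain embedding table rather than the paper's network.

```python
import random
import networkx as nx
import torch
import torch.nn as nn

def make_pairs(G, num_pairs, max_hops=4):
    """Sample node pairs; label = clipped hop distance (a stand-in target)."""
    nodes = list(G.nodes)
    pairs, labels = [], []
    while len(pairs) < num_pairs:
        u, v = random.sample(nodes, 2)
        try:
            d = nx.shortest_path_length(G, u, v)
        except nx.NetworkXNoPath:
            continue                          # skip disconnected pairs
        pairs.append((u, v))
        labels.append(min(d, max_hops) - 1)   # classes 0 .. max_hops-1
    return pairs, torch.tensor(labels)

G = nx.erdos_renyi_graph(100, 0.05, seed=0)
emb = nn.Embedding(100, 32)               # node representations to be learned
head = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 4))

pairs, labels = make_pairs(G, 256)
u = torch.tensor([p[0] for p in pairs])
v = torch.tensor([p[1] for p in pairs])
logits = head(torch.cat([emb(u), emb(v)], dim=-1))
loss = nn.functional.cross_entropy(logits, labels)
loss.backward()    # the supervisory signal comes from the graph itself
```

The point of the sketch is that the labels cost nothing: they are read off the graph structure, so the embeddings are shaped by global topology without any human annotation.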

Graph Representation Learning via Graphical Mutual Information Maximization

Feb 04, 2020
Zhen Peng, Wenbing Huang, Minnan Luo, Qinghua Zheng, Yu Rong, Tingyang Xu, Junzhou Huang

The rich content of information networks such as social and communication networks provides unprecedented potential for learning high-quality, expressive representations without external supervision. This paper investigates how to preserve and extract the abundant information in graph-structured data into an embedding space in an unsupervised manner. To this end, we propose a novel concept, Graphical Mutual Information (GMI), to measure the correlation between input graphs and high-level hidden representations. GMI generalizes conventional mutual information computation from vector space to the graph domain, where measuring mutual information over both node features and topological structure is indispensable. GMI exhibits several benefits: first, it is invariant to isomorphic transformations of input graphs, an unavoidable requirement for many existing graph representation learning algorithms; second, it can be efficiently estimated and maximized by current mutual information estimation methods such as MINE; finally, our theoretical analysis confirms its correctness and rationality. With the aid of GMI, we develop an unsupervised learning model trained by maximizing the GMI between the input and output of a graph neural encoder. Extensive experiments on transductive as well as inductive node classification and link prediction demonstrate that our method outperforms state-of-the-art unsupervised counterparts, and sometimes even exceeds the performance of supervised ones.
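
Since the abstract notes that GMI can be estimated with MINE, here is a toy sketch of that style of estimator, maximizing a Donsker-Varadhan lower bound on the mutual information between node features and encoder outputs. The random features, linear encoder, and critic architecture are stand-ins, and the sketch omits GMI's decomposition over features and topology.

```python
import torch
import torch.nn as nn

class Critic(nn.Module):
    """Scores (feature, embedding) pairs for the Donsker-Varadhan bound."""
    def __init__(self, feat_dim, emb_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim + emb_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, x, h):
        return self.net(torch.cat([x, h], dim=-1)).squeeze(-1)

def mi_lower_bound(critic, x, h):
    """I(X;H) >= E_joint[T] - log E_product[exp T]; product via row shuffle."""
    joint = critic(x, h).mean()
    h_perm = h[torch.randperm(h.size(0))]          # break the pairing
    marginal = torch.logsumexp(critic(x, h_perm), 0) - torch.log(
        torch.tensor(float(x.size(0))))
    return joint - marginal

x = torch.randn(128, 16)                  # stand-in node features
encoder = nn.Linear(16, 8)                # stand-in graph encoder
critic = Critic(16, 8)
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(critic.parameters()), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss = -mi_lower_bound(critic, x, encoder(x))  # ascend the MI bound
    loss.backward()
    opt.step()
```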

An information-theoretic on-line update principle for perception-action coupling

Apr 16, 2018
Zhen Peng, Tim Genewein, Felix Leibfried, Daniel A. Braun

Inspired by findings of sensorimotor coupling in humans and animals, there has recently been growing interest in the interaction between action and perception in robotic systems [Bogh et al., 2016]. Here we consider perception and action as two serial information channels with limited information-processing capacity. We follow [Genewein et al., 2015] and formulate a constrained optimization problem that maximizes utility under limited information-processing capacity in the two channels. As a solution, we obtain an optimal perceptual channel and an optimal action channel that are coupled such that perceptual information is optimized with respect to downstream processing in the action module. The main novelty of this study is an online optimization procedure for finding bounded-optimal perception and action channels in parameterized serial perception-action systems. In particular, we implement the perceptual channel as a multi-layer neural network and the action channel as a multinomial distribution. We illustrate our method in a NAO robot simulator with a simplified cup-lifting task.
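
As a rough illustration of the trade-off being optimized, here is a batch sketch of the underlying free-energy objective, not the paper's online procedure: expected utility minus capacity costs on the two serial channels, with both channels as softmax tables. The toy utility, the beta values, and the discrete parameterization are all assumptions.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
W, X, A = 3, 3, 3                         # world states, percepts, actions
U = torch.eye(W, A)                       # toy utility: reward a == w
p_w = torch.full((W,), 1.0 / W)           # uniform prior over world states
beta1, beta2 = 2.0, 2.0                   # assumed capacity trade-offs

# Small random init breaks the symmetric saddle at uniform channels.
theta_x = (0.1 * torch.randn(W, X)).requires_grad_()   # logits of p(x|w)
theta_a = (0.1 * torch.randn(X, A)).requires_grad_()   # logits of p(a|x)
opt = torch.optim.Adam([theta_x, theta_a], lr=0.1)

for _ in range(300):
    opt.zero_grad()
    p_x_w = F.softmax(theta_x, dim=-1)    # perception channel p(x|w)
    p_a_x = F.softmax(theta_a, dim=-1)    # action channel p(a|x)
    p_x = p_w @ p_x_w                     # marginal over percepts
    p_a = p_x @ p_a_x                     # marginal over actions
    # Expected utility under the serial chain w -> x -> a.
    eu = torch.einsum('w,wx,xa,wa->', p_w, p_x_w, p_a_x, U)
    # Capacity costs I(W;X) and I(X;A) as KL divergences to the marginals.
    i_wx = (p_w[:, None] * p_x_w * (p_x_w / p_x).log()).sum()
    i_xa = (p_x[:, None] * p_a_x * (p_a_x / p_a).log()).sum()
    (-(eu - i_wx / beta1 - i_xa / beta2)).backward()   # ascend free energy
    opt.step()
```

Raising beta1 and beta2 buys more informative channels and higher utility; lowering them pushes the percept and action distributions toward their marginals, the bounded-rational regime the abstract describes.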

* 8 pages, 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 