We present a target-driven navigation approach that improves cross-target and cross-scene generalization in visual navigation. Our approach incorporates an information-theoretic regularization into a deep reinforcement learning (RL) framework. First, we present a supervised generative model to constrain the intermediate process of the RL policy; it generates a future observation from the current observation and a target. Next, we predict a navigation action by analyzing the difference between the generated future observation and the current one. Our approach captures the connection between current observations and targets, and the interrelation between actions and visual transformations, yielding a compact and generalizable navigation model. We perform experiments on the AI2-THOR framework and the Active Vision Dataset (AVD) and show at least a 7.8% improvement in navigation success rate and a 5.7% improvement in SPL over the supervised baseline in unexplored environments.
We present a novel trajectory prediction algorithm for pedestrians based on a personality-aware probabilistic feature map. This map is computed using a spatial query structure, and each value represents the probability of the predicted pedestrian passing through that position in the crowd space. We update the map dynamically based on the agents in the environment and the prior trajectory of each pedestrian. Furthermore, we estimate the personality characteristics of each pedestrian and use them to improve the prediction by estimating the shortest path through this map. Our approach is general and works well on crowd videos with both low and high pedestrian density. We evaluate our model on standard human-trajectory datasets. In practice, our prediction algorithm improves accuracy by 5-9% over prior algorithms.
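Estimating a shortest path through a probability map can be done with a standard graph search once each cell's probability is converted to a traversal cost. The sketch below is illustrative, not the authors' implementation: it runs Dijkstra's algorithm over a small 2D grid, taking the cost of entering a cell as the negative log of its probability, so that high-probability cells are cheap to traverse.

```python
import heapq
import math

def shortest_path(prob_map, start, goal):
    """Dijkstra over a 2D probability grid; cost of entering a cell is -log(p)."""
    rows, cols = len(prob_map), len(prob_map[0])
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, math.inf):
            continue  # stale heap entry
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and prob_map[nr][nc] > 0:
                nd = d - math.log(prob_map[nr][nc])
                if nd < dist.get((nr, nc), math.inf):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(heap, (nd, (nr, nc)))
    # Walk back from the goal to recover the path.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

On a 2x2 grid where the cell at (0, 1) has low probability, the search correctly detours through (1, 0).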
Localizing objects in an image with weak supervision is a key problem in computer vision research. Many existing Weakly-Supervised Object Localization (WSOL) approaches tackle this problem by estimating the most discriminative regions from feature maps (activation maps) produced by a deep convolutional neural network; that is, only the objects, or the parts of them, with the most discriminative response are located. However, the activation maps often display several local maximum responses, or a relatively weak response, when an image contains multiple objects of the same type or small objects. In this paper, we propose a simple yet effective multi-scale discriminative region discovery method that localizes not only more complete objects but also as many objects as possible, using only image-level class labels. The gradient weights flowing into different convolutional layers of the CNN are taken as the input of our method, in contrast to previous methods that consider only those of the final convolutional layer. To mine more discriminative regions for object localization, multiple local maxima from the gradient weight maps are leveraged to generate the localization map with a parallel sliding window. Furthermore, multi-scale localization maps from different convolutional layers are fused to produce the final result. We evaluate the proposed method, built on VGGnet, on the ILSVRC 2016, CUB-200-2011, and PASCAL VOC 2012 datasets. On ILSVRC 2016, the proposed method yields a Top-1 localization error of 48.65\%, outperforming previous results by 2.75\%. On PASCAL VOC 2012, our approach achieves the highest localization accuracy of 0.43. Even on the CUB-200-2011 dataset, our method still achieves competitive results.
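The core of mining multiple discriminative regions is finding several local maxima in a 2D weight map rather than just the global peak. A minimal sketch of the sliding-window idea (the function name, window size, and non-zero filter are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

def local_maxima(weight_map, win=3):
    """Return (row, col) positions whose value is the maximum of their
    win x win neighborhood (a simple sliding-window peak detector)."""
    h, w = weight_map.shape
    r = win // 2
    peaks = []
    for i in range(h):
        for j in range(w):
            window = weight_map[max(0, i - r):i + r + 1,
                                max(0, j - r):j + r + 1]
            # Keep only strictly positive responses that dominate their window.
            if weight_map[i, j] > 0 and weight_map[i, j] == window.max():
                peaks.append((i, j))
    return peaks
```

Multi-scale fusion would then sum or normalize several such localization maps computed from different convolutional layers before thresholding.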
Clustering is an effective data mining technique for grouping a set of objects according to their attributes. Among the various clustering approaches, the family of K-Means algorithms is popular due to its simplicity and efficiency. However, most existing K-Means-based clustering algorithms cannot deal with outliers well and have difficulty efficiently solving problems that embed an $L_0$-norm constraint. To address these issues and significantly improve clustering performance, we propose a novel clustering algorithm, named REFCMFS, which adopts an $L_{2,1}$-norm robust loss as the data-driven term and imposes an $L_0$-norm constraint on the membership matrix, making the model more robust and flexibly sparse. In particular, REFCMFS designs a new way to simplify and solve the $L_0$-norm constraint without any approximate transformation, by absorbing $\|\cdot\|_0$ into the objective function through a ranking function. These improvements not only allow REFCMFS to efficiently obtain more promising performance but also provide a new, tractable optimization method for problems embedding the $L_0$-norm constraint. Theoretical analyses and extensive experiments on several public datasets demonstrate the effectiveness and rationality of the proposed REFCMFS method.
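A common way to enforce an $L_0$-norm constraint on a membership matrix is a ranking-based hard projection: keep only the $k$ largest-magnitude entries of each row and zero the rest. The paper's ranking-function absorption is in this spirit, but the function below is an illustrative sketch under that assumption, not the authors' optimization procedure.

```python
import numpy as np

def project_l0(m, k):
    """Project each row of m onto the set {x : ||x||_0 <= k} by keeping
    its k largest-magnitude entries and zeroing the rest."""
    out = np.zeros_like(m)
    # Rank entries of each row by magnitude, descending.
    idx = np.argsort(-np.abs(m), axis=1)[:, :k]
    rows = np.arange(m.shape[0])[:, None]
    out[rows, idx] = m[rows, idx]
    return out
```

With $k = 1$ this reduces to a hard (one-hot-support) membership assignment, the sparsest case of the constraint.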
Due to domain bias, directly deploying a deep person re-identification (re-ID) model trained on one dataset often yields considerably poor accuracy on another. In this paper, we propose an Adaptive Exploration (AE) method to address the domain-shift problem for re-ID in an unsupervised manner. Specifically, after supervised training on the source dataset, the re-ID model is induced, in the target domain, to 1) maximize distances between all person images and 2) minimize distances between similar person images. In the first case, by treating each person image as an individual class, a non-parametric classifier with a feature memory is exploited to encourage person images to move away from each other. In the second case, according to a similarity threshold, our method adaptively selects neighborhoods in the feature space for each person image. By treating these similar person images as the same class, the non-parametric classifier forces them to stay close. However, a problem with adaptive selection is that an image with too many neighborhoods is more likely to attract other images as its neighborhoods. As a result, a minority of images may select a large number of neighborhoods while the majority of images have only a few. To address this issue, we additionally integrate a balance strategy into the adaptive selection. Extensive experiments on large-scale re-ID datasets demonstrate the effectiveness of our method. Our code has been released at https://github.com/dyh127/Adaptive-Exploration-for-Unsupervised-Person-Re-Identification.
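Threshold-based neighborhood selection with a cap on the number of neighbors can be sketched in a few lines. This is a simplified illustration of the idea (the hard `max_neighbors` cap stands in for the paper's balance strategy; names and parameters are assumptions), operating on a precomputed pairwise similarity matrix.

```python
import numpy as np

def select_neighbors(sim, threshold, max_neighbors):
    """For each image i, pick the indices j whose similarity sim[i, j]
    meets `threshold`, most-similar first, capped at `max_neighbors`."""
    neighbors = []
    for i in range(sim.shape[0]):
        cand = [j for j in np.argsort(-sim[i])
                if j != i and sim[i, j] >= threshold]
        neighbors.append(cand[:max_neighbors])  # balance: cap the list
    return neighbors
```

Without the cap, a few well-connected images would accumulate most of the neighbors; the cap spreads the pseudo-positive pairs more evenly across the dataset.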
In this paper, we study how to predict the results of LTL model checking using machine learning algorithms. The data set consists of Kripke structures, LTL formulas, and their model checking results. Approaches based on Random Forest (RF), K-Nearest Neighbors (KNN), Decision Tree (DT), and Logistic Regression (LR) are used for training and prediction. The experimental results show that the average computational efficiencies of the RF-, LR-, DT-, and KNN-based approaches are 2066181, 2525333, 1894000, and 294 times that of the existing approach, respectively.
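To make the setup concrete, a k-nearest-neighbors predictor over (structure, formula) feature vectors might look like the sketch below. The feature encoding, labels, and tiny data set are invented for illustration; the paper's actual features and classifiers are not reproduced here.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label) pairs.
    Predict the majority label among the k nearest training points
    under Euclidean distance."""
    ranked = sorted(train, key=lambda fv: math.dist(fv[0], query))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]
```

Here the classifier only approximates the model checker's verdict; the speed-ups reported in the abstract come precisely from skipping the exhaustive state-space exploration.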
This paper proposes a method based on repulsive forces and sparse reconstruction for the detection and localization of abnormal events in crowded scenes. In order to avoid the challenging problem of accurately tracking each specific individual in a dense or complex scene, we divide each frame of the surveillance video into a fixed number of grids and select a single representative point in each grid as the individual to track. The repulsive force model, which accurately reflects the interactive behaviors of crowds, is used to calculate the interactive forces between grid particles in crowded scenes and to construct a force flow matrix from these discrete forces over a fixed number of consecutive frames. The force flow matrix, which contains spatial and temporal information, is adopted to train a group of visual dictionaries by sparse coding. To further improve detection efficiency and avoid concept drift, we propose a fully unsupervised global and local dynamic updating algorithm based on sparse reconstruction and a group of word pools. For anomaly localization, since our method is based on a fixed grid, we can intuitively judge whether anomalies occur in a region according to the reconstruction error of the corresponding visual words. We experimentally verify the proposed method on the UMN, UCSD, and Web datasets separately. The results indicate that our method can not only detect abnormal events accurately, but can also pinpoint the location of anomalies.
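A pairwise repulsive force between two grid particles can be modeled in the social-force style: a magnitude that decays with distance, directed away from the other particle. The sketch below is a generic illustration under that assumption; the exact force law and its parameters in the paper may differ.

```python
import math

def repulsive_force(p, q, strength=1.0, scale=1.0):
    """Force exerted on particle p by particle q: magnitude decays
    exponentially with distance, direction points from q toward p."""
    dx, dy = p[0] - q[0], p[1] - q[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return (0.0, 0.0)  # coincident particles: no defined direction
    mag = strength * math.exp(-dist / scale)
    return (mag * dx / dist, mag * dy / dist)
```

Stacking such per-particle forces over a window of consecutive frames yields the kind of force flow matrix the abstract describes, which is then encoded with sparse coding.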
In this paper, we propose a novel multi-stage network architecture with two branches in each stage to estimate multi-person poses in images. The first branch predicts the confidence maps of joints and uses a geometrical transform kernel to propagate information between neighboring joints at the confidence level. The second branch proposes a bi-directional graph structure information model (BGSIM) to encode rich contextual information and to infer the occlusion relationships among different joints. We dynamically determine the joint with the highest response in the confidence maps as the base point for message passing in BGSIM. Based on the proposed network structure, we achieve an average precision of 62.9 on the COCO Keypoint Challenge dataset and 77.6 on the MPII (multi-person) dataset. Compared with other state-of-the-art methods, our method achieves highly promising results on our selected multi-person dataset without extra training.
For most object detectors based on multi-scale feature maps, the shallow layers are mainly responsible for small-object detection because of their fine details. However, performance on small object instances is still unsatisfactory because shallow features lack semantic information, while in the top semantic features the representation of fine details for small objects is potentially wiped out. In this paper, we design a Multi-scale Deconvolutional Single Shot Detector (MDSSD), especially for the detection of small objects. In MDSSD, to generate features with strong representational power for small object instances, we add high-level features with rich semantic information to low-level features via deconvolutional Fusion Blocks. Notably, multiple high-level features at different scales are upsampled simultaneously in our framework. Afterwards, we apply skip connections to form more descriptive feature maps for small objects, and predictions are made on these new fused features. Our proposed framework achieves 78.6% mAP on the PASCAL VOC2007 test set and 26.8% mAP on MS COCO test-dev2015 at 38.5 FPS with only 300*300 input. The results outperform the baseline SSD by 1.1 and 1.7 points respectively, with a 2 -- 5 point improvement on some small-object categories.
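The essence of a fusion block is to upsample a coarse, semantically rich map to the resolution of a fine-grained shallow map and combine the two. The sketch below uses nearest-neighbor upsampling and elementwise addition as a stand-in for the learned deconvolution and any normalization or 1x1 convolutions the actual block would contain; it illustrates the data flow, not the trained operator.

```python
import numpy as np

def fuse(low, high):
    """Upsample a coarse high-level map to the low-level map's resolution
    (nearest-neighbor here, in place of learned deconvolution) and add."""
    fh = low.shape[0] // high.shape[0]
    fw = low.shape[1] // high.shape[1]
    up = np.repeat(np.repeat(high, fh, axis=0), fw, axis=1)
    return low + up  # skip connection: elementwise sum of the two maps
```

The fused map keeps the shallow layer's spatial detail while inheriting the deep layer's semantics, which is what makes it more descriptive for small objects.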