Haibin Ling

Graph Correspondence Transfer for Person Re-identification

Apr 01, 2018
Qin Zhou, Heng Fan, Shibao Zheng, Hang Su, Xinzhe Li, Shuang Wu, Haibin Ling

In this paper, we propose a graph correspondence transfer (GCT) approach for person re-identification. Unlike existing methods, the GCT model formulates person re-identification as an off-line graph matching and on-line correspondence transferring problem. Specifically, during training, the GCT model learns off-line a set of correspondence templates from positive training pairs with various pose-pair configurations via patch-wise graph matching. During testing, for each pair of test samples, we select a few training pairs with the most similar pose-pair configurations as references, and transfer the correspondences of these references to the test pair for feature distance calculation. The matching score is derived by aggregating the distances from the different references. For each probe image, the gallery image with the highest matching score is returned as the re-identification result. Compared to existing algorithms, GCT can handle the spatial misalignment caused by large variations in view angles and human poses owing to the benefits of patch-wise graph matching. Extensive experiments on five benchmarks, including VIPeR, Road, PRID450S, 3DPES and CUHK01, demonstrate the superior performance of the GCT model over state-of-the-art methods.
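
A minimal NumPy sketch of the on-line correspondence-transfer step described above, assuming patch features, pose descriptors and the learned correspondence templates are given; all names are illustrative, not the authors' implementation.

```python
import numpy as np

def matching_score(probe_patches, gallery_patches, probe_pose, gallery_pose,
                   ref_poses, ref_correspondences, k=5):
    """probe_patches/gallery_patches: (P, D) patch features of the test pair.
    probe_pose/gallery_pose: pose descriptors of the test pair.
    ref_poses: (N, 2, Dp) pose-pair configurations of the training pairs.
    ref_correspondences: list of N arrays, each (P,), mapping probe patch i to a
    gallery patch index (the off-line graph-matching result)."""
    # 1. Find the k training pairs with the most similar pose-pair configuration.
    test_cfg = np.concatenate([probe_pose, gallery_pose])
    dists = np.linalg.norm(ref_poses.reshape(len(ref_poses), -1) - test_cfg, axis=1)
    nearest = np.argsort(dists)[:k]

    # 2. Transfer each reference's patch correspondences to the test pair
    #    and compute a feature distance under that correspondence.
    scores = []
    for r in nearest:
        corr = ref_correspondences[r]                      # probe patch -> gallery patch
        d = np.linalg.norm(probe_patches - gallery_patches[corr], axis=1).sum()
        scores.append(-d)                                  # smaller distance = higher score

    # 3. Aggregate the per-reference scores (mean used here for simplicity).
    return float(np.mean(scores))
```

Ranking a probe against the gallery then reduces to computing this score for every gallery image and keeping the maximum.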

* Accepted to AAAI'18 (Oral). The code is available at http://www.dabi.temple.edu/~hbling/code/gct.htm 

Parallel Tracking and Verifying

Jan 30, 2018
Heng Fan, Haibin Ling

Being intensively studied, visual object tracking has witnessed great advances in either speed (e.g., with correlation filters) or accuracy (e.g., with deep features). Real-time and high-accuracy tracking algorithms, however, remain scarce. In this paper we study the problem from a new perspective and present a novel parallel tracking and verifying (PTAV) framework, taking advantage of the ubiquity of multi-thread techniques and borrowing ideas from the success of parallel tracking and mapping in visual SLAM. The proposed PTAV framework consists of two components, a (base) tracker T and a verifier V, working in parallel on two separate threads. The tracker T aims to provide super real-time tracking inference and is expected to perform well most of the time; by contrast, the verifier V validates the tracking results and corrects T when needed. The key innovation is that V does not work on every frame but only upon requests from T; in turn, T may adjust its tracking according to the feedback from V. With such collaboration, PTAV enjoys both the high efficiency provided by T and the strong discriminative power of V. Meanwhile, to adapt V to object appearance changes over time, we maintain a dynamic target template pool for adaptive verification, resulting in further performance improvements. In extensive experiments on popular benchmarks including OTB2015, TC128, UAV20L and VOT2016, PTAV achieves the best tracking accuracy among all real-time trackers, and in fact even outperforms many deep learning based algorithms. Moreover, as a general framework, PTAV is very flexible, with great potential for future improvement and generalization.
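
A minimal sketch of the two-thread collaboration described above, assuming generic `track`, `verify` and `correct` routines; it illustrates the control flow only, not the authors' implementation.

```python
import threading, queue

request_q, feedback_q = queue.Queue(), queue.Queue()

def tracker(frames, track, verify_interval=10):
    box = None
    for i, frame in enumerate(frames):
        box = track(frame, box)
        if i % verify_interval == 0:               # V is only consulted occasionally
            request_q.put((i, frame, box))
        while not feedback_q.empty():              # apply any pending correction from V
            _, corrected = feedback_q.get()
            box = corrected
    request_q.put(None)                            # signal the verifier to stop

def verifier(verify, correct):
    while True:
        item = request_q.get()
        if item is None:
            break
        i, frame, box = item
        if not verify(frame, box):                 # tracking result rejected
            feedback_q.put((i, correct(frame)))    # re-detect and send feedback to T

# Usage: start both workers on the same video, e.g.
# threading.Thread(target=tracker, args=(frames, track_fn)).start()
# threading.Thread(target=verifier, args=(verify_fn, correct_fn)).start()
```

Because T never blocks on V, the per-frame cost stays at the speed of the base tracker, while V's heavier model only runs on the requested frames.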

* Project is available at http://www.dabi.temple.edu/~hbling/code/PTAV/ptav.htm. arXiv admin note: text overlap with arXiv:1708.00153 

Dense Recurrent Neural Networks for Scene Labeling

Jan 21, 2018
Heng Fan, Haibin Ling

Recently, recurrent neural networks (RNNs) have demonstrated the ability to improve scene labeling by capturing long-range dependencies among image units. In this paper, we propose dense RNNs for scene labeling, which explore various long-range semantic dependencies among image units. In comparison with existing RNN-based approaches, our dense RNNs capture richer contextual dependencies for each image unit via dense connections between every pair of image units, which significantly enhances their discriminative power. In addition, to select relevant dependencies and meanwhile restrain irrelevant ones for each unit, we introduce an attention model into the dense RNNs. The attention model automatically assigns more importance to helpful dependencies and less weight to irrelevant ones. Integrated with convolutional neural networks (CNNs), our method achieves state-of-the-art performance on the PASCAL Context, MIT ADE20K and SiftFlow benchmarks.
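
A minimal NumPy sketch of attention-weighted dense dependencies in this spirit: every image unit attends to all other units and aggregates their hidden states, with the attention deciding which dependencies matter. The parameterization is illustrative, not the paper's exact formulation.

```python
import numpy as np

def dense_attention_context(h, W_q, W_k):
    """h: (N, D) hidden states of N image units; W_q, W_k: (D, d) projections."""
    q, k = h @ W_q, h @ W_k                      # (N, d) queries / keys
    logits = q @ k.T / np.sqrt(q.shape[1])       # (N, N) pairwise relevance
    np.fill_diagonal(logits, -np.inf)            # a unit does not attend to itself
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)            # softmax over all other units
    return w @ h                                 # (N, D) attended context per unit

N, D, d = 64, 128, 32
rng = np.random.default_rng(0)
ctx = dense_attention_context(rng.normal(size=(N, D)),
                              rng.normal(size=(D, d)), rng.normal(size=(D, d)))
print(ctx.shape)   # (64, 128)
```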

* Tech. Report 

Parallel Tracking and Verifying: A Framework for Real-Time and High Accuracy Visual Tracking

Aug 01, 2017
Heng Fan, Haibin Ling

Being intensively studied, visual tracking has seen great recent advances in either speed (e.g., with correlation filters) or accuracy (e.g., with deep features). Real-time and high-accuracy tracking algorithms, however, remain scarce. In this paper we study the problem from a new perspective and present a novel parallel tracking and verifying (PTAV) framework, taking advantage of the ubiquity of multi-thread techniques and borrowing from the success of parallel tracking and mapping in visual SLAM. Our PTAV framework consists of two components, a tracker T and a verifier V, working in parallel on two separate threads. The tracker T aims to provide super real-time tracking inference and is expected to perform well most of the time; by contrast, the verifier V checks the tracking results and corrects T when needed. The key innovation is that V does not work on every frame but only upon requests from T; in turn, T may adjust its tracking according to the feedback from V. With such collaboration, PTAV enjoys both the high efficiency provided by T and the strong discriminative power of V. In extensive experiments on popular benchmarks including OTB2013, OTB2015, TC128 and UAV20L, PTAV achieves the best tracking accuracy among all real-time trackers, and in fact performs even better than many deep learning based solutions. Moreover, as a general framework, PTAV is very flexible and leaves great room for improvement and generalization.

* 9 pages 

SANet: Structure-Aware Network for Visual Tracking

May 01, 2017
Heng Fan, Haibin Ling

Convolutional neural networks (CNNs) have drawn increasing interest in visual tracking owing to their powerful feature extraction. Most existing CNN-based trackers treat tracking as a classification problem. However, these trackers are sensitive to similar distractors because their CNN models mainly focus on inter-class classification. To address this problem, we use the self-structure information of the object to distinguish it from distractors. Specifically, we utilize a recurrent neural network (RNN) to model the object structure, and incorporate it into the CNN to improve robustness to similar distractors. Considering that convolutional layers at different levels characterize the object from different perspectives, we use multiple RNNs to model the object structure at each level respectively. Extensive experiments on three benchmarks, OTB100, TC-128 and VOT2015, show that the proposed algorithm outperforms other methods. Code is released at http://www.dabi.temple.edu/~hbling/code/SANet/SANet.html.
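
A minimal PyTorch sketch of the idea above, assuming a recurrent layer scans a convolutional feature map so that the level's features also encode the object's internal structure; layer sizes are illustrative only, not the SANet architecture.

```python
import torch
import torch.nn as nn

class StructureAwareBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.rnn = nn.GRU(channels, channels, batch_first=True)

    def forward(self, fmap):                       # fmap: (B, C, H, W)
        b, c, h, w = fmap.shape
        seq = fmap.flatten(2).transpose(1, 2)      # scan the map as a (B, H*W, C) sequence
        out, _ = self.rnn(seq)
        return fmap + out.transpose(1, 2).reshape(b, c, h, w)   # fuse CNN and RNN features

# One such block per convolutional level would realize the multi-level idea.
fmap = torch.randn(1, 64, 14, 14)
print(StructureAwareBlock(64)(fmap).shape)         # torch.Size([1, 64, 14, 14])
```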

* In CVPR Deep Vision Workshop, 2017 

Transductive Zero-Shot Learning with a Self-training dictionary approach

Mar 27, 2017
Yunlong Yu, Zhong Ji, Xi Li, Jichang Guo, Zhongfei Zhang, Haibin Ling, Fei Wu

As an important and challenging problem in computer vision, zero-shot learning (ZSL) aims at automatically recognizing instances from unseen object classes without training data. To address this problem, ZSL is usually carried out in the following two aspects: 1) capturing the domain distribution connections between seen-class data and unseen-class data; and 2) modeling the semantic interactions between the image feature space and the label embedding space. Motivated by these observations, we propose a bidirectional mapping based semantic relationship modeling scheme that seeks cross-modal knowledge transfer by simultaneously projecting the image features and label embeddings into a common latent space. Namely, we have a bidirectional connection relationship from the image feature space to the latent space as well as from the label embedding space to the latent space. To deal with the domain shift problem, we further present a transductive learning approach that formulates the class prediction problem as an iterative refining process, where the object classification capacity is progressively reinforced through bootstrapping-based model updating over highly reliable instances. Experimental results on three benchmark datasets (AwA, CUB and SUN) demonstrate the effectiveness of the proposed approach against state-of-the-art approaches.
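
A minimal NumPy sketch of a transductive self-training loop of the kind described above: a projection is fit on seen-class data, unseen instances are pseudo-labelled, and only the most confident ones are fed back to refine the projection. The linear map and confidence measure are simplifications, not the paper's dictionary-based formulation.

```python
import numpy as np

def fit_projection(X, S, lam=1e-2):
    """Least-squares map W with X @ W ~= S (image features -> label embeddings)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ S)

def self_training_zsl(X_seen, S_seen, X_unseen, S_unseen_classes, iters=5, top_frac=0.2):
    W = fit_projection(X_seen, S_seen)
    for _ in range(iters):
        P = X_unseen @ W                                          # project unseen instances
        sims = P @ S_unseen_classes.T                             # similarity to unseen prototypes
        pred, conf = sims.argmax(1), sims.max(1)
        keep = conf >= np.quantile(conf, 1 - top_frac)            # highly reliable instances only
        X_aug = np.vstack([X_seen, X_unseen[keep]])
        S_aug = np.vstack([S_seen, S_unseen_classes[pred[keep]]])
        W = fit_projection(X_aug, S_aug)                          # bootstrapped model update
    return (X_unseen @ W @ S_unseen_classes.T).argmax(1)          # final class predictions
```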


Multi-level Contextual RNNs with Attention Model for Scene Labeling

Aug 10, 2016
Heng Fan, Xue Mei, Danil Prokhorov, Haibin Ling

Context in an image is crucial for scene labeling, yet existing methods only exploit local context generated from a small area surrounding an image patch or a pixel; long-range and global contextual information is ignored. To handle this issue, we propose a novel approach for scene labeling that explores multi-level contextual recurrent neural networks (ML-CRNNs). Specifically, we encode three kinds of contextual cues, i.e., local context, global context and image topic context, in structural recurrent neural networks (RNNs) to model long-range local and global dependencies in an image. In this way, our method is able to 'see' the image in terms of both long-range local and holistic views, and make a more reliable inference for image labeling. Besides, we integrate the proposed contextual RNNs into hierarchical convolutional neural networks (CNNs), and exploit dependence relationships at multiple levels to provide rich spatial and semantic information. Moreover, we adopt an attention model to effectively merge the multiple levels and show that it outperforms average- or max-pooling fusion strategies. Extensive experiments demonstrate that the proposed approach achieves new state-of-the-art results on the CamVid, SiftFlow and Stanford-background datasets.
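
A minimal NumPy sketch of attention-based fusion of multiple feature levels, contrasted with the average-pooling baseline: per-pixel softmax weights decide how much each level contributes. The way the scores are produced is simplified; names are illustrative.

```python
import numpy as np

def attention_fuse(levels, score_vectors):
    """levels: list of L arrays (H, W, C); score_vectors: (L, C) scoring parameters."""
    stack = np.stack(levels)                             # (L, H, W, C)
    scores = np.einsum('lhwc,lc->lhw', stack, score_vectors)
    w = np.exp(scores - scores.max(0, keepdims=True))
    w /= w.sum(0, keepdims=True)                         # per-pixel softmax over levels
    return np.einsum('lhw,lhwc->hwc', w, stack)          # attention-weighted merge

def average_fuse(levels):
    return np.mean(np.stack(levels), axis=0)             # the average-pooling baseline

rng = np.random.default_rng(0)
levels = [rng.normal(size=(8, 8, 16)) for _ in range(3)]
print(attention_fuse(levels, rng.normal(size=(3, 16))).shape)   # (8, 8, 16)
```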

* 8 pages, 8 figures 

DeepSaliency: Multi-Task Deep Neural Network Model for Salient Object Detection

Jun 07, 2016
Xi Li, Liming Zhao, Lina Wei, Ming-Hsuan Yang, Fei Wu, Yueting Zhuang, Haibin Ling, Jingdong Wang

A key problem in salient object detection is how to effectively model the semantic properties of salient objects in a data-driven manner. In this paper, we propose a multi-task deep saliency model based on a fully convolutional neural network (FCNN) with global input (whole raw images) and global output (whole saliency maps). In principle, the proposed saliency model takes a data-driven strategy for encoding the underlying saliency prior information, and then sets up a multi-task learning scheme to explore the intrinsic correlations between saliency detection and semantic image segmentation. Through collaborative feature learning from these two correlated tasks, the shared fully convolutional layers produce effective features for object perception. Moreover, the model is capable of capturing the semantic information of salient objects across different levels using the fully convolutional layers, which exploit the feature-sharing properties of salient object detection and greatly reduce feature redundancy. Finally, we present a graph Laplacian regularized nonlinear regression model for saliency refinement. Experimental results demonstrate the effectiveness of our approach in comparison with state-of-the-art approaches.
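
A minimal PyTorch sketch of the multi-task layout described above: shared fully convolutional layers feed two heads, one producing a saliency map and one producing segmentation logits. The network shown is illustrative, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class MultiTaskFCN(nn.Module):
    def __init__(self, n_seg_classes=21):
        super().__init__()
        self.shared = nn.Sequential(                       # layers learned by both tasks
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.saliency_head = nn.Conv2d(64, 1, 1)           # whole-image saliency map
        self.segmentation_head = nn.Conv2d(64, n_seg_classes, 1)

    def forward(self, x):
        f = self.shared(x)
        return torch.sigmoid(self.saliency_head(f)), self.segmentation_head(f)

sal, seg = MultiTaskFCN()(torch.randn(1, 3, 64, 64))
print(sal.shape, seg.shape)   # torch.Size([1, 1, 64, 64]) torch.Size([1, 21, 64, 64])
```

Training would sum a saliency loss and a segmentation loss so the shared layers benefit from both supervisory signals.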

* To appear in IEEE Transactions on Image Processing (TIP), Project Website: http://www.zhaoliming.net/research/deepsaliency 

A Richly Annotated Dataset for Pedestrian Attribute Recognition

Apr 27, 2016
Dangwei Li, Zhang Zhang, Xiaotang Chen, Haibin Ling, Kaiqi Huang

In this paper, we aim to improve the dataset foundation for pedestrian attribute recognition in real surveillance scenarios. Recognition of human attributes, such as gender and clothes type, has great prospects in real applications. However, the development of suitable benchmark datasets for attribute recognition lags behind. Existing human attribute datasets are collected from various sources or by integrating pedestrian re-identification datasets. Such heterogeneous collection poses a big challenge to developing high-quality fine-grained attribute recognition algorithms. Furthermore, human attribute recognition is generally severely affected by environmental or contextual factors, such as viewpoints, occlusions and body parts, while existing attribute datasets rarely account for them. To tackle these problems, we build a Richly Annotated Pedestrian (RAP) dataset from real multi-camera surveillance scenarios with long-term collection, where data samples are annotated with not only fine-grained human attributes but also environmental and contextual factors. RAP contains 41,585 pedestrian samples in total, each annotated with 72 attributes as well as viewpoint, occlusion and body-part information. To our knowledge, RAP is the largest pedestrian attribute dataset and is expected to greatly promote the study of large-scale attribute recognition systems. Furthermore, we empirically analyze the effects of different environmental and contextual factors on pedestrian attribute recognition. Experimental results demonstrate that viewpoint, occlusion and body-part information can considerably assist attribute recognition in real applications.

* 16 pages, 8 figures 

A Comparative Study of Object Trackers for Infrared Flying Bird Tracking

Jan 18, 2016
Ying Huang, Hong Zheng, Haibin Ling, Erik Blasch, Hao Yang

Bird strikes present a huge risk for aircraft, especially since traditional airport bird surveillance mainly depends on inefficient human observation. Computer vision based technology has been proposed to automatically detect birds, determine bird flying trajectories, and predict aircraft takeoff delays. However, the characteristics of bird flight in imagery and the performance of existing methods applied to the flying-bird tracking task are not well known. Therefore, we perform infrared flying bird tracking experiments using 12 state-of-the-art algorithms on a real BIRDSITE-IR dataset to obtain useful clues and recommend feature analysis. We also develop a Struck-scale method to demonstrate the effectiveness of multi-scale sampling adaptation in handling flying birds with varying shapes and scales. The general analysis can be used to develop specialized bird tracking methods for airport safety, wildlife and urban bird population studies.
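
A minimal sketch of multi-scale candidate sampling around the previous bounding box, the kind of adaptation a Struck-scale variant adds so that a flying bird's changing size and shape can be followed. The sampling grid and scale factors are illustrative choices, not the paper's settings.

```python
import itertools

def sample_candidates(prev_box, radius=8, step=4, scales=(0.9, 1.0, 1.1)):
    """prev_box: (x, y, w, h). Returns translated and rescaled candidate boxes."""
    x, y, w, h = prev_box
    offsets = range(-radius, radius + 1, step)
    candidates = []
    for dx, dy, s in itertools.product(offsets, offsets, scales):
        candidates.append((x + dx, y + dy, w * s, h * s))   # translate, then rescale
    return candidates

boxes = sample_candidates((100, 80, 30, 20))
print(len(boxes))   # 75 candidates: 5 x-offsets * 5 y-offsets * 3 scales
# A tracker would score each candidate (e.g., with a structured-output SVM as in
# Struck) and keep the best-scoring box as the new target state.
```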
