In recent years, hyperspectral anomaly detection (HAD) has become an active topic and plays a significant role in military and civilian fields. As a classic HAD method, the collaborative representation-based detector (CRD) has attracted extensive attention and in-depth research. Despite its good performance, the computational cost of CRD is too high for the widely demanded real-time applications. To alleviate this problem, a novel ensemble and random collaborative representation-based detector (ERCRD) is proposed for HAD. This approach comprises two main steps. First, we propose random background modeling to replace the sliding dual-window strategy used in the original CRD method. Second, we obtain multiple detection results through repeated random background modeling, and these results are further refined into the final detection result through ensemble learning. Experiments on four real hyperspectral datasets demonstrate the accuracy and efficiency of the proposed ERCRD method compared with ten state-of-the-art HAD methods.
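The two steps above can be illustrated with a minimal numpy sketch. This is an assumption-laden simplification, not the paper's implementation: the ridge term below stands in for CRD's distance-weighted Tikhonov regularization, and the dataset, sizes, and fusion-by-averaging are illustrative.

```python
import numpy as np

def crd_score(pixel, background, lam=1e-2):
    """Collaborative-representation anomaly score for a single pixel:
    w = argmin ||y - B w||^2 + lam ||w||^2 (a plain ridge surrogate for
    the distance-weighted regularizer of the full CRD); the residual
    norm ||y - B w|| serves as the anomaly score."""
    B = background.T                                  # bands x n_bg
    G = B.T @ B + lam * np.eye(B.shape[1])
    w = np.linalg.solve(G, B.T @ pixel)
    return np.linalg.norm(pixel - B @ w)

def ercrd(image, n_bg=30, n_runs=3, lam=1e-2, seed=None):
    """ERCRD sketch: each run draws a random pixel subset as the
    background dictionary (replacing the sliding dual window), and the
    per-run detection maps are fused by simple averaging."""
    rng = np.random.default_rng(seed)
    n = image.shape[0]
    scores = np.zeros(n)
    for _ in range(n_runs):
        idx = rng.choice(n, size=n_bg, replace=False)
        for i, y in enumerate(image):
            bg = image[idx[idx != i]]                 # never represent a pixel by itself
            scores[i] += crd_score(y, bg, lam)
    return scores / n_runs

# toy cube: 100 background pixels, one spectrally distinct anomaly
rng = np.random.default_rng(0)
img = rng.normal(0.5, 0.02, size=(100, 20))
img[7, :10] += 1.0                                    # anomaly in the first 10 bands
scores = ercrd(img, seed=1)
```

Because each run solves many small ridge systems instead of one per-pixel dual-window system, the random-dictionary variant trades a little per-run accuracy for speed, which the ensemble averaging then recovers.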
In the field of data mining, handling high-dimensional data is an inevitable problem. Unsupervised feature selection has attracted increasing attention because it does not rely on labels. The performance of spectral-based unsupervised methods depends on the quality of the constructed similarity matrix, which is used to depict the intrinsic structure of the data. However, real-world data contain a large number of noisy samples and features, so a similarity matrix constructed from the original data cannot be completely reliable. Worse still, the size of the similarity matrix grows rapidly with the number of samples, increasing the computational cost significantly. Inspired by principal component analysis, we propose a simple and efficient unsupervised feature selection method that combines the reconstruction error with $l_{2,p}$-norm regularization. The projection matrix used for feature selection is learned by minimizing the reconstruction error under the sparsity constraint. We then present an efficient optimization algorithm to solve the proposed unsupervised model and theoretically analyse its convergence and computational complexity. Finally, extensive experiments on real-world datasets demonstrate the effectiveness of the proposed method.
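A minimal sketch of this kind of model, under assumptions: the objective is taken as PCA reconstruction error plus an $l_{2,p}$ row-sparsity penalty on the projection matrix $W$, solved with the standard iteratively reweighted eigen-decomposition scheme (the paper's actual optimization algorithm may differ); the toy data and all parameter values are illustrative.

```python
import numpy as np

def sparse_pca_fs(X, k=2, lam=0.1, p=1.0, eps=1e-8, n_iter=30):
    """Sparse-PCA-style unsupervised feature selection (sketch).

    With W^T W = I, minimizing ||X - X W W^T||_F^2 + lam ||W||_{2,p}
    reduces to maximizing tr(W^T (S - lam * D) W), where S = X^T X and
    D is a diagonal reweighting of the rows of W; each iteration takes
    W as the top-k eigenvectors and refreshes D.
    """
    X = X - X.mean(axis=0)                 # center features
    S = X.T @ X
    d = S.shape[0]
    D = np.zeros((d, d))
    W = np.linalg.eigh(S)[1][:, -k:]       # init: ordinary PCA directions
    for _ in range(n_iter):
        row = np.sqrt((W * W).sum(axis=1)) + eps
        np.fill_diagonal(D, p / (2.0 * row ** (2.0 - p)))
        _, vecs = np.linalg.eigh(S - lam * D)
        W = vecs[:, -k:]                   # top-k eigenvectors
    return np.sqrt((W * W).sum(axis=1))    # feature scores = row norms of W

rng = np.random.default_rng(0)
n = 200
X = rng.normal(0, 0.05, size=(n, 6))       # four near-noise features
X[:, 0] += rng.normal(0, 1.0, size=n)      # informative feature 0
X[:, 1] += rng.normal(0, 1.0, size=n)      # informative feature 1
scores = sparse_pca_fs(X, k=2)
```

Features are then ranked by the row norms of $W$, so the two informative features should receive the largest scores on this toy example.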
Recently, text detection for arbitrary shapes has attracted increasing research attention. Although segmentation-based methods, which are not limited by text shape, have been studied to improve performance, slow detection speed, complicated post-processing, and the text adhesion problem still limit their practical application. In this paper, we propose a simple yet effective arbitrary-shape text detector, named Bold Outline Text Detector (BOTD). It is a novel one-stage detection framework with minimal post-processing, and it also alleviates the text adhesion problem. Specifically, BOTD first generates a center mask (CM) for each text instance, which makes adhesive text easy to distinguish. Based on the CM, we further compute the polar minimum distance (PMD) for each text instance; the PMD is the shortest distance between the center point of the CM and the outline of the text instance. By decomposing the text mask into CM and PMD, the outline of an arbitrary-shape text instance can be obtained by simply predicting its CM and PMD. Without bells and whistles, BOTD achieves an F-measure of 80.1% on CTW1500 at 52 FPS, with post-processing accounting for only 9% of the total inference time. Code and trained models will be made publicly available.
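One plausible reading of the CM + PMD decoding step can be sketched as follows. This is an assumption, not BOTD's actual post-processing: the text region is approximated by expanding the predicted center mask outward by the PMD (a brute-force morphological dilation; a real implementation would use contour operations on the predicted maps).

```python
import numpy as np

def expand_center_mask(cm, pmd):
    """Recover an approximate text mask from a center mask (CM) and a
    polar minimum distance (PMD): keep every pixel whose distance to
    the nearest CM pixel is at most PMD (brute-force dilation)."""
    h, w = cm.shape
    ys, xs = np.nonzero(cm)
    cm_pts = np.stack([ys, xs], axis=1)              # (n, 2) CM pixel coords
    yy, xx = np.mgrid[0:h, 0:w]
    grid = np.stack([yy.ravel(), xx.ravel()], axis=1)
    # distance from every pixel to its nearest CM pixel
    d = np.linalg.norm(grid[:, None, :] - cm_pts[None, :, :], axis=2).min(axis=1)
    return (d <= pmd).reshape(h, w).astype(np.uint8)

# toy example: a one-pixel-wide horizontal center line, expanded by PMD = 2
cm = np.zeros((9, 15), dtype=np.uint8)
cm[4, 3:12] = 1
text_mask = expand_center_mask(cm, pmd=2.0)
```

Because each text instance keeps its own shrunken CM, two adhesive instances stay separated at the CM stage and are only expanded back to full outlines afterwards.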
Home-cage social behaviour analysis of mice is an invaluable tool for assessing the therapeutic efficacy of treatments for neurodegenerative diseases. Despite tremendous efforts within the research community, single-camera video recordings remain the main data source for such analysis. Because of their potential to produce rich descriptions of mouse social behaviours, multi-view video recordings of rodents are receiving increasing attention. However, identifying social behaviours across views is still challenging due to the lack of correspondence between data sources. To address this problem, we propose a novel multi-view latent-attention and dynamic discriminative model that jointly learns view-specific and view-shared sub-structures, where the former captures the unique dynamics of each view while the latter encodes the interaction between views. Furthermore, a novel multi-view latent-attention variational autoencoder is introduced to learn the acquired features, enabling us to learn discriminative features in each view. Experimental results on the standard CRIM13 dataset and our multi-view Parkinson's Disease Mouse Behaviour (PDMB) dataset demonstrate that our model outperforms other state-of-the-art approaches and effectively handles the imbalanced-data problem.
Drone footage can be used for dynamic traffic monitoring, object detection and tracking, and other vision tasks. The variability of the shooting location introduces intractable challenges for these tasks, such as varying scale, unstable exposure, and scene migration. In this paper, we strive to tackle these challenges and automatically understand crowds from visual data collected by drones. First, to alleviate the background noise generated in cross-scene testing, a double-stream crowd counting model is proposed that extracts optical flow and frame-difference information as an additional branch. Second, to improve the model's generalization ability across scales and times, we randomly combine a variety of data transformation methods to simulate unseen environments. To tackle crowd density estimation in extremely dark environments, we introduce synthetic data generated by the game Grand Theft Auto V (GTAV). Experimental results show the effectiveness of the virtual data. Our method wins the challenge with a mean absolute error (MAE) of 12.70. Moreover, a comprehensive ablation study is conducted to explore each component's contribution.
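The frame-difference part of the motion branch is straightforward to sketch. This is an illustrative stand-in, not the paper's pipeline: the helper below pairs each frame with its absolute difference from the previous frame (the optical-flow branch, network architecture, and tensor layout are assumptions).

```python
import numpy as np

def motion_branch_input(frames):
    """Build an auxiliary motion input for a two-stream counting model
    (sketch): per-pixel absolute frame difference, stacked with the
    current frame. frames: (T, H, W) grayscale video."""
    frames = frames.astype(np.float32)
    diff = np.abs(np.diff(frames, axis=0))       # (T-1, H, W) motion cue
    appearance = frames[1:]                      # align frames with diffs
    return np.stack([appearance, diff], axis=1)  # (T-1, 2, H, W)

rng = np.random.default_rng(0)
video = rng.integers(0, 256, size=(4, 8, 8))
x = motion_branch_input(video)
```

Static background produces an all-zero difference channel, which is what lets the second stream suppress scene-specific background noise in cross-scene testing.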
The graph autoencoder (GAE) serves as an effective unsupervised learning framework that represents graph data in a latent space for network embedding. Most existing approaches focus on minimizing the reconstruction loss of the graph structure but neglect the reconstruction of node features, which may result in overfitting owing to the capacity of the autoencoder. Additionally, the adjacency matrix in these methods is fixed, so it cannot properly represent the connections among nodes in the latent space. To solve these problems, we propose a novel Graph Convolutional Auto-encoder with Bidecoder and Adaptive-sharing Adjacency method, namely BAGA. The framework encodes the topological structure and node features into latent representations, on which a bi-decoder is trained to reconstruct the graph structure and node features simultaneously. Furthermore, the adjacency matrix is adaptively updated from the learned latent representations to better represent the connections among nodes in the latent space. Experimental results validate the superiority of our method over state-of-the-art network embedding methods on the clustering task.
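A single forward pass of such a bi-decoder can be sketched in numpy. This is a hedged toy, not BAGA itself: a one-layer GCN encoder, an inner-product structure decoder, and a linear feature decoder, with randomly initialized weights and no training loop; the adaptive adjacency update is only hinted at in a comment.

```python
import numpy as np

def normalize_adj(A):
    """Symmetrically normalized adjacency with self-loops:
    D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bidecoder_losses(A, X, W_enc, W_dec):
    """One forward pass of a GCN encoder with a bi-decoder (sketch):
    sigmoid(Z Z^T) reconstructs the structure A, a linear decoder
    Z W_dec reconstructs the features X; both losses are returned.
    An adaptive variant could re-estimate A from sigmoid(Z Z^T)."""
    A_norm = normalize_adj(A)
    Z = np.tanh(A_norm @ X @ W_enc)          # one-layer GCN encoder
    A_rec = sigmoid(Z @ Z.T)                 # structure decoder
    X_rec = Z @ W_dec                        # feature decoder
    loss_a = np.mean((A_rec - A) ** 2)
    loss_x = np.mean((X_rec - X) ** 2)
    return loss_a, loss_x

rng = np.random.default_rng(0)
A = (rng.random((6, 6)) < 0.4).astype(float)
A = np.triu(A, 1); A = A + A.T               # symmetric, no self-loops
X = rng.normal(size=(6, 4))
la, lx = bidecoder_losses(A, X, 0.1 * rng.normal(size=(4, 3)),
                          0.1 * rng.normal(size=(3, 4)))
```

Training would minimize a weighted sum of the two losses, so the feature-reconstruction term regularizes the embedding rather than letting it overfit the structure alone.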
Graph-based clustering plays an important role in the clustering area. Recent studies on graph convolutional neural networks have achieved impressive success on graph-structured data. However, in traditional clustering tasks no graph structure is given, so the strategy used to construct the graph is crucial for performance. In addition, existing graph auto-encoder based approaches perform poorly on weighted graphs, which are widely used in graph-based clustering. In this paper, we propose a graph auto-encoder with local structure preservation for general data clustering, which updates the constructed graph adaptively. The adaptive process is designed to fully exploit the non-Euclidean structure of the data. By combining a generative model for graph embedding with graph-based clustering, we develop a graph auto-encoder with a novel decoder that performs well in scenarios involving weighted graphs. Extensive experiments demonstrate the superiority of our model.
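The graph-construction step that general data clustering requires can be sketched as follows; a Gaussian-weighted k-nearest-neighbour graph is one common choice, used here purely as an illustration (the paper's adaptive construction and its update rule are not reproduced).

```python
import numpy as np

def knn_graph(X, k=3, sigma=1.0):
    """Weighted similarity graph for data with no given structure
    (sketch): Gaussian-kernel weights on the symmetrized k-nearest-
    neighbour graph. X: (n, d) data matrix; returns an (n, n) matrix."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        nn = np.argsort(d2[i])[1:k + 1]          # skip self at position 0
        W[i, nn] = np.exp(-d2[i, nn] / (2 * sigma ** 2))
    return np.maximum(W, W.T)                     # symmetrize

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (5, 2)),        # cluster 1
               rng.normal(5, 0.1, (5, 2))])       # cluster 2, far away
W = knn_graph(X, k=2)
```

With well-separated clusters, all k-nearest neighbours fall inside a point's own cluster, so the resulting weighted graph has no cross-cluster edges, which is exactly the local structure a graph auto-encoder can then preserve.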
Graph convolutional networks have attracted much attention, and several graph auto-encoder based clustering models have been developed for attributed graph clustering. However, most existing approaches separate clustering and the optimization of the graph auto-encoder into two individual steps. In this paper, we propose a graph convolutional network based clustering model, namely, Embedding Graph Auto-Encoder with JOint Clustering via Adjacency Sharing (\textit{EGAE-JOCAS}). As the embedded model, we develop a novel joint clustering method that combines relaxed k-means and spectral clustering and is applicable to the learned embedding. The proposed joint clustering shares the same adjacency matrix with the graph convolution layers. The two parts are optimized simultaneously by alternately performing SGD and taking closed-form solutions, ensuring rapid convergence. Moreover, our model is free to incorporate any mechanisms (e.g., attention) into the graph auto-encoder. Extensive experiments demonstrate the superiority of EGAE-JOCAS, and sufficient theoretical analyses are provided to support the results.
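The closed-form half of such an alternating scheme can be illustrated for relaxed k-means. Under the standard relaxation (and independently of EGAE-JOCAS's exact formulation), replacing the discrete cluster-indicator matrix with any orthonormal $F$ turns $\min_F \|Z - F F^\top Z\|_F^2$ into a trace maximization solved by the top-$k$ eigenvectors of $Z Z^\top$; the SGD half would then update the encoder producing $Z$.

```python
import numpy as np

def relaxed_kmeans_F(Z, k):
    """Closed-form solution of relaxed k-means (sketch): with F^T F = I,
    min ||Z - F F^T Z||_F^2 is attained by the top-k eigenvectors of
    Z Z^T (equivalently, the top-k left singular vectors of Z)."""
    _, vecs = np.linalg.eigh(Z @ Z.T)
    return vecs[:, -k:]

def recon_err(Z, F):
    """Relaxed k-means objective for a given orthonormal F."""
    return np.linalg.norm(Z - F @ F.T @ Z) ** 2

rng = np.random.default_rng(0)
Z = np.vstack([rng.normal(0, 0.1, (5, 3)),    # toy embeddings, 2 clusters
               rng.normal(4, 0.1, (5, 3))])
F = relaxed_kmeans_F(Z, k=2)
# any other orthonormal F' cannot beat the eigenvector solution
Q = np.linalg.qr(rng.normal(size=(10, 2)))[0]
```

Because this subproblem has an exact solution, alternating it with SGD steps on the encoder gives the rapid convergence the abstract refers to.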