Ling Tian

Intensity-free Convolutional Temporal Point Process: Incorporating Local and Global Event Contexts

Jun 24, 2023
Wang-Tao Zhou, Zhao Kang, Ling Tian, Yi Su

Event prediction in the continuous-time domain is a crucial but rather difficult task. Temporal point process (TPP) learning models have shown great advantages in this area. Existing models mainly focus on encoding the global context of events using techniques such as recurrent neural networks (RNNs) or self-attention mechanisms. However, local event contexts also play an important role in the occurrence of events, which has been largely ignored. Popular convolutional neural networks, which are designed to capture local context, have never been applied to TPP modelling because of their inability to model continuous time. In this work, we propose a novel TPP modelling approach that combines local and global contexts by integrating a continuous-time convolutional event encoder with an RNN. The presented framework is flexible and scalable enough to handle large datasets with long sequences and complex latent patterns. Experimental results show that the proposed model improves both probabilistic sequential modelling and the accuracy of event prediction. To the best of our knowledge, this is the first work to apply convolutional neural networks to TPP modelling.
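The continuous-time convolution idea can be illustrated with a toy encoder. The exponential-decay kernel, decay rate, and window size below are illustrative assumptions, not the paper's actual architecture: each event's local context is a sum over its recent predecessors weighted by a kernel of the real-valued time gap, rather than a fixed-stride discrete convolution.

```python
import numpy as np

def ct_conv_encode(times, feats, decay=1.0, window=3):
    """Toy continuous-time convolution over an event sequence.

    Each event i is encoded as a kernel-weighted sum over itself and up
    to `window` preceding events, where the weight depends on the
    continuous time gap t_i - t_j (here an exponential-decay kernel)."""
    n, d = feats.shape
    out = np.zeros((n, d))
    for i in range(n):
        for j in range(max(0, i - window), i + 1):
            w = np.exp(-decay * (times[i] - times[j]))  # kernel on the gap
            out[i] += w * feats[j]
    return out
```

A learnable version would parameterize the kernel (e.g., with basis functions) and stack such layers before the RNN that models the global context.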

* Accepted to Information Sciences 

TieFake: Title-Text Similarity and Emotion-Aware Fake News Detection

Apr 19, 2023
Quanjiang Guo, Zhao Kang, Ling Tian, Zhouguo Chen

Fake news detection aims to detect fake news that spreads widely on social media platforms and can negatively influence the public and the government. Many approaches have been developed to exploit relevant information from news images, text, or videos. However, these methods may suffer from the following limitations: (1) they ignore the inherent emotional information of the news, which could be beneficial since it carries the subjective intentions of the authors; (2) they pay little attention to the relation (similarity) between the title and the textual information of news articles, which often use irrelevant titles to attract readers' attention. To this end, we propose a novel Title-Text similarity and emotion-aware Fake news detection (TieFake) method that jointly models multi-modal context information and author sentiment in a unified framework. Specifically, we employ BERT and ResNeSt to learn representations for text and images, respectively, and utilize a publisher emotion extractor to capture the author's subjective emotion in the news content. We also propose a scaled dot-product attention mechanism to capture the similarity between title features and textual features. Experiments conducted on two publicly available multi-modal datasets demonstrate that our proposed method significantly improves the performance of fake news detection. Our code is available at https://github.com/UESTC-GQJ/TieFake.
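The title-text attention can be sketched as plain scaled dot-product attention, with title features as queries and textual token features as keys and values. The dimensions and inputs below are illustrative, not TieFake's actual tensors:

```python
import numpy as np

def scaled_dot_attention(q, k, v):
    """Scaled dot-product attention: softmax(q k^T / sqrt(d)) v."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)            # attention rows sum to 1
    return w @ v

# Title features attend over textual token features.
rng = np.random.default_rng(0)
title = rng.standard_normal((1, 8))   # one pooled title vector (query)
text = rng.standard_normal((5, 8))    # five token vectors (keys/values)
fused = scaled_dot_attention(title, text, text)
```

The output is a convex combination of the value rows, so the fused title representation emphasizes the text tokens most similar to the title.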

* Appears in IJCNN 2023 

Spacecraft Anomaly Detection with Attention Temporal Convolution Network

Mar 13, 2023
Liang Liu, Ling Tian, Zhao Kang, Tianqi Wan

Spacecraft face various situations when carrying out exploration missions in complex space environments, so monitoring the anomaly status of spacecraft is crucial to the development of the aerospace industry. The time-series telemetry data generated by on-orbit spacecraft contain important information about the status of the spacecraft. However, traditional domain-knowledge-based spacecraft anomaly detection methods are not effective because of the high dimensionality of, and complex correlation among, the variables. In this work, we propose an anomaly detection framework for spacecraft multivariate time-series data based on temporal convolution networks (TCNs). First, we employ dynamic graph attention to model the complex correlation among variables and time series. Second, temporal convolution networks with parallel processing ability are used to extract multidimensional features for the downstream prediction task. Finally, potential anomalies are detected using the best threshold. Experiments on real NASA SMAP/MSL spacecraft datasets show the superiority of our proposed model over state-of-the-art methods.
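The TCN building block is a causal (and typically dilated) 1-D convolution over the telemetry time axis, which is what enables the parallel processing mentioned above. A minimal numpy sketch; the single-layer setup and shapes are illustrative, while the paper's network stacks such layers with attention:

```python
import numpy as np

def causal_dilated_conv(x, w, dilation=1):
    """Causal dilated 1-D convolution.

    x: (T, C_in) multivariate time series; w: (K, C_in, C_out) kernel.
    The output at time t depends only on x[t], x[t-d], ..., x[t-(K-1)d]
    (zero-padded on the left), so no future values leak in."""
    K = w.shape[0]
    pad = (K - 1) * dilation
    xp = np.concatenate([np.zeros((pad, x.shape[1])), x], axis=0)
    out = np.zeros((x.shape[0], w.shape[2]))
    for t in range(x.shape[0]):
        for k in range(K):
            # tap k reaches k * dilation steps into the past
            out[t] += xp[t + pad - k * dilation] @ w[K - 1 - k]
    return out
```

Stacking layers with dilations 1, 2, 4, ... grows the receptive field exponentially; a threshold on the resulting prediction error then flags anomalies.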

Document-level Relation Extraction with Cross-sentence Reasoning Graph

Mar 07, 2023
Hongfei Liu, Zhao Kang, Lizong Zhang, Ling Tian, Fujun Hua

Relation extraction (RE) has recently moved from the sentence level to the document level, which requires aggregating document information and using entities and mentions for reasoning. Existing works put entity nodes and mention nodes with similar representations in a document-level graph, whose complex edges may introduce redundant information. Furthermore, existing studies focus only on entity-level reasoning paths without considering global interactions among cross-sentence entities. To address these issues, we propose a novel document-level RE model with a GRaph information Aggregation and Cross-sentence Reasoning network (GRACR). Specifically, a simplified document-level graph is constructed to model the semantic information of all mentions and sentences in a document, and an entity-level graph is designed to explore relations between long-distance cross-sentence entity pairs. Experimental results show that GRACR achieves excellent performance on two public document-level RE datasets. It is especially effective at extracting potential relations between cross-sentence entity pairs. Our code is available at https://github.com/UESTC-LHF/GRACR.

* This paper is accepted by PAKDD 2023 

Semantic Representation and Dependency Learning for Multi-Label Image Recognition

Apr 08, 2022
Tao Pu, Lixian Yuan, Hefeng Wu, Tianshui Chen, Ling Tian, Liang Lin

Recently, many multi-label image recognition (MLR) works have made significant progress by introducing pre-trained object detection models to generate large numbers of proposals or by utilizing statistical label co-occurrence to enhance the correlation among different categories. However, these works have some limitations: (1) the effectiveness of the network depends heavily on pre-trained object detection models, which bring expensive and unaffordable computation; (2) the network performance degrades when occasionally co-occurring objects appear in images, especially for rare categories. To address these problems, we propose a novel and effective semantic representation and dependency learning (SRDL) framework that learns a category-specific semantic representation for each category and captures semantic dependencies among all categories. Specifically, we design a category-specific attentional regions (CAR) module that generates channel- and spatial-wise attention matrices to guide the model to focus on semantic-aware regions. We also design an object erasing (OE) module that implicitly learns semantic dependencies among categories by erasing semantic-aware regions to regularize network training. Extensive experiments and comparisons on two popular MLR benchmark datasets (i.e., MS-COCO and Pascal VOC 2007) demonstrate the effectiveness of the proposed framework over current state-of-the-art algorithms.

* 25 pages, 7 figures 

Multilayer Graph Contrastive Clustering Network

Dec 28, 2021
Liang Liu, Zhao Kang, Ling Tian, Wenbo Xu, Xixu He

Multilayer graphs have garnered plenty of research attention in many areas due to their high utility in modeling interdependent systems. However, clustering of multilayer graphs, which aims to divide the graph nodes into categories or communities, is still at a nascent stage. Existing methods are often limited to exploiting multi-view attributes or multiple networks and ignore more complex and richer network frameworks. To this end, we propose a generic and effective autoencoder framework for multilayer graph clustering named Multilayer Graph Contrastive Clustering Network (MGCCN). MGCCN consists of three modules: (1) an attention mechanism is applied to better capture the relevance between nodes and their neighbors for better node embeddings; (2) a contrastive fusion strategy is introduced to better explore the consistent information in different networks; (3) a self-supervised component iteratively strengthens the node embeddings and the clustering. Extensive experiments on different types of real-world graph data indicate that our proposed method outperforms state-of-the-art techniques.

Self-supervised Consensus Representation Learning for Attributed Graph

Aug 10, 2021
Changshu Liu, Liangjian Wen, Zhao Kang, Guangchun Luo, Ling Tian

To fully exploit the rich information of topological structure and node features in attributed graphs, we introduce a self-supervised learning mechanism to graph representation learning and propose a novel Self-supervised Consensus Representation Learning (SCRL) framework. In contrast to most existing works that explore only one graph, our proposed SCRL method treats the graph from two perspectives: the topology graph and the feature graph. We argue that their embeddings should share some common information, which can serve as a supervisory signal. Specifically, we construct the feature graph from node features via the k-nearest neighbor algorithm. Then graph convolutional network (GCN) encoders extract features from the two graphs respectively. A self-supervised loss is designed to maximize the agreement between the embeddings of the same node in the topology graph and the feature graph. Extensive experiments on real citation networks and social networks demonstrate the superiority of our proposed SCRL over state-of-the-art methods on the semi-supervised node classification task. Meanwhile, compared with its main competitors, SCRL is rather efficient.
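The feature-graph construction is the standard kNN step; a minimal sketch, where the Euclidean metric and the symmetrization rule are common defaults assumed here rather than details taken from the paper:

```python
import numpy as np

def knn_graph(X, k=2):
    """Symmetric kNN adjacency built from node features.

    Each node is linked to its k nearest neighbors (Euclidean), and the
    result is symmetrized so the feature graph is undirected."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)             # exclude self-matches
    n = X.shape[0]
    A = np.zeros((n, n))
    nearest = np.argsort(d2, axis=1)[:, :k]
    for i in range(n):
        A[i, nearest[i]] = 1.0
    return np.maximum(A, A.T)                # symmetrize
```

Both this feature graph and the given topology graph are then fed to GCN encoders, and the self-supervised loss pulls the two embeddings of each node toward agreement.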

* Accepted by ACM Multimedia 2021 

Self-paced Principal Component Analysis

Jun 25, 2021
Zhao Kang, Hongfei Liu, Jiangxin Li, Xiaofeng Zhu, Ling Tian

Principal Component Analysis (PCA) has been widely used for dimensionality reduction and feature extraction. Robust PCA (RPCA), under different robust distance metrics such as the l1-norm and the l2,p-norm, can deal with noise or outliers to some extent. However, real-world data may display structures that cannot be fully captured by these simple functions. In addition, existing methods treat complex and simple samples equally. By contrast, a learning pattern typically adopted by human beings is to learn from simple to complex and from less to more. Based on this principle, we propose a novel method called Self-paced PCA (SPCA) to further reduce the effect of noise and outliers. Notably, the complexity of each sample is calculated at the beginning of each iteration in order to incorporate samples into training from simple to more complex. Based on an alternating optimization, SPCA finds an optimal projection matrix and filters out outliers iteratively. A theoretical analysis is presented to show the rationality of SPCA. Extensive experiments on popular data sets demonstrate that the proposed method can improve state-of-the-art results considerably.
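The easy-to-hard scheme can be sketched as a hard self-paced weighting wrapped around ordinary weighted PCA. The 0/1 weights, the median-based initial age parameter, and the growth factor are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def weighted_pca(X, n_comp, w):
    """PCA fitted on weighted samples; returns the projection matrix
    and each sample's reconstruction error (its 'difficulty')."""
    mu = (X * w[:, None]).sum(0) / max(w.sum(), 1e-12)
    Xc = X - mu
    _, _, Vt = np.linalg.svd(Xc * np.sqrt(w)[:, None], full_matrices=False)
    P = Vt[:n_comp].T                             # orthonormal columns
    err = ((Xc - Xc @ P @ P.T) ** 2).sum(1)
    return P, err

def self_paced_pca(X, n_comp=1, n_iter=5, growth=1.5):
    w = np.ones(X.shape[0])
    P, err = weighted_pca(X, n_comp, w)
    lam = np.median(err) + 1e-12                  # initial age parameter
    for _ in range(n_iter):
        w = (err < lam).astype(float)             # keep only 'easy' samples
        if w.sum() == 0:
            w[:] = 1.0
        P, err = weighted_pca(X, n_comp, w)
        lam *= growth                             # admit harder samples
    return P, w
```

Each iteration alternates between scoring sample difficulty under the current projection and refitting on the currently admitted samples, mirroring the alternating optimization described above.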

Towards Clustering-friendly Representations: Subspace Clustering via Graph Filtering

Jun 18, 2021
Zhengrui Ma, Zhao Kang, Guangchun Luo, Ling Tian

Finding a suitable data representation for a specific task has been shown to be crucial in many applications. The success of subspace clustering depends on the assumption that the data can be separated into different subspaces. However, this simple assumption does not always hold, since the raw data might not be separable into subspaces. To recover a "clustering-friendly" representation and facilitate the subsequent clustering, we propose a graph filtering approach that achieves a smooth representation. Specifically, it injects graph similarity into the data features by applying a low-pass filter to extract useful data representations for clustering. Extensive experiments on image and document clustering datasets demonstrate that our method improves upon state-of-the-art subspace clustering techniques. In particular, its performance, comparable to that of deep learning methods, emphasizes the effectiveness of the simple graph filtering scheme for many real-world applications. An ablation study shows that graph filtering can remove noise, preserve structure in the image, and increase the separability of classes.
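The low-pass filtering step itself is compact enough to sketch. The symmetrically normalized Laplacian L and a k-th order filter of the form (I - L/2)^k are assumptions about the exact filter adopted, though they are the common choice for this kind of smooth representation:

```python
import numpy as np

def graph_filter(X, A, k=2):
    """Smooth node features X with the low-pass filter (I - L/2)^k,
    where L is the symmetrically normalized Laplacian of A (with
    self-loops added). Low-frequency (smooth) components pass; noisy
    high-frequency components are attenuated, which makes clusters
    easier to separate."""
    n = A.shape[0]
    A = A + np.eye(n)                        # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(1))
    A_norm = d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    L = np.eye(n) - A_norm                   # normalized Laplacian
    F = np.eye(n) - 0.5 * L                  # low-pass filter
    H = X.copy()
    for _ in range(k):
        H = F @ H                            # k filtering passes
    return H
```

The smoothed H then simply replaces the raw features in a standard subspace clustering pipeline.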

* Published in ACM Multimedia 2020 

Distilling Self-Knowledge From Contrastive Links to Classify Graph Nodes Without Passing Messages

Jun 16, 2021
Yi Luo, Aiguo Chen, Ke Yan, Ling Tian

Nowadays, Graph Neural Networks (GNNs) following the message passing paradigm have become the dominant way to learn on graph data. Models in this paradigm have to spend extra space to look up adjacent nodes with adjacency matrices and extra time to aggregate multiple messages from adjacent nodes. To address this issue, we develop a method called LinkDist that distils self-knowledge from connected node pairs into a Multi-Layer Perceptron (MLP) without the need to aggregate messages. Experiments with 8 real-world datasets show that the MLP derived from LinkDist can predict the label of a node without knowing its adjacencies, yet achieves accuracy comparable to GNNs in semi- and full-supervised node classification. Moreover, LinkDist benefits from its non-message-passing paradigm in that we can also distil self-knowledge from arbitrarily sampled node pairs in a contrastive way to further boost its performance.
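The core trick, using edges only to build training pairs and then dropping the graph at inference, can be sketched with a linear softmax classifier standing in for the MLP. The cross-edge pairing rule follows the idea described above; the classifier, learning rate, and toy data are assumptions:

```python
import numpy as np

def train_linkdist(X, y, edges, n_cls, lr=0.5, epochs=200, seed=0):
    """For every edge (u, v), train f(x_u) to predict y_v and f(x_v) to
    predict y_u; adjacency is consumed here and never needed again."""
    rng = np.random.default_rng(seed)
    W = 0.01 * rng.standard_normal((X.shape[1], n_cls))
    src, tgt = [], []
    for u, v in edges:
        src += [u, v]
        tgt += [y[v], y[u]]                  # cross-edge distillation targets
    Xs, T = X[src], np.eye(n_cls)[tgt]
    for _ in range(epochs):
        Z = Xs @ W
        Z -= Z.max(axis=1, keepdims=True)    # stable softmax
        P = np.exp(Z)
        P /= P.sum(axis=1, keepdims=True)
        W -= lr * Xs.T @ (P - T) / len(Xs)   # softmax cross-entropy step
    return W

def predict(X, W):
    return (X @ W).argmax(axis=1)            # no adjacency lookups at all
```

Because neighboring nodes tend to share labels, predicting a neighbor's label from a node's own features transfers the graph's structural signal into a purely feature-based classifier.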

* 9 pages, 2 figures, 4 tables 