In graph signal processing, one of the most important subjects is the study of filters, i.e., linear transformations that capture relations between graph signals. One of the most important families of filters is the space of shift invariant filters, defined as transformations that commute with a preferred graph shift operator. Shift invariant filters have a wide range of applications in graph signal processing and graph neural networks. A shift invariant filter can be interpreted geometrically as a procedure that aggregates information from local neighborhoods, and can be computed easily using matrix multiplication. However, relying solely on shift invariant filters still has drawbacks in applications, such as being restrictively homogeneous. In this paper, we generalize shift invariant filters by introducing and studying semi shift invariant filters. As an application of semi shift invariant filters, we propose a new signal processing framework, subgraph signal processing. Moreover, we also demonstrate how semi shift invariant filters can be used in graph neural networks.
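The defining property above, a filter commuting with a chosen shift operator, can be illustrated with a minimal sketch: any polynomial in the shift operator S commutes with S, and hence is shift invariant. The cycle graph and coefficients below are our own toy assumptions, not the paper's construction.

```python
import numpy as np

# Hypothetical shift operator: adjacency of a 4-node directed cycle.
S = np.roll(np.eye(4), 1, axis=1)

def polynomial_filter(S, coeffs):
    """Return H = sum_k coeffs[k] * S^k, a shift invariant filter."""
    H = np.zeros_like(S)
    P = np.eye(S.shape[0])
    for c in coeffs:
        H += c * P       # accumulate coeffs[k] * S^k
        P = P @ S        # next power of S
    return H

H = polynomial_filter(S, [1.0, 0.5, 0.25])

# Shift invariance: H commutes with S.
print(np.allclose(H @ S, S @ H))  # True
```

Applying H to a graph signal x via `H @ x` aggregates values from each node's multi-hop neighborhood, matching the geometric interpretation above.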
Heterogeneous graphs have multiple node and edge types and are semantically richer than homogeneous graphs. To learn such complex semantics, many graph neural network approaches for heterogeneous graphs use metapaths to capture multi-hop interactions between nodes. Typically, features from non-target nodes are not incorporated into the learning procedure. However, there can be nonlinear, high-order interactions involving multiple nodes or edges. In this paper, we present the Simplicial Graph Attention Network (SGAT), a simplicial complex approach that represents such high-order interactions by placing features from non-target nodes on the simplices. We then use attention mechanisms and upper adjacencies to generate representations. We empirically demonstrate the efficacy of our approach on node classification tasks over heterogeneous graph datasets, and further show SGAT's ability to extract structural information by employing random node features. Numerical experiments indicate that SGAT outperforms other current state-of-the-art heterogeneous graph learning methods.
Graph signal processing is a framework to handle graph structured data. The fundamental concept is the graph shift operator, which gives rise to the graph Fourier transform. While the graph Fourier transform is a centralized procedure, distributed graph signal processing algorithms are needed to address challenges such as scalability and privacy. In this paper, we develop a theory of distributed graph signal processing based on the classical notion of message passing. However, we generalize the definition of a message to permit more abstract mathematical objects. The framework provides an alternative point of view that avoids the iterative nature of existing approaches to distributed graph signal processing. Moreover, our framework facilitates investigating theoretical questions such as the solvability of distributed problems.
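The classical message-passing picture that the abstract generalizes can be sketched concretely: each vertex sends its (edge-weighted) signal value to its neighbors, and applying the shift operator becomes a purely local aggregation of received messages, with no central coordinator. The 3-node operator below is our own toy example, not the paper's framework.

```python
import numpy as np

# Hypothetical 3-node shift operator and a graph signal.
S = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [1., 0., 0.]])
x = np.array([1.0, 2.0, 3.0])

n = S.shape[0]
inbox = {i: [] for i in range(n)}
for i in range(n):
    for j in range(n):
        if S[i, j] != 0:
            # edge (i, j) carries the message S[i, j] * x[j] to vertex i
            inbox[i].append(S[i, j] * x[j])

# Local aggregation at each vertex recovers the shifted signal S @ x.
y = np.array([sum(inbox[i]) for i in range(n)])
```

Each vertex only needs its incoming messages, so the computation distributes naturally across the graph.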
Graph signal processing (GSP) is a framework to analyze and process graph structured data. Many research works focus on developing tools such as graph Fourier transforms (GFT), filters, and neural network models to handle graph signals. Such approaches have successfully handled the "signal processing" side in many circumstances. In this paper, we instead put the emphasis on "graph signals" themselves. Although graph signals can be characterized using the notion of bandwidth derived from the GFT, we argue here that graph signals may contain hidden geometric information about the network, independent of (graph) Fourier theories. We provide a framework to understand such information, and demonstrate how new knowledge of "graph signals" can help with "signal processing".
Sampling and interpolation have been extensively studied in order to reconstruct or estimate an entire graph signal from its values on a subset of vertices; most existing results, however, concern continuous-valued signals. In many signal processing tasks, signals are not fully observed and only their signs are available; for example, a rating system may provide only a few simple options. In this paper, we discuss the reconstruction of bandlimited graph signals from sign samples and propose a greedy sampling strategy. Simulation experiments comparing the greedy sampling algorithm with random sampling verify the validity of the proposed approach.
Graph signal processing (GSP) uses a shift operator to define a Fourier basis for the set of graph signals. The shift operator is often chosen to capture the graph topology. However, in many applications, the graph topology may be unknown a priori, its structure uncertain, or generated randomly from a predefined set for each observation. Each graph topology gives rise to a different shift operator. In this paper, we develop a GSP framework over a probability space of shift operators. We develop the corresponding notions of Fourier transform, convolution, and band-pass filters in this framework, which subsumes traditional GSP theory as the special case where the probability space consists of a single shift operator. We show that a convolution filter under this framework is the expectation of random convolution filters in traditional GSP, while the notion of bandlimitedness requires additional wiggle room beyond being simply a fixed point of a band-pass filter. We develop a mechanism that facilitates mapping from one space of shift operators to another, which allows our framework to be applied to a rich set of scenarios. Numerical results on both synthetic and real datasets verify the superiority of performing GSP over a probability space of shift operators versus restricting to a single shift operator.
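The "convolution filter as an expectation of random convolution filters" statement can be sketched for a finite probability space: fix polynomial coefficients, build the per-realization filter for each shift operator, and average with the probabilities. The random adjacencies and probabilities below are our own toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Hypothetical finite probability space: three random symmetric adjacencies.
shifts = []
for _ in range(3):
    A = (rng.random((n, n)) < 0.4).astype(float)
    shifts.append(np.triu(A, 1) + np.triu(A, 1).T)
probs = np.array([0.5, 0.3, 0.2])  # probability of each shift operator

def poly_filter(S, coeffs):
    """H(S) = sum_k coeffs[k] * S^k."""
    H, P = np.zeros_like(S), np.eye(S.shape[0])
    for c in coeffs:
        H += c * P
        P = P @ S
    return H

coeffs = [1.0, 0.3]
# Expected convolution filter: E[H(S)] = sum_i p_i * H(S_i).
H_exp = sum(p * poly_filter(S, coeffs) for p, S in zip(probs, shifts))

x = rng.standard_normal(n)
y = H_exp @ x  # filtering a signal with the expected filter
```

By linearity, filtering with the expected filter equals the expectation of the per-realization filtered signals, matching the special case where the space has a single shift operator.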
Pre-trained Language Models (PLMs) have achieved great success on Machine Reading Comprehension (MRC) over the past few years. Although the general language representation learned from large-scale corpora does benefit MRC, the poor support for evidence extraction, which requires reasoning across multiple sentences, hinders PLMs from further advancing MRC. To bridge the gap between general PLMs and MRC, we present REPT, a REtrieval-based Pre-Training approach. In particular, we introduce two self-supervised tasks to strengthen evidence extraction during pre-training, which are then inherited by downstream MRC tasks through the consistent retrieval operation and model architecture. To evaluate our proposed method, we conduct extensive experiments on five MRC datasets that require collecting evidence from and reasoning across multiple sentences. Experimental results demonstrate the effectiveness of our pre-training approach. Moreover, further analysis shows that our approach is able to enhance the capacity for evidence extraction without explicit supervision.
A number of studies point out that current Visual Question Answering (VQA) models are severely affected by the language prior problem, which refers to blindly making predictions based on the language shortcut. Some efforts have been devoted to overcoming this issue with delicate models. However, no research addresses it from the angle of answer feature space learning, despite the fact that existing VQA methods all cast VQA as a classification task. Inspired by this, in this work, we attempt to tackle the language prior problem from the viewpoint of feature space learning. To this end, an adapted margin cosine loss is designed to properly discriminate between the frequent and the sparse answer feature spaces under each question type. As a result, the limited patterns within the language modality are largely reduced, and thereby fewer language priors are introduced by our method. We apply this loss function to several baseline models and evaluate its effectiveness on two VQA-CP benchmarks. Experimental results demonstrate that our adapted margin cosine loss can greatly enhance the baseline models with an absolute performance gain of 15\% on average, strongly verifying the potential of tackling the language prior problem in VQA from the angle of answer feature space learning.
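As background for the loss named above, here is a hedged sketch of a standard additive margin cosine loss (CosFace-style) over a classification feature space; the paper's adapted per-question-type variant is not reproduced, and all names and values below are our own illustrative assumptions.

```python
import numpy as np

def margin_cosine_loss(features, centers, label, s=30.0, m=0.35):
    """Cross-entropy over scaled cosine logits, with an additive margin
    subtracted from the true class's cosine similarity."""
    f = features / np.linalg.norm(features)
    W = centers / np.linalg.norm(centers, axis=1, keepdims=True)
    cos = W @ f                            # cosine similarity to each class
    logits = s * cos
    logits[label] = s * (cos[label] - m)   # margin tightens the true class
    logits -= logits.max()                 # numerical stability
    p = np.exp(logits) / np.exp(logits).sum()
    return -np.log(p[label])

rng = np.random.default_rng(1)
feats = rng.standard_normal(8)             # toy answer feature
centers = rng.standard_normal((5, 8))      # toy class weight vectors
loss = margin_cosine_loss(feats, centers, label=2)
```

The margin m forces features of the labeled class to be separated from other classes by a cosine gap, which is the mechanism the adapted loss builds on to separate frequent from sparse answers.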
A heterogeneous graph consists of different vertex and edge types. Learning on heterogeneous graphs typically employs meta-paths to deal with the heterogeneity by reducing the graph to a homogeneous network, guiding random walks, or capturing semantics. These methods are, however, sensitive to the choice of meta-paths, with suboptimal paths leading to poor performance. In this paper, we propose an approach for learning on heterogeneous graphs without using meta-paths. Specifically, we decompose a heterogeneous graph into different homogeneous relation-type graphs, which are then combined to create higher-order relation-type representations. These representations preserve the heterogeneity of edges and retain their edge directions while capturing the interaction of different vertex types multiple hops apart. This is then complemented with attention mechanisms to distinguish the importance of the relation-type based neighbors and the relation-types themselves. Experiments demonstrate that our model generally outperforms other state-of-the-art baselines in the vertex classification task on three commonly studied heterogeneous graph datasets.
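The relation-type decomposition and composition described above can be sketched with a toy edge list (our own example, not the authors' code): split the heterogeneous graph into one directed adjacency matrix per relation type, then multiply matrices to obtain higher-order, direction-preserving relation-type interactions.

```python
import numpy as np

# Hypothetical 4-node heterogeneous graph with two relation types.
n = 4
edges = [(0, 1, "writes"), (1, 2, "cites"), (0, 3, "writes"), (3, 2, "cites")]

def relation_adjacency(edges, relation, n):
    """Directed adjacency matrix for a single relation type."""
    A = np.zeros((n, n))
    for u, v, r in edges:
        if r == relation:
            A[u, v] = 1.0  # directed edge u -> v
    return A

A_w = relation_adjacency(edges, "writes", n)
A_c = relation_adjacency(edges, "cites", n)

# Higher-order relation "writes then cites": entry (i, j) counts the
# 2-hop directed paths from i to j through that ordered relation pair.
A_wc = A_w @ A_c
```

Edge directions survive because the matrices are not symmetrized, and each relation type keeps its own adjacency, preserving the heterogeneity of edges.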