Yuan-Fang Li

ChatRule: Mining Logical Rules with Large Language Models for Knowledge Graph Reasoning

Sep 13, 2023
Linhao Luo, Jiaxin Ju, Bo Xiong, Yuan-Fang Li, Gholamreza Haffari, Shirui Pan

Logical rules are essential for uncovering the logical connections between relations, which can improve reasoning performance and provide interpretable results on knowledge graphs (KGs). Although there have been many efforts to mine meaningful logical rules over KGs, existing methods suffer from computationally intensive searches over the rule space and a lack of scalability to large-scale KGs. Moreover, they often ignore the semantics of relations, which is crucial for uncovering logical connections. Recently, large language models (LLMs) have shown impressive performance in natural language processing and various applications, owing to their emergent abilities and generalizability. In this paper, we propose ChatRule, a novel framework that unleashes the power of large language models for mining logical rules over knowledge graphs. Specifically, the framework starts with an LLM-based rule generator that leverages both the semantic and structural information of KGs to prompt LLMs to generate logical rules. To refine the generated rules, a rule-ranking module estimates rule quality by incorporating facts from existing KGs. Finally, a rule validator harnesses the reasoning ability of LLMs to validate the logical correctness of the ranked rules through chain-of-thought reasoning. ChatRule is evaluated on four large-scale KGs with respect to different rule-quality metrics and downstream tasks, demonstrating the effectiveness and scalability of our method.

* 11 pages, 4 figures 
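
To make the ranking step concrete, here is a minimal sketch of scoring a mined rule against KG facts. The abstract does not spell out ChatRule's exact quality measures, so plain support and confidence for a two-hop rule r1(x, z) ∧ r2(z, y) → h(x, y) are used as an illustrative stand-in:

```python
# A minimal sketch, assuming support/confidence as the quality measure;
# the paper's actual ranking metrics may differ.
from collections import defaultdict

def rank_rule(triples, body, head):
    """Score a two-hop rule against (subject, relation, object) facts."""
    pairs = defaultdict(list)
    for s, r, o in triples:
        pairs[r].append((s, o))
    facts = set(triples)
    r1, r2 = body
    support, n_groundings = 0, 0
    for x, z in pairs[r1]:
        for z2, y in pairs[r2]:
            if z == z2:                    # body is satisfied by (x, z, y)
                n_groundings += 1
                if (x, head, y) in facts:  # head also holds in the KG
                    support += 1
    confidence = support / n_groundings if n_groundings else 0.0
    return support, confidence

kg = [("alice", "mother_of", "bob"),
      ("bob", "father_of", "carol"),
      ("alice", "grandmother_of", "carol")]
print(rank_rule(kg, body=("mother_of", "father_of"), head="grandmother_of"))
# -> (1, 1.0)
```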

Generating Faithful Text From a Knowledge Graph with Noisy Reference Text

Aug 12, 2023
Tahsina Hashem, Weiqing Wang, Derry Tanti Wijaya, Mohammed Eunus Ali, Yuan-Fang Li

Knowledge Graph (KG)-to-Text generation aims to generate fluent natural-language text that accurately represents the information of a given knowledge graph. While significant progress has been made on this task by exploiting the power of pre-trained language models (PLMs) with appropriate graph structure-aware modules, existing models still fall short of generating faithful text, especially when the ground-truth natural-language text contains additional information that is not present in the graph. In this paper, we develop a KG-to-text generation model that can generate faithful natural-language text from a given graph in the presence of noisy reference text. Our framework incorporates two core ideas. First, we use contrastive learning to enhance the model's ability to differentiate between faithful and hallucinated information in the text, thereby encouraging the decoder to generate text that aligns with the input graph. Second, we empower the decoder to control the level of hallucination in the generated text by employing a controllable text generation technique. We evaluate our model's performance using standard quantitative metrics as well as a ChatGPT-based quantitative and qualitative analysis. Our evaluation demonstrates the superior performance of our model over state-of-the-art KG-to-text models on faithfulness.
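
As a rough illustration of the first idea, the sketch below scores a graph encoding against one faithful and several hallucinated text encodings with an InfoNCE-style contrastive loss. The encoders, temperature, and use of cosine similarity are assumptions for illustration, not the paper's exact formulation:

```python
# A toy contrastive objective: pull the faithful text encoding toward the
# graph encoding, push hallucinated negatives away. Illustrative only.
import torch
import torch.nn.functional as F

def contrastive_loss(graph_emb, faithful_emb, hallucinated_embs, tau=0.1):
    pos = F.cosine_similarity(graph_emb, faithful_emb, dim=-1) / tau
    negs = F.cosine_similarity(
        graph_emb.unsqueeze(0), hallucinated_embs, dim=-1) / tau
    logits = torch.cat([pos.unsqueeze(0), negs])       # positive at index 0
    return F.cross_entropy(logits.unsqueeze(0),
                           torch.zeros(1, dtype=torch.long))

g = torch.randn(64)          # graph encoding
f_txt = torch.randn(64)      # faithful reference encoding
h_txt = torch.randn(5, 64)   # hallucinated negatives
print(contrastive_loss(g, f_txt, h_txt).item())
```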


Informative Scene Graph Generation via Debiasing

Aug 10, 2023
Lianli Gao, Xinyu Lyu, Yuyu Guo, Yuxuan Hu, Yuan-Fang Li, Lu Xu, Heng Tao Shen, Jingkuan Song

Scene graph generation aims to detect visual relationship triplets (subject, predicate, object). Due to biases in the data, current models tend to predict common predicates, e.g. "on" and "at", instead of informative ones, e.g. "standing on" and "looking at". This tendency results in the loss of precise information and degrades overall performance. If a model uses "stone on road" rather than "stone blocking road" to describe an image, it may cause a grave misunderstanding. We argue that this phenomenon is caused by two imbalances: a semantic-space-level imbalance and a training-sample-level imbalance. To address this problem, we propose DB-SGG, an effective framework based on debiasing rather than conventional distribution fitting. It integrates two components, Semantic Debiasing (SD) and Balanced Predicate Learning (BPL), to address these two imbalances. SD utilizes a confusion matrix and a bipartite graph to construct predicate relationships. BPL adopts a random undersampling strategy and an ambiguity-removal strategy to focus on informative predicates. Because the process is model-agnostic, our method can easily be applied to SGG models, and it outperforms Transformer by 136.3%, 119.5%, and 122.6% on mR@20 across three SGG sub-tasks on the SGG-VG dataset. Our method is further verified on another complex SGG dataset (SGG-GQA) and two downstream tasks (sentence-to-graph retrieval and image captioning).

* arXiv admin note: substantial text overlap with arXiv:2108.13129 
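
A minimal sketch of BPL's random undersampling, assuming a simple per-predicate cap; the paper's actual sampling schedule and its ambiguity-removal step are not reproduced here:

```python
# Randomly undersample over-represented (head) predicates so that
# informative tail predicates are not drowned out. Cap value is illustrative.
import random
from collections import defaultdict

def undersample(triplets, cap=2, seed=0):
    """triplets: list of (subject, predicate, object) training samples."""
    rng = random.Random(seed)
    by_pred = defaultdict(list)
    for t in triplets:
        by_pred[t[1]].append(t)
    balanced = []
    for pred, items in by_pred.items():
        rng.shuffle(items)
        balanced.extend(items[:cap])  # keep at most `cap` samples per predicate
    return balanced

data = [("stone", "on", "road")] * 5 + [("stone", "blocking", "road")]
print(undersample(data))  # "on" is capped; "blocking" survives intact
```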

NormMark: A Weakly Supervised Markov Model for Socio-cultural Norm Discovery

May 26, 2023
Farhad Moghimifar, Shilin Qu, Tongtong Wu, Yuan-Fang Li, Gholamreza Haffari

Norms, which are culturally accepted guidelines for behaviour, can be integrated into conversational models to generate utterances that are appropriate for the socio-cultural context. Existing methods for norm recognition tend to focus only on surface-level features of dialogues and do not take into account the interactions within a conversation. To address this issue, we propose NormMark, a probabilistic generative Markov model that carries latent features throughout a dialogue. These features are captured by discrete and continuous latent variables conditioned on the conversation history, and they improve the model's ability at norm recognition. The model is trainable on weakly annotated data using variational inference. On a dataset with limited norm annotations, we show that our approach achieves a higher F1 score than current state-of-the-art methods, including GPT-3.
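
To illustrate how a Markov latent state carries context across turns, here is a toy forward pass of a discrete Markov model over a dialogue. The transition and emission tables are made up, and NormMark additionally uses continuous latents and variational training, both omitted here:

```python
# Forward algorithm over observed dialogue-turn features; the latent state
# propagates conversational context from turn to turn.
import numpy as np

def forward(obs, pi, A, B):
    """obs: observation indices; pi: initial probs; A: transitions; B: emissions."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate latent state, then emit
    return alpha.sum()                  # likelihood of the turn sequence

pi = np.array([0.6, 0.4])               # 2 latent states
A = np.array([[0.7, 0.3], [0.2, 0.8]])
B = np.array([[0.9, 0.1], [0.3, 0.7]])  # 2 observable turn features
print(forward([0, 1, 1], pi, A, B))
```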


How Expressive are Spectral-Temporal Graph Neural Networks for Time Series Forecasting?

May 11, 2023
Ming Jin, Guangsi Shi, Yuan-Fang Li, Qingsong Wen, Bo Xiong, Tian Zhou, Shirui Pan

Spectral-temporal graph neural networks are a promising abstraction underlying most time series forecasting models that are based on graph neural networks (GNNs). However, little is known about the theoretical underpinnings of this branch of methods. In this paper, we establish a theoretical framework that unravels the expressive power of spectral-temporal GNNs. Our results show that linear spectral-temporal GNNs are universal under mild assumptions, and that their expressive power is bounded by our extended first-order Weisfeiler-Leman algorithm on discrete-time dynamic graphs. To make our findings practically useful for valid instantiations, we discuss the related constraints in detail and outline a theoretical blueprint for designing spatial and temporal modules in spectral domains. Building on these insights, and to demonstrate how powerful spectral-temporal GNNs can be under our framework, we propose a simple instantiation named Temporal Graph GegenConv (TGC), which, using only linear components, significantly outperforms most existing models while being more efficient.
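
As a sketch of the kind of linear spectral filter behind GegenConv, the code below applies a Gegenbauer polynomial filter to graph signals via the three-term recurrence. The alpha value, filter order, and the simple eigenvalue rescaling are illustrative assumptions rather than TGC's exact design:

```python
# y = sum_k theta_k * C_k^{(alpha)}(L_hat) @ x, where C_k are Gegenbauer
# polynomials: C_0 = I, C_1 = 2*alpha*L_hat,
# k*C_k = 2(k+alpha-1)*L_hat@C_{k-1} - (k+2*alpha-2)*C_{k-2}.
import numpy as np

def gegenbauer_filter(L, x, theta, alpha=1.0):
    n = L.shape[0]
    L_hat = L - np.eye(n)        # crude rescaling of eigenvalues into [-1, 1]
    C_prev = np.eye(n)           # C_0
    C_curr = 2 * alpha * L_hat   # C_1
    y = theta[0] * (C_prev @ x) + theta[1] * (C_curr @ x)
    for k in range(2, len(theta)):
        C_next = (2 * (k + alpha - 1) * (L_hat @ C_curr)
                  - (k + 2 * alpha - 2) * C_prev) / k
        y += theta[k] * (C_next @ x)
        C_prev, C_curr = C_curr, C_next
    return y

A = np.array([[0., 1.], [1., 0.]])                 # tiny 2-node graph
D = np.diag(A.sum(1))
L = np.eye(2) - np.linalg.inv(np.sqrt(D)) @ A @ np.linalg.inv(np.sqrt(D))
print(gegenbauer_filter(L, np.array([1.0, -1.0]), np.array([0.5, 0.3, 0.2])))
```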


Toward the Automated Construction of Probabilistic Knowledge Graphs for the Maritime Domain

May 04, 2023
Fatemeh Shiri, Teresa Wang, Shirui Pan, Xiaojun Chang, Yuan-Fang Li, Reza Haffari, Van Nguyen, Shuang Yu

International maritime crime is becoming increasingly sophisticated, often associated with wider criminal networks. Detecting maritime threats by fusing data related purely to physical movement (i.e., data generated by physical sensors, or hard data) is not sufficient. This has led to research and development efforts aimed at combining hard data with other types of data (especially human-generated, or soft, data). Existing work often assumes that input soft data is available in a structured format, or focuses on extracting certain relevant entities or concepts to accompany or annotate hard data. Much less attention has been given to extracting the rich knowledge about the situations of interest implicitly embedded in the large amount of soft data that exists in unstructured formats (such as intelligence reports and news articles). In order to exploit the potentially useful and rich information from such sources, it is necessary to extract not only the relevant entities and concepts but also their semantic relations, together with the uncertainty associated with the extracted knowledge (i.e., in the form of probabilistic knowledge graphs). This will increase the accuracy of, and confidence in, the extracted knowledge and facilitate subsequent reasoning and learning. To this end, we propose Maritime DeepDive, an initial prototype for the automated construction of probabilistic knowledge graphs from natural-language data for the maritime domain. In this paper, we report on the current implementation of Maritime DeepDive, together with preliminary results on extracting probabilistic events from maritime piracy incidents. The pipeline was evaluated on a manually crafted gold standard, yielding promising results.
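
A toy sketch of the pipeline's final step, turning extracted candidate relations into probabilistic KG facts. The extraction pattern, evidence weight, and event schema here are invented for illustration; Maritime DeepDive itself builds on the DeepDive framework, where factor weights are learned rather than fixed:

```python
import math
import re

def extract_candidates(text):
    # hypothetical pattern: "<actor> attacked <target>"
    for m in re.finditer(r"(\w[\w ]*?) attacked (\w[\w ]*)", text):
        yield (m.group(1).strip(), "attacked", m.group(2).strip())

def to_probabilistic_facts(candidates, weight=1.2):
    # map a (made-up) evidence weight to a probability via the sigmoid,
    # mirroring how factor-graph weights become marginal probabilities
    p = 1.0 / (1.0 + math.exp(-weight))
    return [(s, r, o, round(p, 3)) for s, r, o in candidates]

report = "Armed men attacked a cargo vessel off the coast."
print(to_probabilistic_facts(extract_candidates(report)))
```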


Few-shot Domain-Adaptive Visually-fused Event Detection from Text

May 04, 2023
Fatemeh Shiri, Farhad Moghimifar, Van Nguyen, Reza Haffari, Yuan-Fang Li

Incorporating auxiliary modalities such as images into event detection models has attracted increasing interest over the last few years. The complexity of natural language in describing situations has motivated researchers to leverage the related visual context to improve event detection performance. However, current approaches in this area suffer from data scarcity, where a large amount of labelled text-image pairs are required for model training. Furthermore, limited access to the visual context at inference time negatively impacts the performance of such models, which makes them practically ineffective in real-world scenarios. In this paper, we present a novel domain-adaptive visually-fused event detection approach that can be trained on a few labelled image-text paired data points. Specifically, we introduce a visual imaginator method that synthesises images from text in the absence of visual context. Moreover, the imaginator can be customised to a specific domain. In doing so, our model can leverage the capabilities of pre-trained vision-language models and can be trained in a few-shot setting. This also allows for effective inference where only single-modality data (i.e. text) is available. The experimental evaluation on the benchmark M2E2 dataset shows that our model outperforms existing state-of-the-art models, by up to 11 points.
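
A minimal sketch of the text-only inference path: a hypothetical "imaginator" module maps text features into the visual embedding space so fusion still works when no image is available. Layer sizes and fusion by concatenation are illustrative assumptions:

```python
import torch
import torch.nn as nn

class VisuallyFusedDetector(nn.Module):
    def __init__(self, txt_dim=32, vis_dim=32, n_events=4):
        super().__init__()
        self.imaginator = nn.Linear(txt_dim, vis_dim)  # text -> pseudo-visual
        self.classifier = nn.Linear(txt_dim + vis_dim, n_events)

    def forward(self, txt_feat, vis_feat=None):
        if vis_feat is None:                      # no image at inference time
            vis_feat = self.imaginator(txt_feat)  # "imagine" the visual context
        return self.classifier(torch.cat([txt_feat, vis_feat], dim=-1))

model = VisuallyFusedDetector()
logits = model(torch.randn(2, 32))   # text-only batch
print(logits.shape)                  # -> torch.Size([2, 4])
```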


Normalizing Flow-based Neural Process for Few-Shot Knowledge Graph Completion

Apr 17, 2023
Linhao Luo, Yuan-Fang Li, Gholamreza Haffari, Shirui Pan

Knowledge graphs (KGs), as a structured form of knowledge representation, have been widely applied in the real world. Recently, few-shot knowledge graph completion (FKGC), which aims to predict missing facts for unseen relations given only a few associated facts, has attracted increasing attention from practitioners and researchers. However, existing FKGC methods are based on metric learning or meta-learning, which often suffer from out-of-distribution and overfitting problems. Meanwhile, they struggle to estimate uncertainty in predictions, which is critically important because model predictions can be highly unreliable in few-shot settings. Furthermore, most of them cannot handle complex relations and ignore path information in KGs, which largely limits their performance. In this paper, we propose a normalizing flow-based neural process for few-shot knowledge graph completion (NP-FKGC). Specifically, we unify normalizing flows and neural processes to model a complex distribution of KG completion functions. This offers a novel way to predict facts for few-shot relations while estimating the uncertainty. We then propose a stochastic ManifoldE decoder to incorporate the neural process and handle complex relations in few-shot settings. To further improve performance, we introduce an attentive relation-path-based graph neural network to capture path information in KGs. Extensive experiments on three public datasets demonstrate that our method significantly outperforms existing FKGC methods and achieves state-of-the-art performance. Code is available at https://github.com/RManLuo/NP-FKGC.git.

* Accepted at SIGIR 2023 
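
To give a feel for the normalizing-flow ingredient, here is a toy planar flow step of the kind that can be stacked to turn a simple neural-process latent into a more complex distribution. Parameter values are random, and NP-FKGC's actual flow and decoder design is richer than this single transform:

```python
# f(z) = z + u * tanh(w^T z + b), with the log |det J| correction term.
import torch

def planar_flow(z, w, b, u):
    lin = z @ w + b                                      # (batch,)
    f_z = z + u * torch.tanh(lin).unsqueeze(-1)
    psi = (1 - torch.tanh(lin) ** 2).unsqueeze(-1) * w   # h'(w^T z + b) * w
    log_det = torch.log(torch.abs(1 + psi @ u) + 1e-8)
    return f_z, log_det

z = torch.randn(3, 4)                 # latent samples, e.g. from an NP encoder
w, u = torch.randn(4), torch.randn(4)
f_z, log_det = planar_flow(z, w, torch.tensor(0.5), u)
print(f_z.shape, log_det.shape)       # transformed latents + density correction
```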

On Robustness of Prompt-based Semantic Parsing with Large Pre-trained Language Model: An Empirical Study on Codex

Feb 06, 2023
Terry Yue Zhuo, Zhuang Li, Yujin Huang, Fatemeh Shiri, Weiqing Wang, Gholamreza Haffari, Yuan-Fang Li

Semantic parsing is a technique aimed at constructing a structured representation of the meaning of a natural-language question. Recent advances in few-shot language models trained on code have demonstrated superior performance in generating these representations compared to traditional unimodal language models trained on downstream tasks. Despite these advances, existing fine-tuned neural semantic parsers are susceptible to adversarial attacks on natural-language inputs. While it has been established that the robustness of smaller semantic parsers can be enhanced through adversarial training, this approach is not feasible for large language models in real-world scenarios, as it requires both substantial computational resources and expensive human annotation of in-domain semantic parsing data. This paper presents the first empirical study of the adversarial robustness of a large prompt-based language model of code, Codex. Our results demonstrate that state-of-the-art (SOTA) code-language models are vulnerable to carefully crafted adversarial examples. To address this challenge, we propose methods for improving robustness without the need for significant amounts of labelled data or heavy computational resources.

* Accepted at EACL2023 (main) 
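
A toy sketch of the robustness-probing setup: perturb a question slightly and compare the parser's outputs. The synonym-substitution attack and the stub parser are invented for illustration; in the paper the parser is Codex queried through prompts, and the adversarial examples are carefully crafted rather than drawn from a fixed dictionary:

```python
SWAPS = {"movies": "films", "list": "show"}  # hypothetical perturbations

def perturb(question):
    # word-level synonym substitution as a stand-in adversarial attack
    return " ".join(SWAPS.get(w, w) for w in question.split())

def parse(question):
    # stub keyword parser standing in for the prompted code LLM
    if "movies" in question.split():
        return "SELECT title FROM movies WHERE director = 'Nolan'"
    return "UNPARSED"

q = "list movies directed by Nolan"
print(parse(q))           # parses correctly
print(parse(perturb(q)))  # a differing output signals brittleness
```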