Weiqi Wang

Complex Query Answering on Eventuality Knowledge Graph with Implicit Logical Constraints

May 30, 2023
Jiaxin Bai, Xin Liu, Weiqi Wang, Chen Luo, Yangqiu Song

Querying incomplete knowledge graphs (KGs) with deep learning approaches can naturally leverage their reasoning and generalization ability to infer better answers. Traditional neural complex query answering (CQA) approaches mostly work on entity-centric KGs. However, in the real world, we also need to make logical inferences about events, states, and activities (i.e., eventualities or situations) to push learning systems from System I to System II, as proposed by Yoshua Bengio. Querying logically from an EVentuality-centric KG (EVKG) can naturally provide references for this kind of intuitive and logical inference. Thus, in this paper, we propose a new framework that leverages neural methods to answer complex logical queries over an EVKG, satisfying not only traditional first-order logic constraints but also implicit logical constraints over eventualities concerning their occurrences and orders. For instance, if we know that ``Food is bad'' happens before ``PersonX adds soy sauce,'' then ``PersonX adds soy sauce'' is unlikely to be the cause of ``Food is bad'' due to the implicit temporal constraint. To facilitate consistent reasoning on EVKGs, we propose Complex Eventuality Query Answering (CEQA), a more rigorous definition of CQA that considers the implicit logical constraints governing the temporal order and occurrence of eventualities. Accordingly, we leverage theorem provers to construct benchmark datasets whose answers are guaranteed to satisfy the implicit logical constraints. We also propose a Memory-Enhanced Query Encoding (MEQE) approach that significantly improves the performance of state-of-the-art neural query encoders on the CEQA task.
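
To make the implicit temporal constraint concrete, here is a minimal, hypothetical sketch of the idea: candidate causes of an eventuality are rejected when they are known to occur after it. This is not the authors' CEQA implementation; the class and method names are invented for illustration.

```python
from collections import defaultdict

class EventualityKG:
    """Toy eventuality KG with temporal ("before") and causal edges."""

    def __init__(self):
        self.before = defaultdict(set)  # before[a]: eventualities known to follow a
        self.causes = defaultdict(set)  # causes[a]: eventualities a is said to cause

    def add_before(self, a, b):
        self.before[a].add(b)

    def add_cause(self, a, b):
        self.causes[a].add(b)

    def consistent_causes(self, effect):
        """Candidate causes of `effect` that respect temporal order:
        anything known to occur *after* the effect is rejected."""
        candidates = {a for a, effects in self.causes.items() if effect in effects}
        return {a for a in candidates if a not in self.before[effect]}

kg = EventualityKG()
kg.add_before("Food is bad", "PersonX adds soy sauce")
kg.add_cause("PersonX adds soy sauce", "Food is bad")  # a noisy causal edge
# The noisy edge is filtered out: the sauce is added after the food went bad.
print(kg.consistent_causes("Food is bad"))  # -> set()
```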

CAR: Conceptualization-Augmented Reasoner for Zero-Shot Commonsense Question Answering

May 24, 2023
Weiqi Wang, Tianqing Fang, Wenxuan Ding, Baixuan Xu, Xin Liu, Yangqiu Song, Antoine Bosselut

The task of zero-shot commonsense question answering evaluates models on their capacity to reason about general scenarios beyond those presented in specific datasets. Existing approaches leverage external knowledge from CommonSense Knowledge Bases (CSKBs) by pretraining the model on synthetic QA pairs constructed from CSKBs, with negative examples (distractors) formulated by randomly sampling from CSKBs using fairly primitive keyword constraints. However, two bottlenecks hinder these approaches: the inherent incompleteness of CSKBs limits the semantic coverage of synthetic QA pairs, and the lack of human annotation makes the sampled negative examples potentially uninformative and contradictory. To tackle these limitations, we propose the Conceptualization-Augmented Reasoner (CAR), a zero-shot commonsense question-answering framework that fully leverages the power of conceptualization. Specifically, CAR abstracts a commonsense knowledge triple to many higher-level instances, which increases the coverage of the CSKB and expands the ground-truth answer space, reducing the likelihood of selecting false-negative distractors. Extensive experiments demonstrate that CAR generalizes more robustly to answering questions about zero-shot commonsense scenarios than existing methods, including large language models such as GPT3.5 and ChatGPT. Our code, data, and model checkpoints are available at https://github.com/HKUST-KnowComp/CAR.
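
The following toy sketch illustrates how an expanded, conceptualization-based answer space can screen out false-negative distractors. The tiny concept map is hand-written purely for illustration; CAR's conceptualizations come from a learned conceptualizer over a CSKB.

```python
# Illustrative concept map: each instance maps to higher-level concepts.
CONCEPTS = {
    "singing": {"relaxing event", "musical activity"},
    "meditation": {"relaxing event"},
    "arguing": {"stressful event"},
}

def is_valid_distractor(gold: str, candidate: str) -> bool:
    """Reject sampled distractors whose concept set overlaps the gold answer's
    expanded answer space (they are likely false negatives)."""
    gold_space = CONCEPTS.get(gold, set()) | {gold}
    cand_space = CONCEPTS.get(candidate, set()) | {candidate}
    return gold_space.isdisjoint(cand_space)

print(is_valid_distractor("singing", "meditation"))  # False: shares "relaxing event"
print(is_valid_distractor("singing", "arguing"))     # True: a safe distractor
```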

ChatGPT Evaluation on Sentence Level Relations: A Focus on Temporal, Causal, and Discourse Relations

May 11, 2023
Chunkit Chan, Jiayang Cheng, Weiqi Wang, Yuxin Jiang, Tianqing Fang, Xin Liu, Yangqiu Song

This paper quantitatively evaluates the performance of ChatGPT, an interactive large language model, on inter-sentential relations such as temporal, causal, and discourse relations. Given ChatGPT's promising performance across various tasks, we conduct extensive evaluations on the whole test sets of 13 datasets, covering temporal and causal relations, PDTB2.0-based and dialogue-based discourse relations, and downstream applications on discourse understanding. To achieve reliable results, we adopt three tailored prompt templates for each task, namely a zero-shot prompt template, a zero-shot prompt engineering (PE) template, and an in-context learning (ICL) prompt template, establishing initial baseline scores for all popular sentence-pair relation classification tasks for the first time. We find that ChatGPT exhibits strong performance in detecting and reasoning about causal relations, but it is less proficient in identifying the temporal order between two events. It can recognize most discourse relations with existing explicit discourse connectives, while implicit discourse relations remain a challenging task. Meanwhile, ChatGPT performs poorly on dialogue discourse parsing, which requires structural understanding of a dialogue before identifying the discourse relations.
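
As a rough illustration of the three prompt styles, here are representative stand-ins applied to temporal relation classification; the paper's exact template wording is not reproduced here.

```python
# Zero-shot: a bare task instruction with slots for the two events.
ZERO_SHOT = (
    "What is the temporal relation between the two events?\n"
    "Event 1: {e1}\nEvent 2: {e2}\n"
    "Answer with one of: before, after, simultaneous."
)

# Prompt engineering (PE): add role and task framing on top of the zero-shot prompt.
ZERO_SHOT_PE = "You are an expert in temporal reasoning.\n" + ZERO_SHOT

# In-context learning (ICL): prepend one labeled demonstration.
ICL = (
    "Event 1: He ate breakfast. Event 2: He went to work. Relation: before\n\n"
    + ZERO_SHOT
)

print(ICL.format(e1="She boarded the train.", e2="The train departed."))
```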

* 37 pages 

CAT: A Contextualized Conceptualization and Instantiation Framework for Commonsense Reasoning

May 10, 2023
Weiqi Wang, Tianqing Fang, Baixuan Xu, Chun Yi Louis Bo, Yangqiu Song, Lei Chen

Commonsense reasoning, which aims to endow machines with a human-like ability to make situational presumptions, is extremely challenging to generalize. Someone who barely knows about "meditation" but is knowledgeable about "singing" can still infer that "meditation makes people relaxed" from the existing knowledge that "singing makes people relaxed," by first conceptualizing "singing" as a "relaxing event" and then instantiating that event to "meditation." This process, known as conceptual induction and deduction, is fundamental to commonsense reasoning, yet it lacks both labeled data and methodologies to enhance commonsense modeling. To fill this research gap, we propose CAT (Contextualized ConceptuAlization and InsTantiation), a semi-supervised learning framework that integrates event conceptualization and instantiation to conceptualize commonsense knowledge bases at scale. Extensive experiments show that our framework achieves state-of-the-art performance on two conceptualization tasks, and the acquired abstract commonsense knowledge can significantly improve commonsense inference modeling. Our code, data, and fine-tuned models are publicly available at https://github.com/HKUST-KnowComp/CAT.
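
A minimal sketch of the induction-then-deduction step from the singing/meditation example above. The tiny maps stand in for CAT's learned conceptualization and instantiation models, and the relation name "xEffect" follows ATOMIC-style conventions; everything else is illustrative.

```python
# Toy stand-ins for learned models.
CONCEPTUALIZE = {"singing": "relaxing event"}           # induction step
INSTANCES = {"relaxing event": ["meditation", "yoga"]}  # deduction step

def infer(head, relation, tail):
    """Conceptualize the head, then instantiate the abstract triple."""
    concept = CONCEPTUALIZE.get(head)
    if concept is None:
        return []
    # Abstract commonsense triple: (concept, relation, tail)
    return [(inst, relation, tail)
            for inst in INSTANCES.get(concept, []) if inst != head]

print(infer("singing", "xEffect", "people feel relaxed"))
# -> [('meditation', 'xEffect', 'people feel relaxed'),
#     ('yoga', 'xEffect', 'people feel relaxed')]
```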

* ACL 2023 Main Conference

COLA: Contextualized Commonsense Causal Reasoning from the Causal Inference Perspective

May 09, 2023
Zhaowei Wang, Quyet V. Do, Hongming Zhang, Jiayao Zhang, Weiqi Wang, Tianqing Fang, Yangqiu Song, Ginny Y. Wong, Simon See

Detecting commonsense causal relations (causation) between events has long been an essential yet challenging task. Since events are complicated, an event may have different causes under different contexts, so exploiting context plays an essential role in detecting causal relations. Meanwhile, previous work on commonsense causation considers only two events and ignores their context, simplifying the task formulation. This paper proposes a new task, contextualized commonsense causal reasoning, which detects commonsense causation between two events in an event sequence (i.e., context). We also design a zero-shot framework, COLA (Contextualized Commonsense Causality Reasoner), to solve the task from the causal inference perspective. This framework obtains rich incidental supervision from temporality and balances covariates from multiple timestamps to remove confounding effects. Our extensive experiments show that COLA detects commonsense causality more accurately than the baselines.
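
COLA itself reasons over event sequences with pretrained language models; the sketch below only illustrates the underlying causal-inference idea of adjusting for confounding by comparing outcome rates within strata of a covariate. The data and names are fabricated for illustration.

```python
from collections import defaultdict

# Fabricated records of (covariate, cause observed?, effect observed?).
records = [
    ("rainy", 1, 1), ("rainy", 1, 1), ("rainy", 0, 0),
    ("sunny", 1, 0), ("sunny", 0, 0), ("sunny", 0, 0),
]

def stratified_effect(records):
    """Average difference in effect rate with vs. without the cause,
    computed within each covariate stratum (a crude adjustment for confounding)."""
    strata = defaultdict(lambda: {1: [], 0: []})
    for cov, cause, effect in records:
        strata[cov][cause].append(effect)
    diffs = [
        sum(g[1]) / len(g[1]) - sum(g[0]) / len(g[0])
        for g in strata.values() if g[1] and g[0]
    ]
    return sum(diffs) / len(diffs)

print(stratified_effect(records))  # 0.5: a positive (toy) causal signal
```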

* Accepted to the main conference of ACL 2023 

CKBP v2: An Expert-Annotated Evaluation Set for Commonsense Knowledge Base Population

Apr 20, 2023
Tianqing Fang, Quyet V. Do, Sehyun Choi, Weiqi Wang, Yangqiu Song

Populating Commonsense Knowledge Bases (CSKBs) is an important yet hard task in NLP, as it requires handling knowledge from external sources with unseen events and entities. Fang et al. (2021a) proposed a CSKB population benchmark with an evaluation set, CKBP v1. However, CKBP v1 relies on crowdsourced annotations that contain a substantial fraction of incorrect answers, and its evaluation set is not well aligned with the external knowledge source as a result of random sampling. In this paper, we introduce CKBP v2, a new high-quality CSKB population benchmark that addresses both problems by using expert annotation instead of crowdsourcing and by adding diversified adversarial samples to make the evaluation set more representative. We conduct extensive experiments comparing state-of-the-art methods for CSKB population on the new evaluation set to support future research comparisons. Empirical results show that the population task remains challenging, even for large language models (LLMs) such as ChatGPT. Code and data are available at https://github.com/HKUST-KnowComp/CSKB-Population.

Rearrange Indoor Scenes for Human-Robot Co-Activity

Mar 10, 2023
Weiqi Wang, Zihang Zhao, Ziyuan Jiao, Yixin Zhu, Song-Chun Zhu, Hangxin Liu

We present an optimization-based framework for rearranging indoor furniture to better accommodate human-robot co-activities. The rearrangement aims to afford sufficient accessible space for robot activities without compromising everyday human activities. To retain human activities, our algorithm preserves the functional relations among furniture by integrating spatial and semantic co-occurrence extracted from SUNCG and ConceptNet, respectively. Defining the robot's accessible space as the amount of open space it can traverse and the number of objects it can reach, we formulate the rearrangement for human-robot co-activity as an optimization problem, solved by adaptive simulated annealing (ASA) and the covariance matrix adaptation evolution strategy (CMA-ES). Our experiments on the SUNCG dataset quantitatively show that rearranged scenes provide on average 14% more accessible space and 30% more objects to interact with. The quality of the rearranged scenes is qualitatively validated by a human study, indicating the efficacy of the proposed strategy.
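
Below is a toy sketch of the optimization loop in this spirit: simulated annealing over furniture placements, with a cost that trades off a robot-accessibility proxy against keeping functionally related pieces close. The 1-D layout, the weights, and the annealing schedule are stand-ins, not the paper's formulation.

```python
import math, random

PAIRS = [(0, 1)]  # indices of functionally related furniture (e.g., desk and chair)

def cost(layout):
    # Open-space proxy: the smallest gap between any two pieces (larger is better).
    access = min(abs(a - b) for i, a in enumerate(layout) for b in layout[i + 1:])
    # Functional-relation proxy: related pieces should stay close.
    relation = sum(abs(layout[i] - layout[j]) for i, j in PAIRS)
    return -access + 0.5 * relation

def anneal(layout, steps=5000, temp=1.0, cooling=0.999):
    state = list(layout)
    for _ in range(steps):
        cand = [x + random.uniform(-0.2, 0.2) for x in state]
        delta = cost(cand) - cost(state)
        # Accept improvements always, worse moves with decaying probability.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            state = cand
        temp *= cooling
    return state

print(anneal([0.0, 0.1, 0.2]))
```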

* 7 pages, 7 figures; Accepted by ICRA 2023 

FolkScope: Intention Knowledge Graph Construction for Discovering E-commerce Commonsense

Nov 15, 2022
Changlong Yu, Weiqi Wang, Xin Liu, Jiaxin Bai, Yangqiu Song, Zheng Li, Yifan Gao, Tianyu Cao, Bing Yin

As stated by Oren Etzioni, ``commonsense is the dark matter of artificial intelligence''. In e-commerce, understanding users' needs or intentions requires substantial commonsense knowledge, e.g., ``A user bought an iPhone and a compatible case because the user wanted the phone to be protected''. In this paper, we present FolkScope, an intention knowledge graph construction framework, to reveal the structure of humans' minds about purchasing items on e-commerce platforms such as Amazon. Because commonsense knowledge is usually ineffable and rarely expressed explicitly, it is challenging to perform any kind of information extraction. Thus, we propose a new approach that leverages the generation power of large-scale language models and human-in-the-loop annotations to semi-automatically construct the knowledge graph. We annotate a large number of assertions for both the plausibility and the typicality of an intention that can explain a purchasing or co-purchasing behavior, where the intention can be an open reason or a predicate falling into one of 18 categories aligned with ConceptNet, e.g., IsA, MadeOf, and UsedFor. We then propagate the annotated information to all automatically generated assertions, and further structure them using pattern mining and conceptualization to form more condensed and abstract knowledge. We evaluate our knowledge graph using both intrinsic quality measures and a downstream application, i.e., recommendation. The comprehensive study shows that our knowledge graph models e-commerce commonsense knowledge well and has many potential applications.
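
A rough sketch of the generate-then-filter step described above: prompt a language model for candidate intention assertions behind a co-purchase, then keep only those matching ConceptNet-aligned relation patterns. `llm_generate` is a placeholder for any text-generation API, and the two regex patterns are illustrative, not FolkScope's actual pattern set.

```python
import re

# Illustrative patterns for two of the ConceptNet-aligned relation categories.
RELATION_PATTERNS = {
    "UsedFor": re.compile(r"because .* (is|are) used for", re.I),
    "MadeOf": re.compile(r"because .* (is|are) made of", re.I),
}

def llm_generate(prompt: str) -> list[str]:
    # Stand-in for a real LLM call; returns canned candidates here.
    return ["because a case is used for protecting the phone",
            "because the user likes shopping"]

def mine_intentions(item_a: str, item_b: str):
    prompt = f"A user bought {item_a} and {item_b} because ..."
    keep = []
    for cand in llm_generate(prompt):
        for rel, pat in RELATION_PATTERNS.items():
            if pat.search(cand):
                keep.append((rel, cand))
    return keep

print(mine_intentions("an iPhone", "a compatible case"))
# -> [('UsedFor', 'because a case is used for protecting the phone')]
```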

Understanding Physical Effects for Effective Tool-use

Jun 30, 2022
Zeyu Zhang, Ziyuan Jiao, Weiqi Wang, Yixin Zhu, Song-Chun Zhu, Hangxin Liu

We present a robot learning and planning framework that produces an effective tool-use strategy with the least joint effort, capable of handling objects that differ from those seen during training. Leveraging a Finite Element Method (FEM)-based simulator that reproduces fine-grained, continuous visual and physical effects given observed tool-use events, the essential physical properties contributing to the effects are identified through the proposed Iterative Deepening Symbolic Regression (IDSR) algorithm. We further devise an optimal-control-based motion planning scheme that integrates robot- and tool-specific kinematics and dynamics to produce an effective trajectory enacting the learned properties. In simulation, we demonstrate that the proposed framework produces tool-use strategies that are more effective and drastically different from the observed ones in two exemplar tasks.
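
To convey the iterative-deepening flavor of IDSR, here is a compact, self-contained sketch: enumerate candidate symbolic expressions over observed physical variables at increasing depth, and return the shallowest expression that fits the data. The variables, data, primitives, and tolerance are all assumptions for illustration, not the paper's algorithm.

```python
import itertools

# Observations of a hypothetical physical effect; here the ground truth is
# effect = x0 * x1, which the search should rediscover.
X = [(2.0, 3.0), (1.0, 5.0), (4.0, 0.5)]
Y = [6.0, 5.0, 2.0]

PRIMS = [("x0", lambda a, b: a), ("x1", lambda a, b: b)]
OPS = [("+", lambda u, v: u + v), ("*", lambda u, v: u * v)]

def fits(fn, tol=1e-6):
    return all(abs(fn(a, b) - y) <= tol for (a, b), y in zip(X, Y))

def combine(fa, fb, op):
    return lambda a, b: op(fa(a, b), fb(a, b))

def idsr(max_depth=3):
    exprs = list(PRIMS)
    for _ in range(max_depth):  # iterative deepening: grow expression depth
        new = []
        for (na, fa), (nb, fb) in itertools.product(exprs, repeat=2):
            for op_name, op in OPS:
                name, fn = f"({na} {op_name} {nb})", combine(fa, fb, op)
                if fits(fn):
                    return name  # shallowest expression that explains the data
                new.append((name, fn))
        exprs += new
    return None

print(idsr())  # -> (x0 * x1)
```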
