
Zhenzhou Ji


Pre-training Language Models with Deterministic Factual Knowledge

Oct 20, 2022
Shaobo Li, Xiaoguang Li, Lifeng Shang, Chengjie Sun, Bingquan Liu, Zhenzhou Ji, Xin Jiang, Qun Liu

Figures 1–4 for Pre-training Language Models with Deterministic Factual Knowledge

Previous works show that Pre-trained Language Models (PLMs) can capture factual knowledge. However, several analyses reveal that PLMs fail to do so robustly, e.g., they are sensitive to changes in the prompt when extracting factual knowledge. To mitigate this issue, we propose letting PLMs learn the deterministic relationship between the remaining context and the masked content. The deterministic relationship ensures that the masked factual content can be deterministically inferred from the existing clues in the context, which provides more stable patterns for PLMs to capture factual knowledge than random masking. Two pre-training tasks are further introduced to motivate PLMs to rely on this deterministic relationship when filling in masks. Specifically, we use an external Knowledge Base (KB) to identify deterministic relationships and continuously pre-train PLMs with the proposed methods. Factual knowledge probing experiments indicate that the continuously pre-trained PLMs are more robust in capturing factual knowledge. Further experiments on question-answering datasets show that learning a deterministic relationship with the proposed methods also helps other knowledge-intensive tasks.
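The masking criterion described above can be sketched in a few lines. This is a minimal illustration, not the authors' code: the function names and the toy KB are assumptions. The idea is that a fact's object is masked only when the KB maps the (subject, relation) clue pair to exactly one object, so the remaining context deterministically implies the masked span.

```python
# Minimal sketch of deterministic-relationship masking (illustrative only).
from collections import defaultdict

def build_deterministic_index(kb_triples):
    """Map (subject, relation) -> set of objects; deterministic iff the set has one element."""
    index = defaultdict(set)
    for subj, rel, obj in kb_triples:
        index[(subj, rel)].add(obj)
    return index

def mask_if_deterministic(sentence, subj, rel, obj, index, mask_token="[MASK]"):
    """Mask the object mention only when the KB clue uniquely determines it."""
    if index.get((subj, rel)) == {obj} and obj in sentence:
        return sentence.replace(obj, mask_token), True
    return sentence, False

kb = [("Dante", "born_in", "Florence"),
      ("Paris", "located_in", "France"),
      ("Paris", "located_in", "Texas")]  # ambiguous clue -> left unmasked
idx = build_deterministic_index(kb)
masked, ok = mask_if_deterministic("Dante was born in Florence.",
                                   "Dante", "born_in", "Florence", idx)
```

Under this filter, ambiguous facts (such as the two "Paris located_in" triples above) are never masked, so every masked span remains recoverable from its context.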

* Accepted at EMNLP 2022 

How Pre-trained Language Models Capture Factual Knowledge? A Causal-Inspired Analysis

Mar 31, 2022
Shaobo Li, Xiaoguang Li, Lifeng Shang, Zhenhua Dong, Chengjie Sun, Bingquan Liu, Zhenzhou Ji, Xin Jiang, Qun Liu

Figures 1–4 for How Pre-trained Language Models Capture Factual Knowledge? A Causal-Inspired Analysis

Recently, there has been a trend of investigating the factual knowledge captured by Pre-trained Language Models (PLMs). Many works show PLMs' ability to fill in missing factual words in cloze-style prompts such as "Dante was born in [MASK]." However, it remains a mystery how PLMs generate the correct results: by relying on effective clues or on shortcut patterns? We try to answer this question through a causal-inspired analysis that quantitatively measures and evaluates the word-level patterns PLMs depend on to generate the missing words. We examine words that have three typical associations with the missing words: knowledge-dependent, positionally close, and highly co-occurring. Our analysis shows that: (1) PLMs generate missing factual words more from positionally close and highly co-occurring words than from knowledge-dependent words; (2) dependence on knowledge-dependent words is more effective than dependence on positionally close and highly co-occurring words. Accordingly, we conclude that PLMs capture factual knowledge ineffectively because they depend on inadequate associations.
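The three association types can be made concrete with a toy scoring sketch. This is a hedged illustration, not the paper's analysis code: the function names, the tiny corpus, and the knowledge-dependent word list are all assumptions. Each context word gets a positional-closeness score, a corpus co-occurrence count with the target word, and a flag for whether it is a KB-backed clue.

```python
# Illustrative scoring of the three word-level associations (not the paper's code).
from collections import Counter
from itertools import combinations

def cooccurrence_counts(corpus_sentences):
    """Count how often each unordered word pair appears in the same sentence."""
    counts = Counter()
    for sent in corpus_sentences:
        for a, b in combinations(sorted(set(sent.split())), 2):
            counts[frozenset((a, b))] += 1
    return counts

def association_scores(tokens, mask_pos, target, counts, kb_dependent_words):
    """Score every context word's association with the masked target word."""
    scores = {}
    for i, w in enumerate(tokens):
        if i == mask_pos:
            continue
        scores[w] = {
            "positional": 1.0 / (1 + abs(i - mask_pos)),  # closer -> higher
            "cooccur": counts[frozenset((w, target))],    # corpus statistic
            "knowledge": w in kb_dependent_words,         # KB-linked clue
        }
    return scores

corpus = ["Dante was born in Florence", "Florence is in Italy"]
counts = cooccurrence_counts(corpus)
tokens = "Dante was born in [MASK]".split()
scores = association_scores(tokens, 4, "Florence", counts, kb_dependent_words={"Dante"})
```

In this toy example "in" is both positionally close to the mask and frequently co-occurring with "Florence", while only "Dante" is knowledge-dependent — exactly the kind of competing signals the analysis disentangles.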

* Accepted at Findings of ACL 2022 

Integrating Regular Expressions with Neural Networks via DFA

Sep 07, 2021
Shaobo Li, Qun Liu, Xin Jiang, Yichun Yin, Chengjie Sun, Bingquan Liu, Zhenzhou Ji, Lifeng Shang

Figures 1–4 for Integrating Regular Expressions with Neural Networks via DFA

Human-designed rules are widely used to build industrial applications. However, it is infeasible to maintain thousands of such hand-crafted rules, so it is important to integrate rule knowledge into neural networks to build a hybrid model that achieves better performance. Specifically, the human-designed rules are formulated as Regular Expressions (REs), from which equivalent Minimal Deterministic Finite Automatons (MDFAs) are constructed. We propose using the MDFA as an intermediate model to capture the matched RE patterns as rule-based features for each input sentence, and we introduce these additional features into neural networks. We evaluate the proposed method on the ATIS intent classification task. The experimental results show that, when the training dataset is relatively small, the proposed method achieves the best performance compared to neural networks alone and to four other methods that combine REs and neural networks.
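The rule-feature idea can be sketched simply. This is an assumed illustration, not the paper's implementation: the example REs are made-up ATIS-style rules, and plain `re.search` stands in for running the MDFA (the paper derives richer features from the automaton's matched states). Each rule contributes one binary feature that a downstream classifier can consume alongside learned text features.

```python
# Sketch: RE matches as binary rule features (re.search stands in for the MDFA run).
import re

INTENT_RES = {  # illustrative ATIS-style rules, not the paper's rule set
    "flight": re.compile(r"\bflights?\b.*\bfrom\b.*\bto\b"),
    "airfare": re.compile(r"\b(fare|cost|price)s?\b"),
}

def rule_features(sentence):
    """Binary feature vector: 1 if the RE for that intent matches the sentence."""
    text = sentence.lower()
    return [int(bool(pattern.search(text))) for pattern in INTENT_RES.values()]

feats = rule_features("Show me flights from Boston to Denver")
```

In the hybrid model, such a feature vector would be concatenated with the neural sentence representation before classification, letting the rules inject prior knowledge when training data is scarce.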


HopRetriever: Retrieve Hops over Wikipedia to Answer Complex Questions

Dec 31, 2020
Shaobo Li, Xiaoguang Li, Lifeng Shang, Xin Jiang, Qun Liu, Chengjie Sun, Zhenzhou Ji, Bingquan Liu

Figures 1–4 for HopRetriever: Retrieve Hops over Wikipedia to Answer Complex Questions

Collecting supporting evidence from large text corpora (e.g., Wikipedia) is a great challenge for open-domain Question Answering (QA). In particular, for multi-hop open-domain QA, scattered pieces of evidence must be gathered together to support answer extraction. In this paper, we propose a new retrieval target, the hop, to collect hidden reasoning evidence from Wikipedia for complex question answering. Specifically, a hop is defined as the combination of a hyperlink and the corresponding outbound-link document. The hyperlink is encoded as a mention embedding, which models the structured knowledge of how the outbound-link entity is mentioned in its textual context, and the corresponding outbound-link document is encoded as a document embedding representing the unstructured knowledge within it. Accordingly, we build HopRetriever, which retrieves hops over Wikipedia to answer complex questions. Experiments on the HotpotQA dataset demonstrate that HopRetriever outperforms previously published evidence-retrieval methods by large margins. Moreover, our approach also yields quantifiable interpretations of the evidence-collection process.
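The hop-scoring scheme can be sketched roughly. This is not the released HopRetriever code: the toy hash-based encoder stands in for a trained PLM encoder, and the fixed blending weight is an assumption. A hop pairs a hyperlink's mention context with its outbound document, and both embeddings contribute to the retrieval score against the question.

```python
# Rough sketch of hop scoring (toy encoder; not the released HopRetriever code).
import hashlib
import math

def toy_encode(text, dim=8):
    """Deterministic toy encoder standing in for a trained PLM encoder."""
    digest = hashlib.md5(text.encode()).digest()
    vec = [digest[i % len(digest)] / 255.0 for i in range(dim)]
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def score_hop(question, hop, w=0.5):
    """hop = (mention_context, outbound_document); blend both knowledge sources."""
    q = toy_encode(question)
    mention = toy_encode(hop[0])   # structured: how the entity is mentioned
    doc = toy_encode(hop[1])       # unstructured: the linked document itself
    return w * dot(q, mention) + (1 - w) * dot(q, doc)

def retrieve(question, hops, k=1):
    """Rank hops by score and return the top k."""
    return sorted(hops, key=lambda h: score_hop(question, h), reverse=True)[:k]

hops = [
    ("mentioned as the hometown of Dante", "Florence is a city in Italy."),
    ("linked from an article on pasta", "Wheat is a widely cultivated grass."),
]
best = retrieve("Where was Dante born?", hops, k=1)
```

Because each selected hop carries its mention context, the ranked hop sequence itself serves as an interpretable trace of the evidence-collection process.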

* Accepted at AAAI 2021 

Radical-Enhanced Chinese Character Embedding

Apr 18, 2014
Yaming Sun, Lei Lin, Duyu Tang, Nan Yang, Zhenzhou Ji, Xiaolong Wang

Figures 1–4 for Radical-Enhanced Chinese Character Embedding

We present a method to leverage radicals for learning Chinese character embeddings. A radical is a semantic and phonetic component of a Chinese character, and it plays an important role: characters with the same radical usually have similar semantic meanings and grammatical usage. However, existing Chinese processing algorithms typically regard the word or character as the basic unit and ignore this crucial radical information. In this paper, we fill this gap by leveraging radicals to learn continuous representations of Chinese characters. We develop a dedicated neural architecture to effectively learn character embeddings and apply it to Chinese character similarity judgement and Chinese word segmentation. Experimental results show that our radical-enhanced method outperforms existing embedding-learning algorithms on both tasks.
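The core intuition — that characters sharing a radical should share part of their representation — can be sketched with a toy lookup. This is a simplified assumption about the layout, not the paper's neural architecture: each character embedding concatenates a per-character vector with a vector shared by all characters that have the same radical, so e.g. 河 and 海 (both with radical 氵) share one component.

```python
# Toy radical-shared embedding lookup (illustrative; not the paper's architecture).
import random

class RadicalEnhancedEmbedding:
    def __init__(self, char2radical, char_dim=4, radical_dim=2, seed=0):
        rng = random.Random(seed)
        # One private vector per character.
        self.char_vecs = {c: [rng.uniform(-1, 1) for _ in range(char_dim)]
                          for c in char2radical}
        # One shared vector per radical (sorted for determinism).
        self.radical_vecs = {r: [rng.uniform(-1, 1) for _ in range(radical_dim)]
                             for r in sorted(set(char2radical.values()))}
        self.char2radical = char2radical

    def embed(self, char):
        """Concatenate the character-specific and radical-shared components."""
        return self.char_vecs[char] + self.radical_vecs[self.char2radical[char]]

emb = RadicalEnhancedEmbedding({"河": "氵", "海": "氵", "烤": "火"})
```

In the full model these vectors would be trained jointly so that the shared radical component pulls semantically related characters together.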

* 8 pages, 4 figures 