Zhixu Li

AspectMMKG: A Multi-modal Knowledge Graph with Aspect-aware Entities

Aug 09, 2023
Jingdan Zhang, Jiaan Wang, Xiaodan Wang, Zhixu Li, Yanghua Xiao

Multi-modal knowledge graphs (MMKGs) combine data from different modalities (e.g., text and images) for a comprehensive understanding of entities. Despite the recent progress on large-scale MMKGs, existing MMKGs neglect the multi-aspect nature of entities, limiting the ability to comprehend entities from various perspectives. In this paper, we construct AspectMMKG, the first MMKG with aspect-related images, by matching images to different entity aspects. Specifically, we collect aspect-related images from a knowledge base, and further extract aspect-related sentences from the knowledge base as queries to retrieve a large number of additional aspect-related images via an online image search engine. In total, AspectMMKG contains 2,380 entities, 18,139 entity aspects, and 645,383 aspect-related images. We demonstrate the usability of AspectMMKG on the entity aspect linking (EAL) downstream task and show that previous EAL models achieve new state-of-the-art performance with the help of AspectMMKG. To facilitate research on aspect-related MMKGs, we further propose an aspect-related image retrieval (AIR) model that aims to correct and expand the aspect-related images in AspectMMKG. The AIR model learns the relationship between an entity image and the entity's aspect-related images by incorporating entity image, aspect, and aspect-image information. Experimental results indicate that the AIR model can retrieve suitable images for a given entity w.r.t. different aspects.
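
The abstract does not include code; as a rough, hypothetical illustration only, the sketch below ranks candidate images for one entity aspect with an off-the-shelf CLIP encoder, using an aspect-related sentence as the query. The function name, the checkpoint, and the scoring scheme are assumptions for illustration, not the released AIR model.

```python
# Illustrative sketch: rank candidate images against an aspect-related query
# sentence with an off-the-shelf CLIP model. This is NOT the released AIR model.
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def rank_aspect_images(aspect_query: str, image_paths: list[str]) -> list[tuple[str, float]]:
    """Score each candidate image against a single aspect-related query sentence."""
    images = [Image.open(p).convert("RGB") for p in image_paths]
    inputs = processor(text=[aspect_query], images=images,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    # logits_per_image holds the similarity of every image to the one text query
    scores = out.logits_per_image.squeeze(-1).tolist()
    return sorted(zip(image_paths, scores), key=lambda x: x[1], reverse=True)

# e.g. rank images of "Leonardo da Vinci" for the (hypothetical) "paintings" aspect:
# ranked = rank_aspect_images("Leonardo da Vinci paintings", ["img1.jpg", "img2.jpg"])
```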

* Accepted by CIKM 2023 

Adaptive Ordered Information Extraction with Deep Reinforcement Learning

Jun 19, 2023
Wenhao Huang, Jiaqing Liang, Zhixu Li, Yanghua Xiao, Chuanjun Ji

Information extraction (IE) has been studied extensively. For complex IE tasks in which multiple elements must be extracted from one instance, such as event extraction, existing methods always follow a fixed extraction order. However, our experiments on several complex IE datasets show that different extraction orders can significantly affect the extraction results for a large portion of instances, and that the ratio of sentences sensitive to extraction order increases dramatically with the complexity of the IE task. Therefore, this paper proposes a novel adaptive ordered IE paradigm that finds the optimal element extraction order for each instance, so as to achieve the best extraction results. We also propose a reinforcement learning (RL) based framework to dynamically generate the optimal extraction order for each instance. Additionally, we propose a co-training framework adapted to RL to mitigate exposure bias during the extractor training phase. Extensive experiments on several public datasets demonstrate that our proposed method outperforms previous methods and effectively improves the performance of various IE tasks, especially complex ones.
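
To make the idea of per-instance extraction order concrete, here is a minimal, hypothetical sketch: a policy picks which slot to extract next given what has already been filled. The random policy and string-returning extractor are placeholders, not the paper's RL framework or co-training scheme.

```python
# Hypothetical sketch of adaptively ordering slot extraction per instance.
import random

SLOTS = ["trigger", "subject", "object", "time"]

def policy(sentence: str, filled: dict, remaining: list[str]) -> str:
    """Stand-in policy: a learned RL policy would condition on the sentence
    and the partially filled template instead of choosing at random."""
    return random.choice(remaining)

def extract_slot(sentence: str, slot: str, filled: dict) -> str:
    """Stand-in extractor: a real model would condition on earlier decisions."""
    return f"<{slot} extracted from: {sentence[:20]}...>"

def adaptive_extract(sentence: str) -> dict:
    filled, remaining = {}, SLOTS.copy()
    while remaining:
        slot = policy(sentence, filled, remaining)   # order chosen per instance
        filled[slot] = extract_slot(sentence, slot, filled)
        remaining.remove(slot)
    return filled
```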

* Accepted to ACL 2023 Findings 

Snowman: A Million-scale Chinese Commonsense Knowledge Graph Distilled from Foundation Model

Jun 17, 2023
Jiaan Wang, Jianfeng Qu, Yunlong Liang, Zhixu Li, An Liu, Guanfeng Liu, Xin Zheng

Constructing commonsense knowledge graphs (CKGs) has attracted wide research attention due to its significant importance for cognitive intelligence. Nevertheless, existing CKGs are typically oriented to English, limiting research in non-English languages. Meanwhile, the emergence of foundation models such as ChatGPT and GPT-4 has shown promising intelligence with the help of reinforcement learning from human feedback. Against this background, we utilize foundation models to construct a Chinese CKG, named Snowman. Specifically, we distill different types of commonsense head items from ChatGPT, and then use it to collect tail items with respect to the head items and pre-defined relations. In a preliminary analysis, we find that the negative commonsense knowledge distilled by ChatGPT achieves lower human acceptance than other knowledge. Therefore, we design a simple yet effective self-instruct filtering strategy to filter out invalid negative commonsense. Overall, the constructed Snowman covers more than ten million Chinese commonsense triples, making it the largest Chinese CKG. Moreover, human studies show that the acceptance of Snowman reaches 90.6%, indicating the high quality of the triples distilled by the cutting-edge foundation model. We also conduct experiments on commonsense knowledge models to show the usability and effectiveness of Snowman.
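
As a rough sketch of the distill-then-filter idea, the code below collects tail items for a head/relation pair from a chat model and re-asks the model to verify negative triples. The `chat` function is a placeholder for whatever API is used, and the prompt wording is an assumption, not the paper's exact pipeline.

```python
# Hypothetical sketch: distill commonsense triples from a chat model and
# filter negative ones with a self-instruct-style check.
def chat(prompt: str) -> str:
    raise NotImplementedError("plug in an actual foundation-model API call here")

def collect_tails(head: str, relation: str, k: int = 5) -> list[str]:
    prompt = (f"List {k} plausible tail concepts for the commonsense triple "
              f"({head}, {relation}, ?). One per line, in Chinese.")
    return [line.strip() for line in chat(prompt).splitlines() if line.strip()]

def keep_negative(head: str, relation: str, tail: str) -> bool:
    """Ask the model to re-judge a distilled negative triple and keep it only
    if the model still agrees it is valid."""
    verdict = chat(f"Is the negative commonsense triple ({head}, {relation}, "
                   f"{tail}) valid? Answer yes or no.")
    return verdict.strip().lower().startswith("yes")
```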

* tech report 

Towards Unifying Multi-Lingual and Cross-Lingual Summarization

May 16, 2023
Jiaan Wang, Fandong Meng, Duo Zheng, Yunlong Liang, Zhixu Li, Jianfeng Qu, Jie Zhou

To adapt text summarization to the multilingual world, previous work has proposed multi-lingual summarization (MLS) and cross-lingual summarization (CLS). However, these two tasks have been studied separately due to their different definitions, which limits compatible and systematic research on both of them. In this paper, we aim to unify MLS and CLS into a more general setting, i.e., many-to-many summarization (M2MS), where a single model can process documents in any language and generate their summaries in any language. As a first step towards M2MS, we conduct preliminary studies showing that M2MS can better transfer task knowledge across languages than MLS and CLS. Furthermore, we propose Pisces, a pre-trained M2MS model that learns language modeling, cross-lingual ability, and summarization ability via three-stage pre-training. Experimental results indicate that Pisces significantly outperforms state-of-the-art baselines, especially in the zero-shot directions, where there is no training data from the source-language documents to the target-language summaries.
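
Purely as an illustration of the many-to-many setting, the sketch below formats MLS and CLS training pairs with explicit language tags so that a single model sees every direction in one training stream. The tag scheme is an assumption for exposition, not Pisces's actual input format or pre-training recipe.

```python
# Illustrative only: one way to format many-to-many summarization (M2MS)
# examples with language tags; not the paper's actual data format.
def format_m2ms_example(document: str, summary: str,
                        src_lang: str, tgt_lang: str) -> dict:
    return {
        "input":  f"<{src_lang}> {document} <summarize-to:{tgt_lang}>",
        "target": summary,
    }

examples = [
    # cross-lingual direction (src_lang != tgt_lang)
    format_m2ms_example("Un article en français ...", "An English summary ...", "fr", "en"),
    # monolingual direction (src_lang == tgt_lang)
    format_m2ms_example("An English article ...", "An English summary ...", "en", "en"),
]
```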

* Accepted at ACL 2023 as a long paper of the main conference 

GANTEE: Generative Adversarial Network for Taxonomy Entering Evaluation

Mar 25, 2023
Zhouhong Gu, Sihang Jiang, Jingping Liu, Yanghua Xiao, Hongwei Feng, Zhixu Li, Jiaqing Liang, Jian Zhong

A taxonomy is formulated as a directed acyclic graph or tree of concepts that supports many downstream tasks, and many newly emerging concepts need to be added to an existing taxonomy. The traditional taxonomy expansion task aims only at finding the best position for new concepts in the existing taxonomy. However, previous methods have two drawbacks when applied to real-world scenarios. They suffer from low efficiency, since they waste much time when most of the new concepts are in fact noisy, and from low effectiveness, since they collect training samples only from the existing taxonomy, which limits the model's ability to mine more hypernym-hyponym relationships among real concepts. This paper proposes a pluggable framework called Generative Adversarial Network for Taxonomy Entering Evaluation (GANTEE) to alleviate these drawbacks. The framework adopts a generative adversarial design in which the discriminative models alleviate the first drawback and the generative model alleviates the second. Two discriminators are used in GANTEE to provide long-term and short-term rewards, respectively. Moreover, to further improve efficiency, pre-trained language models are used to retrieve concept representations quickly. Experiments on three real-world large-scale datasets in two different languages show that GANTEE improves the performance of existing taxonomy expansion methods in both effectiveness and efficiency.
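
The gatekeeping idea can be sketched as a cheap discriminator that decides which new concepts deserve the expensive expansion step at all. The overlap-based scorer below is only a stand-in under that assumption; GANTEE's trained discriminators and generator are not reproduced here.

```python
# Hypothetical sketch: cheaply filter noisy new concepts before running full
# taxonomy expansion. The scorer is a stand-in, not GANTEE's discriminators.
def discriminator_score(concept: str, taxonomy_terms: list[str]) -> float:
    """Stand-in: best character-set overlap with existing terms.
    A real discriminator would use pre-trained language model representations."""
    best = 0.0
    for term in taxonomy_terms:
        union = set(concept) | set(term)
        overlap = len(set(concept) & set(term)) / max(len(union), 1)
        best = max(best, overlap)
    return best

def filter_new_concepts(concepts: list[str], taxonomy_terms: list[str],
                        threshold: float = 0.3) -> list[str]:
    """Only concepts passing the cheap check are handed to the (expensive)
    expansion model that finds their exact position in the taxonomy."""
    return [c for c in concepts if discriminator_score(c, taxonomy_terms) >= threshold]
```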

* Accepted by AAAI 2023 

Is ChatGPT a Good NLG Evaluator? A Preliminary Study

Mar 07, 2023
Jiaan Wang, Yunlong Liang, Fandong Meng, Haoxiang Shi, Zhixu Li, Jinan Xu, Jianfeng Qu, Jie Zhou

Recently, the emergence of ChatGPT has attracted wide attention from the computational linguistics community. Many prior studies have shown that ChatGPT achieves remarkable performance on various NLP tasks in terms of automatic evaluation metrics. However, the ability of ChatGPT to serve as an evaluation metric itself is still underexplored. Considering that assessing the quality of NLG models is an arduous task and that previous statistical metrics notoriously show poor correlation with human judgments, we ask whether ChatGPT is a good NLG evaluation metric. In this report, we provide a preliminary meta-evaluation of ChatGPT to show its reliability as an NLG metric. Specifically, we regard ChatGPT as a human evaluator and give it task-specific (e.g., summarization) and aspect-specific (e.g., relevance) instructions to prompt it to score the outputs of NLG models. We conduct experiments on three widely used NLG meta-evaluation datasets (covering summarization, story generation, and data-to-text tasks). Experimental results show that, compared with previous automatic metrics, ChatGPT achieves state-of-the-art or competitive correlation with human judgments. We hope our preliminary study can prompt the emergence of a general-purpose, reliable NLG metric.
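
The prompt-as-evaluator setup can be sketched in a few lines: build an instruction that names the task and the aspect, send it with the source and the system output, and parse a score. The prompt wording, the `chat` placeholder, and the 1-5 scale are assumptions, not the paper's exact templates.

```python
# Illustrative sketch of prompting a chat model as an NLG evaluator with
# task-specific and aspect-specific instructions.
def build_eval_prompt(task: str, aspect: str, source: str, output: str) -> str:
    return (f"Score the following {task} output on {aspect} "
            f"from 1 (worst) to 5 (best).\n"
            f"Source:\n{source}\n\nOutput:\n{output}\n\n"
            f"Answer with a single integer.")

def chat(prompt: str) -> str:
    raise NotImplementedError("plug in an actual chat-model API call here")

def score(task: str, aspect: str, source: str, output: str) -> int:
    reply = chat(build_eval_prompt(task, aspect, source, output))
    return int(reply.strip().split()[0])  # crude parse of replies like "4"

# e.g. score("summarization", "relevance", article_text, system_summary)
```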

* Technical Report, 8 pages 

Cross-Lingual Summarization via ChatGPT

Feb 28, 2023
Jiaan Wang, Yunlong Liang, Fandong Meng, Zhixu Li, Jianfeng Qu, Jie Zhou

Given a document in a source language, cross-lingual summarization (CLS) aims to generate a summary in a different target language. Recently, the emergence of ChatGPT has attracted wide attention from the computational linguistics community. However, the performance of ChatGPT on CLS is not yet known. In this report, we empirically use various prompts to guide ChatGPT to perform zero-shot CLS under different paradigms (i.e., end-to-end and pipeline), and provide a preliminary evaluation of its generated summaries. We find that ChatGPT initially tends to produce lengthy summaries with detailed information, but with the help of an interactive prompt, it can balance informativeness and conciseness and significantly improve its CLS performance. Experimental results on three widely used CLS datasets show that ChatGPT outperforms the advanced GPT 3.5 model (i.e., text-davinci-003). In addition, we provide qualitative case studies to show the superiority of ChatGPT on CLS.
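
To make the two paradigms concrete, the snippet below contrasts an end-to-end prompt with a summarize-then-translate pipeline and shows an interactive follow-up of the kind the report describes for tightening overly long drafts. All wording is an assumption for illustration, not the paper's exact prompts.

```python
# Illustrative prompts for zero-shot cross-lingual summarization with a chat model.
def end_to_end_prompt(document: str, tgt_lang: str = "Chinese") -> str:
    # One-step paradigm: summarize directly into the target language.
    return f"Summarize the following document in {tgt_lang}:\n{document}"

def pipeline_prompts(document: str, tgt_lang: str = "Chinese") -> tuple[str, str]:
    # Two-step paradigm: summarize first, then translate the summary.
    summarize = f"Summarize the following document:\n{document}"
    translate = f"Translate the summary above into {tgt_lang}."
    return summarize, translate

# Hypothetical interactive follow-up to trade detail for conciseness:
concise_followup = "Please make the summary shorter while keeping the key facts."
```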

* Technical Report, 8 pages 

Understanding Translationese in Cross-Lingual Summarization

Dec 14, 2022
Jiaan Wang, Fandong Meng, Tingyi Zhang, Yunlong Liang, Jiarong Xu, Zhixu Li, Jie Zhou

Given a document in a source language, cross-lingual summarization (CLS) aims at generating a concise summary in a different target language. Unlike monolingual summarization (MS), naturally occurring source-language documents paired with target-language summaries are rare, so existing large-scale CLS datasets typically involve translation in their creation. However, translated text is distinguishable from text originally written in that language, a phenomenon known as translationese. Though many efforts have been devoted to CLS, none of them consider the phenomenon of translationese. In this paper, we first confirm that different approaches to constructing CLS datasets lead to different degrees of translationese. We then design systematic experiments to investigate how translationese affects CLS model evaluation and performance when it appears in source documents or target summaries. In detail, we find that (1) translationese in the documents or summaries of test sets can lead to a discrepancy between human judgment and automatic evaluation; (2) translationese in training sets harms model performance in real-world scenarios; and (3) though machine-translated documents involve translationese, they are very useful for building CLS systems for low-resource languages under specific training strategies. Furthermore, we give suggestions for future CLS research, including dataset and model development. We hope our work draws researchers' attention to the phenomenon of translationese in CLS so that it is taken into account in the future.

* Work in progress 

Long-Document Cross-Lingual Summarization

Dec 01, 2022
Shaohui Zheng, Zhixu Li, Jiaan Wang, Jianfeng Qu, An Liu, Lei Zhao, Zhigang Chen

Cross-lingual summarization (CLS) aims at generating summaries in one language for given documents in another language. CLS has attracted wide research attention due to its practical significance in the multi-lingual world. Though great contributions have been made, existing CLS works typically focus on short documents, such as news articles, short dialogues, and guides. Different from these short texts, long documents such as academic articles and business reports usually discuss complicated subjects and consist of thousands of words, making them non-trivial to process and summarize. To promote CLS research on long documents, we construct Perseus, the first long-document CLS dataset, which contains about 94K Chinese scientific documents paired with English summaries. The average document length in Perseus is more than two thousand tokens. As a preliminary study on long-document CLS, we build and evaluate various CLS baselines, including pipeline and end-to-end methods. Experimental results on Perseus show the superiority of the end-to-end baseline, which outperforms strong pipeline models equipped with sophisticated machine translation systems. Furthermore, to provide a deeper understanding, we manually analyze the model outputs and discuss specific challenges faced by current approaches. We hope that our work can benchmark long-document CLS and benefit future studies.

* Accepted by WSDM 2023 

Generative Entity Typing with Curriculum Learning

Oct 06, 2022
Siyu Yuan, Deqing Yang, Jiaqing Liang, Zhixu Li, Jinxi Liu, Jingyue Huang, Yanghua Xiao

Entity typing aims to assign types to entity mentions in given texts. The traditional classification-based entity typing paradigm has two non-negligible drawbacks: 1) it fails to assign an entity to types beyond the predefined type set, and 2) it can hardly handle few-shot and zero-shot situations where many long-tail types have only a few or even no training instances. To overcome these drawbacks, we propose a novel generative entity typing (GET) paradigm: given a text with an entity mention, the multiple types for the role the entity plays in the text are generated with a pre-trained language model (PLM). However, PLMs tend to generate coarse-grained types after fine-tuning on the entity typing dataset. Besides, we only have heterogeneous training data consisting of a small portion of human-annotated data and a large portion of auto-generated but low-quality data. To tackle these problems, we employ curriculum learning (CL) to train our GET model on the heterogeneous data, where the curriculum is self-adjusted via self-paced learning according to the model's comprehension of type granularity and data heterogeneity. Extensive experiments on datasets in different languages and on downstream tasks justify the superiority of our GET model over state-of-the-art entity typing models. The code has been released at https://github.com/siyuyuan/GET.
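
As a minimal sketch of scheduling heterogeneous data from easy to hard, the code below orders examples by a hand-written difficulty heuristic (type-path depth plus a penalty for auto-generated data) and yields progressively larger training pools. The heuristic and data fields are assumptions; the paper's curriculum is self-adjusted via self-paced learning rather than fixed in advance.

```python
# Hypothetical sketch of an easy-to-hard curriculum over heterogeneous
# entity-typing data; not the paper's self-paced scheme.
from dataclasses import dataclass

@dataclass
class Example:
    mention: str
    context: str
    entity_type: str          # e.g. "person" vs "person/artist/painter"
    human_annotated: bool

def difficulty(ex: Example) -> float:
    # One possible heuristic: deeper (finer-grained) type paths and noisier
    # auto-generated data count as harder.
    depth = ex.entity_type.count("/")
    return depth + (0.0 if ex.human_annotated else 0.5)

def curriculum_batches(data: list[Example], stages: int = 3):
    ordered = sorted(data, key=difficulty)
    stage_size = max(1, len(ordered) // stages)
    for s in range(stages):
        # each stage keeps everything seen so far and adds harder examples
        yield ordered[: stage_size * (s + 1)]
```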

* Accepted to EMNLP 2022 