
Jiarong Xu


Better with Less: A Data-Active Perspective on Pre-Training Graph Neural Networks

Nov 21, 2023
Jiarong Xu, Renhong Huang, Xin Jiang, Yuxuan Cao, Carl Yang, Chunping Wang, Yang Yang

Pre-training on graph neural networks (GNNs) aims to learn transferable knowledge for downstream tasks with unlabeled data, and it has recently become an active research area. The success of graph pre-training models is often attributed to the massive amount of input data. In this paper, however, we identify the curse of big data phenomenon in graph pre-training: more training data do not necessarily lead to better downstream performance. Motivated by this observation, we propose a better-with-less framework for graph pre-training: fewer, but carefully chosen, data are fed into a GNN model to enhance pre-training. The proposed pre-training pipeline, called the data-active graph pre-training (APT) framework, is composed of a graph selector and a pre-training model. The graph selector chooses the most representative and instructive data points based on the inherent properties of graphs as well as predictive uncertainty. The proposed predictive uncertainty, as feedback from the pre-training model, measures the confidence level of the model in the data. The pre-training model, in turn, grasps an initial understanding of the new, unseen data while attempting to retain the knowledge learned from previous data. The integration and interaction of these two components thus form a unified framework (APT), in which graph pre-training is performed in a progressive and iterative way. Experimental results show that the proposed APT obtains an efficient pre-training model with fewer training data and better downstream performance.
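As a rough illustration of the data-active selection idea described above, the sketch below scores candidate graphs by combining inherent graph properties with a model-uncertainty signal and keeps the top-scoring ones for the next pre-training round. The property mix, the weighting `alpha`, and the placeholder `predictive_uncertainty` function are illustrative assumptions, not APT's exact criteria.

```python
# A minimal sketch of data-active graph selection (assumed scoring, not APT's exact criterion).
import networkx as nx
import numpy as np

def graph_properties(g: nx.Graph) -> float:
    """Proxy for how 'representative' a graph is: mix density and clustering (illustrative choice)."""
    return 0.5 * nx.density(g) + 0.5 * nx.average_clustering(g)

def predictive_uncertainty(g: nx.Graph) -> float:
    """Placeholder for the pre-training model's confidence feedback; a real implementation
    would measure, e.g., the current model's loss or embedding entropy on this graph."""
    rng = np.random.default_rng(g.number_of_nodes())
    return float(rng.random())

def select_graphs(candidates, k=2, alpha=0.5):
    """Keep the k graphs with the highest combined property/uncertainty score."""
    scores = [alpha * graph_properties(g) + (1 - alpha) * predictive_uncertainty(g)
              for g in candidates]
    top = np.argsort(scores)[::-1][:k]
    return [candidates[i] for i in top]

if __name__ == "__main__":
    pool = [nx.erdos_renyi_graph(n=50, p=p, seed=i) for i, p in enumerate((0.05, 0.1, 0.2, 0.4))]
    chosen = select_graphs(pool, k=2)
    # In an APT-style loop, the chosen graphs would be fed to the pre-training model
    # and selection would repeat with refreshed uncertainty estimates.
    print([g.number_of_edges() for g in chosen])
```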

DISC-FinLLM: A Chinese Financial Large Language Model based on Multiple Experts Fine-tuning

Oct 25, 2023
Wei Chen, Qiushi Wang, Zefei Long, Xianyin Zhang, Zhongtian Lu, Bingxuan Li, Siyuan Wang, Jiarong Xu, Xiang Bai, Xuanjing Huang, Zhongyu Wei

We propose a Multiple Experts Fine-tuning Framework to build a financial large language model (LLM), DISC-FinLLM. Our methodology improves general LLMs by endowing them with multi-turn question answering abilities, domain text processing capabilities, mathematical computation skills, and retrieval-enhanced generation capabilities. We build a financial instruction-tuning dataset named DISC-FIN-SFT, which includes instruction samples of four categories (consulting, NLP tasks, computing, and retrieval-augmented generation). Evaluations conducted on multiple benchmarks demonstrate that our model outperforms baseline models in various financial scenarios. Further resources can be found at https://github.com/FudanDISC/DISC-FinLLM.
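The multi-expert setup can be pictured as a dispatcher over category-specific experts. The sketch below is a hedged toy version: the four categories mirror the DISC-FIN-SFT description, while the keyword-based `route` function and the lambda "experts" are purely illustrative stand-ins for the fine-tuned models.

```python
# Illustrative dispatcher over category-specific experts (assumed routing logic).
from typing import Callable, Dict

# Toy experts; in DISC-FinLLM these would be models fine-tuned on the four categories.
EXPERTS: Dict[str, Callable[[str], str]] = {
    "consulting": lambda q: f"[consulting expert] answer to: {q}",
    "nlp_task": lambda q: f"[NLP-task expert] processed: {q}",
    "computing": lambda q: f"[computing expert] computed result for: {q}",
    "retrieval": lambda q: f"[retrieval-augmented expert] grounded answer to: {q}",
}

def route(query: str) -> str:
    """Pick an expert by simple keyword rules (a real system would use a learned classifier)."""
    q = query.lower()
    if any(w in q for w in ("calculate", "compute", "ratio")):
        return "computing"
    if any(w in q for w in ("extract", "classify", "summarize")):
        return "nlp_task"
    if any(w in q for w in ("latest", "report", "news")):
        return "retrieval"
    return "consulting"

if __name__ == "__main__":
    for query in ("Compute the P/E ratio for a stock priced at 50 with EPS of 2.5",
                  "What is a convertible bond?"):
        expert = route(query)
        print(expert, "->", EXPERTS[expert](query))
```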

* 18 pages, 13 figures, 7 tables 

Cross-Lingual Knowledge Editing in Large Language Models

Sep 16, 2023
Jiaan Wang, Yunlong Liang, Zengkui Sun, Yuxuan Cao, Jiarong Xu

Knowledge editing aims to change language models' behavior on specific cases (i.e., the editing scope) by infusing the corresponding expected knowledge into them. With the recent advancements in large language models (LLMs), knowledge editing has emerged as a promising technique for adapting LLMs to new knowledge without retraining from scratch. However, most previous studies neglect the multilingual nature of some mainstream LLMs (e.g., LLaMA, ChatGPT and GPT-4) and typically focus on monolingual scenarios, where LLMs are edited and evaluated in the same language. As a result, the effect of editing in a source language on a different target language remains unknown. In this paper, we aim to investigate this cross-lingual effect in knowledge editing. Specifically, we first collect a large-scale cross-lingual synthetic dataset by translating ZsRE from English to Chinese. Then, we apply English editing to various knowledge editing methods covering different paradigms and evaluate their performance in Chinese, and vice versa. To provide deeper analyses of the cross-lingual effect, the evaluation covers four aspects, i.e., reliability, generality, locality and portability. Furthermore, we analyze the inconsistent behaviors of the edited models and discuss their specific challenges.
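A schematic of how the four evaluation aspects could be computed is sketched below; the `edited_model` stub and the toy probes are assumptions for illustration, whereas a real evaluation would draw its probes from the translated ZsRE data.

```python
# Schematic harness for the four knowledge-editing evaluation aspects (toy probes assumed).
from typing import Callable, Dict, List, Tuple

Probe = Tuple[str, str]  # (prompt, expected answer)

def accuracy(model: Callable[[str], str], probes: List[Probe]) -> float:
    hits = sum(model(prompt) == answer for prompt, answer in probes)
    return hits / len(probes) if probes else 0.0

def evaluate_edit(model: Callable[[str], str], probes: Dict[str, List[Probe]]) -> Dict[str, float]:
    """Reliability: the edited fact itself; generality: paraphrases; locality: unrelated
    facts that must stay unchanged; portability: facts that should follow from the edit."""
    return {aspect: accuracy(model, aspect_probes) for aspect, aspect_probes in probes.items()}

if __name__ == "__main__":
    # Toy edited model: answers only the exact edited phrasing correctly.
    edited_model = lambda prompt: "Paris" if "capital of France" in prompt else "unknown"
    probes = {
        "reliability": [("What is the capital of France?", "Paris")],
        "generality": [("France's capital city is?", "Paris")],
        "locality": [("What is the capital of Japan?", "Tokyo")],
        "portability": [("The Eiffel Tower is located in which city?", "Paris")],
    }
    print(evaluate_edit(edited_model, probes))
```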

When to Pre-Train Graph Neural Networks? An Answer from Data Generation Perspective!

Apr 03, 2023
Yuxuan Cao, Jiarong Xu, Carl Yang, Jiaan Wang, Yunchao Zhang, Chunping Wang, Lei Chen, Yang Yang

Recently, graph pre-training has attracted wide research attention; it aims to learn transferable knowledge from unlabeled graph data so as to improve downstream performance. Despite these recent attempts, negative transfer remains a major issue when applying graph pre-trained models to downstream tasks. Existing works have made great efforts on what to pre-train and how to pre-train by designing a number of graph pre-training and fine-tuning strategies. However, there are cases where, no matter how advanced the strategy is, the "pre-train and fine-tune" paradigm still cannot achieve clear benefits. This paper introduces a generic framework, W2PGNN, to answer the crucial question of when to pre-train (i.e., in what situations we could take advantage of graph pre-training) before performing costly pre-training or fine-tuning. We start from a new perspective and explore the complex generative mechanisms leading from the pre-training data to the downstream data. In particular, W2PGNN first fits the pre-training data into graphon bases; each element of the graphon basis (i.e., a graphon) identifies a fundamental transferable pattern shared by a collection of pre-training graphs. All convex combinations of the graphon bases give rise to a generator space, and the graphs generated from this space form the solution space for downstream data that can benefit from pre-training. In this manner, the feasibility of pre-training can be quantified as the generation probability of the downstream data from any generator in the generator space. W2PGNN provides three broad applications: providing the application scope of graph pre-trained models, quantifying the feasibility of performing pre-training, and helping select pre-training data to enhance downstream performance. We give a theoretically sound solution for the first application and extensive empirical justifications for the latter two.
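To make the graphon-basis idea concrete, here is a minimal sketch: each graph is summarized by a step-function graphon estimate, and downstream feasibility is scored by how close the downstream graphon lies to the convex hull of the pre-training bases. The block-model estimator and the coarse grid search are simplifications assumed for illustration, not W2PGNN's actual estimator or solver.

```python
# Hedged sketch: fit step-function graphons, then score downstream feasibility
# as (negative) distance to the convex hull of the pre-training graphon basis.
import itertools
import networkx as nx
import numpy as np

def step_graphon(g: nx.Graph, k: int = 4) -> np.ndarray:
    """Estimate a k x k step-function graphon: sort nodes by degree, split into k blocks,
    and record block-wise edge densities."""
    nodes = sorted(g.nodes(), key=g.degree, reverse=True)
    blocks = np.array_split(nodes, k)
    a = nx.to_numpy_array(g, nodelist=nodes)
    idx = np.cumsum([0] + [len(b) for b in blocks])
    w = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            sub = a[idx[i]:idx[i + 1], idx[j]:idx[j + 1]]
            w[i, j] = sub.mean() if sub.size else 0.0
    return w

def feasibility(downstream: np.ndarray, basis: list, steps: int = 20) -> float:
    """Distance from the downstream graphon to the convex hull of the basis
    (smaller distance ~ higher benefit), via a coarse grid search over convex weights."""
    best = np.inf
    for weights in itertools.product(np.linspace(0, 1, steps + 1), repeat=len(basis)):
        total = sum(weights)
        if total == 0:
            continue
        w = np.array(weights) / total  # normalize onto the simplex
        mix = sum(wi * b for wi, b in zip(w, basis))
        best = min(best, float(np.linalg.norm(mix - downstream)))
    return -best  # higher is more feasible

if __name__ == "__main__":
    pretrain = [nx.barabasi_albert_graph(80, 3, seed=s) for s in range(3)]
    downstream = nx.barabasi_albert_graph(60, 3, seed=7)
    basis = [step_graphon(g) for g in pretrain]
    print("feasibility score:", feasibility(step_graphon(downstream), basis))
```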

* This paper was withdrawn because it was submitted without the consent of one of the co-authors. It does not contain any errors that need to be corrected 

Unifying Structure Reasoning and Language Model Pre-training for Complex Reasoning

Jan 21, 2023
Siyuan Wang, Zhongyu Wei, Jiarong Xu, Zhihao Fan

Recent knowledge-enhanced pre-trained language models have shown remarkable performance on downstream tasks by incorporating structured knowledge from external sources into language models. However, they usually suffer from a heterogeneous information alignment problem and a noisy knowledge injection problem. For complex reasoning, the contexts contain rich knowledge that typically exists in complex and sparse forms. In order to model structured knowledge in the context and avoid these two problems, we propose to unify structure reasoning and language model pre-training. Our approach identifies four types of elementary knowledge structures from contexts to construct structured queries, and it uses the box embedding method to conduct explicit structure reasoning along these queries during language modeling. To fuse textual and structured semantics, we use contextual language representations of the knowledge structures to initialize their box embeddings for structure reasoning. We conduct experiments on complex language reasoning and knowledge graph (KG) reasoning tasks. The results show that our model can effectively enhance the performance of complex reasoning in both the language and KG modalities.
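As a toy picture of box-embedding reasoning, the sketch below intersects two relation boxes to form a structured query and scores a candidate entity embedding by how well it falls inside the result. The box parameterization and scoring are generic box-embedding operations assumed for illustration, not the paper's training objective.

```python
# Toy box-embedding query: intersect two constraint boxes, then score a candidate point.
import numpy as np

class Box:
    """An axis-aligned box in embedding space, parameterized by lower/upper corners."""
    def __init__(self, low, high):
        self.low, self.high = np.asarray(low, float), np.asarray(high, float)

    def intersect(self, other: "Box") -> "Box":
        return Box(np.maximum(self.low, other.low), np.minimum(self.high, other.high))

    def contains_score(self, point: np.ndarray) -> float:
        """Soft score of how well a point embedding lies inside the box (0 = outside)."""
        margin = np.minimum(point - self.low, self.high - point)
        return float(np.clip(margin, 0, None).min())

if __name__ == "__main__":
    # Query: entities that are both "musicians" and "born in France" (toy boxes).
    musicians = Box([0.0, 0.0], [0.6, 0.8])
    born_in_france = Box([0.3, 0.2], [1.0, 0.9])
    query_box = musicians.intersect(born_in_france)
    candidate = np.array([0.45, 0.5])  # contextual embedding of a candidate entity
    print("answer score:", query_box.contains_score(candidate))
```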

* 10 pages, 4 figures, 6 tables 

Understanding Translationese in Cross-Lingual Summarization

Dec 14, 2022
Jiaan Wang, Fandong Meng, Tingyi Zhang, Yunlong Liang, Jiarong Xu, Zhixu Li, Jie Zhou

Given a document in a source language, cross-lingual summarization (CLS) aims at generating a concise summary in a different target language. Unlike in monolingual summarization (MS), naturally occurring source-language documents paired with target-language summaries are rare. To collect large-scale CLS samples, existing datasets typically involve translation in their creation. However, translated text differs from text originally written in that language, a phenomenon known as translationese. Though many efforts have been devoted to CLS, none of them has noticed the phenomenon of translationese. In this paper, we first confirm that different approaches to constructing CLS datasets lead to different degrees of translationese. Then we design systematic experiments to investigate how translationese affects CLS model evaluation and performance when it appears in source documents or target summaries. In detail, we find that (1) translationese in the documents or summaries of test sets might lead to a discrepancy between human judgment and automatic evaluation; (2) translationese in training sets would harm model performance in real-world settings; (3) though machine-translated documents involve translationese, they are very useful for building CLS systems for low-resource languages under specific training strategies. Furthermore, we give suggestions for future CLS research, including dataset and model development. We hope that our work draws researchers' attention to the phenomenon of translationese in CLS and encourages them to take it into account in the future.

* Work in progress 

DGraph: A Large-Scale Financial Dataset for Graph Anomaly Detection

Jul 12, 2022
Xuanwen Huang, Yang Yang, Yang Wang, Chunping Wang, Zhisheng Zhang, Jiarong Xu, Lei Chen

Graph Anomaly Detection (GAD) has recently become a hot research topic due to its practicability and theoretical value. Since GAD emphasizes applications and anomalous samples are rare, enriching the variety of its datasets is fundamental work. To this end, this paper presents DGraph, a real-world dynamic graph in the finance domain. DGraph overcomes many limitations of current GAD datasets: it contains about 3M nodes, 4M dynamic edges, and 1M ground-truth nodes. We provide a comprehensive observation of DGraph, revealing that anomalous nodes and normal nodes generally differ in structure, neighbor distribution, and temporal dynamics. Moreover, the observation suggests that unlabeled nodes are also essential for detecting fraudsters. Furthermore, we conduct extensive experiments on DGraph. These observations and experiments demonstrate that DGraph can propel GAD research and enable in-depth exploration of anomalous nodes.
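To illustrate the observation that anomalous nodes differ structurally from normal ones, the sketch below builds a small synthetic graph with rewired "fraud" nodes and trains a simple classifier on handcrafted structural features; the synthetic data and the two features are stand-ins for illustration and are not drawn from DGraph itself.

```python
# Toy structural-feature baseline for node anomaly detection on a synthetic graph.
import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
g = nx.barabasi_albert_graph(2000, 3, seed=0)
labels = (rng.random(g.number_of_nodes()) < 0.05).astype(int)  # ~5% "fraud" nodes

# Give anomalous nodes a different structure: attach each to a few extra random targets.
for v in np.flatnonzero(labels):
    g.add_edges_from((int(v), int(u)) for u in rng.integers(0, 2000, size=5) if int(u) != int(v))

def node_features(graph: nx.Graph) -> np.ndarray:
    """Two handcrafted structural features per node: degree and clustering coefficient."""
    clust = nx.clustering(graph)
    return np.array([[graph.degree(v), clust[v]] for v in graph.nodes()])

X = node_features(g)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0, stratify=labels)
clf = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X_tr, y_tr)
print("ROC-AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```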

* 9 pages 

A Unified Continuous Learning Framework for Multi-modal Knowledge Discovery and Pre-training

Jun 11, 2022
Zhihao Fan, Zhongyu Wei, Jingjing Chen, Siyuan Wang, Zejun Li, Jiarong Xu, Xuanjing Huang

Multi-modal pre-training and knowledge discovery are two important research topics in multi-modal machine learning. Nevertheless, none of the existing works attempts to link knowledge discovery with knowledge-guided multi-modal pre-training. In this paper, we propose to unify the two into a continuous learning framework for mutual improvement. Taking open-domain uni-modal datasets of images and texts as input, we maintain a knowledge graph as the foundation supporting both tasks. For knowledge discovery, a pre-trained model is used to identify cross-modal links on the graph; for model pre-training, the knowledge graph is used as external knowledge to guide model updating. These two steps are performed iteratively in our framework for continuous learning. Experimental results on MS-COCO and Flickr30K, with respect to both knowledge discovery and the pre-trained model, validate the effectiveness of our framework.
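The alternating loop can be sketched schematically as below: knowledge discovery adds confident cross-modal links to a knowledge graph, and the updated graph then guides the next round of pre-training. Every component here (the scoring lambdas, the `discover_links` and `pretrain_step` helpers) is a hypothetical stub standing in for the real models and data.

```python
# Schematic of the alternating discovery / pre-training loop over a shared knowledge graph.
import networkx as nx

def discover_links(model_score, images, texts, kg, threshold=0.6):
    """Use the current model to score image-text pairs and add confident links to the KG."""
    for img in images:
        for txt in texts:
            if model_score(img, txt) >= threshold:
                kg.add_edge(img, txt, relation="depicts")

def pretrain_step(kg):
    """Stand-in for knowledge-guided pre-training: return a 'model' whose scores
    are nudged by how connected the knowledge graph already is."""
    bonus = 0.01 * kg.number_of_edges()
    return lambda img, txt: 0.4 + bonus + (0.2 if img.split("_")[1] == txt.split("_")[1] else 0.0)

if __name__ == "__main__":
    images = [f"img_{i}" for i in range(3)]
    texts = [f"txt_{i}" for i in range(3)]
    kg = nx.Graph()
    model = lambda img, txt: 0.45 + (0.2 if img.split("_")[1] == txt.split("_")[1] else 0.0)
    for round_id in range(3):  # alternate knowledge discovery and pre-training
        discover_links(model, images, texts, kg)
        model = pretrain_step(kg)
        print(f"round {round_id}: KG has {kg.number_of_edges()} cross-modal links")
```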
