
Zhenxi Lin

Emerging Drug Interaction Prediction Enabled by Flow-based Graph Neural Network with Biomedical Network

Nov 15, 2023
Yongqi Zhang, Quanming Yao, Ling Yue, Xian Wu, Ziheng Zhang, Zhenxi Lin, Yefeng Zheng

Emerging drugs offer new possibilities for treating and alleviating diseases, and accurately predicting their drug-drug interactions (DDIs) with computational methods can improve patient care and contribute to efficient drug development. However, many existing computational methods require large amounts of known DDI information, which is scarce for emerging drugs. In this paper, we propose EmerGNN, a graph neural network (GNN) that effectively predicts interactions for emerging drugs by leveraging the rich information in biomedical networks. EmerGNN learns pairwise representations of drugs by extracting the paths between a drug pair, propagating information from one drug to the other, and incorporating the relevant biomedical concepts along these paths. Edges in the biomedical network are weighted to indicate their relevance to the target DDI prediction. Overall, EmerGNN achieves higher accuracy than existing approaches in predicting interactions for emerging drugs and can identify the most relevant information in the biomedical network.

* Accepted by Nature Computational Science 
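
To make the path-based intuition concrete, here is a minimal sketch of extracting the paths between a drug pair and scoring the pair with per-relation relevance weights. All entities, relations, and weights below are invented for illustration, and the product-of-weights scoring is a simplified stand-in for EmerGNN's learned GNN propagation, not the paper's actual method:

```python
import math

# Toy biomedical network as (head, relation, tail) triples. All names invented.
edges = [
    ("drugA", "targets", "geneX"),
    ("geneX", "associated_with", "disease1"),
    ("disease1", "treated_by", "drugB"),
    ("drugA", "interacts", "protein1"),
    ("protein1", "binds", "drugB"),
]

def enumerate_paths(edges, src, dst, max_len=3):
    """Enumerate relation sequences along simple paths from src to dst."""
    adj = {}
    for h, r, t in edges:
        adj.setdefault(h, []).append((r, t))
    paths = []

    def dfs(node, rels, visited):
        if node == dst and rels:
            paths.append(tuple(rels))
            return
        if len(rels) == max_len:
            return
        for r, t in adj.get(node, []):
            if t not in visited:
                dfs(t, rels + [r], visited | {t})

    dfs(src, [], {src})
    return paths

def pair_score(paths, rel_weight):
    """Score a drug pair by summing, over paths, the product of relevance
    weights of the relations on the path; relations with weight near zero
    suppress the paths they appear on."""
    return sum(math.prod(rel_weight.get(r, 0.0) for r in path)
               for path in paths)

# Example: score the emerging pair (drugA, drugB) with invented weights.
rel_weight = {"targets": 0.9, "associated_with": 0.5,
              "treated_by": 0.8, "interacts": 0.2, "binds": 0.3}
paths = enumerate_paths(edges, "drugA", "drugB")
score = pair_score(paths, rel_weight)
```

In this toy graph two paths connect the pair (a three-hop gene/disease path and a two-hop protein path), and the weighting lets the more relevant path dominate the score.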

Relation-aware Ensemble Learning for Knowledge Graph Embedding

Oct 13, 2023
Ling Yue, Yongqi Zhang, Quanming Yao, Yong Li, Xian Wu, Ziheng Zhang, Zhenxi Lin, Yefeng Zheng

Knowledge graph (KG) embedding is a fundamental task in natural language processing, and various methods have been proposed to explore semantic patterns in distinctive ways. In this paper, we propose to learn an ensemble that leverages existing methods in a relation-aware manner. However, exploring these semantics with a relation-aware ensemble leads to a much larger search space than general ensemble methods face. To address this issue, we propose RelEns-DSC, a divide-search-combine algorithm that searches the relation-wise ensemble weights independently. This algorithm has the same computation cost as general ensemble methods but much better performance. Experimental results on benchmark datasets demonstrate that the proposed method efficiently searches relation-aware ensemble weights and achieves state-of-the-art embedding performance. The code is publicly available at https://github.com/LARS-research/RelEns.

* This short paper has been accepted by EMNLP 2023 
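
The divide-search-combine idea can be illustrated with a toy sketch: partition validation queries by relation, grid-search an ensemble weight for each relation independently, and combine the per-relation results. The two-model setup, scores, and accuracy objective below are all invented for illustration; the paper's actual objective and search procedure may differ:

```python
# Toy validation queries grouped by relation: each query holds two base
# models' candidate scores plus the gold answer index. Numbers invented.
val_data = {
    "rel_a": [([0.9, 0.1, 0.0], [0.2, 0.7, 0.1], 0),
              ([0.3, 0.6, 0.1], [0.8, 0.1, 0.1], 0)],
    "rel_b": [([0.1, 0.8, 0.1], [0.6, 0.2, 0.2], 1)],
}

def accuracy(queries, w):
    """Fraction of queries whose top-ranked candidate under the combination
    w * model1 + (1 - w) * model2 is the gold answer."""
    correct = 0
    for s1, s2, gold in queries:
        combined = [w * a + (1 - w) * b for a, b in zip(s1, s2)]
        if max(range(len(combined)), key=combined.__getitem__) == gold:
            correct += 1
    return correct / len(queries)

def search_relation_weights(val_data, grid=None):
    """Divide queries by relation, grid-search each relation's ensemble
    weight independently, then combine the per-relation results."""
    grid = grid if grid is not None else [i / 10 for i in range(11)]
    return {rel: max(grid, key=lambda w: accuracy(queries, w))
            for rel, queries in val_data.items()}

weights = search_relation_weights(val_data)
```

Because each relation's weight is searched on its own slice of the data, the total search cost matches a single global grid search while allowing different relations to trust different base models.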

Perturbation-based Self-supervised Attention for Attention Bias in Text Classification

May 25, 2023
Huawen Feng, Zhenxi Lin, Qianli Ma

In text classification, traditional attention mechanisms tend to focus too heavily on frequent words and require extensive labeled data to learn. This paper proposes a perturbation-based self-supervised attention approach that guides attention learning without any annotation overhead. Specifically, we add as much noise as possible to each word in a sentence without changing its semantics or the model's prediction. We hypothesize that words tolerating more noise are less significant, and this information can be used to refine the attention distribution. Experimental results on three text classification tasks show that our approach significantly improves the performance of current attention-based models and is more effective than existing self-supervised methods. We also provide a visualization analysis to verify the effectiveness of the approach.
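
The core idea (words that tolerate more perturbation matter less) can be sketched in a few lines. The scalar "embeddings", the toy classifier, and the deterministic shrink-until-flip perturbation below are illustrative stand-ins for the paper's stochastic noise and real models:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def toy_predict(embs):
    # Stand-in classifier over scalar "embeddings": positive iff any
    # word embedding exceeds 1.0. Purely illustrative.
    return int(max(embs) > 1.0)

def max_tolerated_noise(predict, embs, i, step=0.1, limit=5.0):
    """Shrink word i's embedding until the prediction flips; return the
    largest perturbation it tolerates (a deterministic proxy for noise)."""
    base = predict(embs)
    sigma = 0.0
    while sigma + step <= limit:
        perturbed = [e - (sigma + step) if j == i else e
                     for j, e in enumerate(embs)]
        if predict(perturbed) != base:
            break
        sigma += step
    return sigma

def self_supervised_attention(predict, embs):
    """Words tolerating more noise matter less: map negated tolerance
    through softmax to obtain a refined attention distribution."""
    sigmas = [max_tolerated_noise(predict, embs, i)
              for i in range(len(embs))]
    return softmax([-s for s in sigmas])

# The first "word" drives the prediction, so it tolerates little noise
# and receives most of the attention mass.
attn = self_supervised_attention(toy_predict, [1.5, 0.3, 0.4])
```

The resulting distribution is then used as a supervision signal for the model's own attention weights, which is where the annotation-free refinement comes from.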

Multi-modal Contrastive Representation Learning for Entity Alignment

Sep 02, 2022
Zhenxi Lin, Ziheng Zhang, Meng Wang, Yinghui Shi, Xian Wu, Yefeng Zheng

Multi-modal entity alignment aims to identify equivalent entities between two different multi-modal knowledge graphs, which consist of structural triples and images associated with entities. Most previous works focus on how to encode and utilize information from different modalities, yet leveraging multi-modal knowledge for entity alignment remains nontrivial because of modality heterogeneity. In this paper, we propose MCLEA, a Multi-modal Contrastive Learning based Entity Alignment model, to obtain effective joint representations for multi-modal entity alignment. Unlike previous works, MCLEA considers task-oriented modalities and models the inter-modal relationships for each entity representation. In particular, MCLEA first learns individual representations from multiple modalities and then performs contrastive learning to jointly model intra-modal and inter-modal interactions. Extensive experimental results show that MCLEA outperforms state-of-the-art baselines on public datasets under both supervised and unsupervised settings.

* Accepted by COLING 2022 
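
The contrastive objective can be sketched with a plain InfoNCE loss over aligned entity embeddings, where each anchor's same-index partner is the positive and the rest of the batch serves as negatives. The vectors and temperature below are invented, and MCLEA's actual intra-modal and inter-modal losses are richer than this single term:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE loss: for anchor i, positives[i] is the positive pair and
    every other entry in `positives` acts as an in-batch negative."""
    loss = 0.0
    for i, a in enumerate(anchors):
        sims = [cosine(a, p) / temperature for p in positives]
        m = max(sims)  # log-sum-exp with max subtraction for stability
        log_denom = m + math.log(sum(math.exp(s - m) for s in sims))
        loss += log_denom - sims[i]
    return loss / len(anchors)

# Intra-modal example: structural embeddings of aligned entities in two
# KGs (toy 2-d vectors). The same loss applied to, say, an entity's
# structural vs. visual embedding gives the inter-modal term.
kg1_struct = [[1.0, 0.0], [0.0, 1.0]]
kg2_struct = [[0.9, 0.1], [0.1, 0.9]]
loss_aligned = info_nce(kg1_struct, kg2_struct)
```

Correctly aligned pairs yield a much lower loss than shuffled pairs, which is exactly the signal that pulls matching entities together across graphs and modalities.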

OntoEA: Ontology-guided Entity Alignment via Joint Knowledge Graph Embedding

May 24, 2021
Yuejia Xiang, Ziheng Zhang, Jiaoyan Chen, Xi Chen, Zhenxi Lin, Yefeng Zheng

Semantic embedding has been widely investigated for aligning knowledge graph (KG) entities. Current methods have explored and utilized the graph structure and the entity names and attributes, but ignore the ontology (or ontological schema), which contains critical meta-information such as classes and their membership relationships with entities. In this paper, we propose OntoEA, an ontology-guided entity alignment method in which both the KGs and their ontologies are jointly embedded, and the class hierarchy and class disjointness are utilized to avoid false mappings. Extensive experiments on seven public and industrial benchmarks demonstrate the state-of-the-art performance of OntoEA and the effectiveness of the ontologies.

* Accepted by Findings of ACL 2021 
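
How class disjointness can prune false mappings is easy to sketch: two entities cannot align if any ancestors of their classes are declared disjoint. The toy ontology and the compatibility check below are invented for illustration; OntoEA encodes these constraints in the joint embedding rather than as a hard filter:

```python
# Toy ontology: single-parent subclass edges plus declared disjoint
# class pairs. All class names are invented for illustration.
subclass = {"Scientist": "Person", "Politician": "Person",
            "City": "Place", "Person": "Thing", "Place": "Thing"}
disjoint = {("Person", "Place")}

def ancestors(cls):
    """Collect cls and all of its superclasses up the hierarchy."""
    out = {cls}
    while cls in subclass:
        cls = subclass[cls]
        out.add(cls)
    return out

def classes_compatible(c1, c2):
    """Two classes are compatible unless some pair of their ancestors is
    declared disjoint; incompatible pairs are pruned as false mappings."""
    a1, a2 = ancestors(c1), ancestors(c2)
    for x in a1:
        for y in a2:
            if (x, y) in disjoint or (y, x) in disjoint:
                return False
    return True

# Scientist and Politician share the ancestor Person, so they remain
# candidates; Scientist vs. City hits the (Person, Place) disjointness
# and is pruned.
```

The class hierarchy makes a single disjointness axiom cover all of its subclasses, which is why the ontology adds constraint power beyond entity names and attributes.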