Zhaoqing Li

A Survey on Privacy in Graph Neural Networks: Attacks, Preservation, and Applications

Sep 19, 2023
Yi Zhang, Yuying Zhao, Zhaoqing Li, Xueqi Cheng, Yu Wang, Olivera Kotevska, Philip S. Yu, Tyler Derr

Graph Neural Networks (GNNs) have gained significant attention owing to their ability to handle graph-structured data and their strong performance in practical applications. However, many of these models prioritize high utility, such as accuracy, while paying little attention to privacy, a major concern in modern society where privacy attacks are rampant. To address this issue, researchers have started to develop privacy-preserving GNNs. Despite this progress, a comprehensive overview of the attacks and the techniques for preserving privacy in the graph domain is still missing. In this survey, we aim to address this gap by summarizing the attacks on graph data according to the targeted information, categorizing the privacy preservation techniques in GNNs, and reviewing the datasets and applications that could be used for analyzing or solving privacy issues in GNNs. We also outline potential directions for future research to build better privacy-preserving GNNs.

Hate Speech Detection via Dual Contrastive Learning

Jul 10, 2023
Junyu Lu, Hongfei Lin, Xiaokun Zhang, Zhaoqing Li, Tongyue Zhang, Linlin Zong, Fenglong Ma, Bo Xu

The rapid spread of hate speech on social media harms the online environment and society by increasing prejudice and hurting people. Hate speech detection has therefore attracted broad attention in natural language processing. Although recent work has addressed this task, it still faces two inherent, unsolved challenges. The first lies in the complex semantic information conveyed in hate speech, particularly the interference of insulting words with hate speech detection. The second is the imbalanced distribution of hate speech and non-hate speech, which can significantly degrade model performance. To tackle these challenges, we propose a novel dual contrastive learning (DCL) framework for hate speech detection. Our framework jointly optimizes a self-supervised and a supervised contrastive learning loss to capture span-level information beyond the token-level emotional semantics used in existing models, which is particularly helpful for detecting speech containing abusive and insulting words. Moreover, we integrate focal loss into the dual contrastive learning framework to alleviate the problem of data imbalance. We conduct experiments on two publicly available English datasets, and the results show that the proposed model outperforms state-of-the-art models and precisely detects hate speech.

* The paper has been accepted by IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP)
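
The abstract above names three loss terms: a self-supervised contrastive loss, a supervised contrastive loss, and a focal loss for the imbalanced classification objective. Below is a minimal PyTorch sketch of how such a combined objective could be assembled; it is not the authors' implementation, and the function names, temperatures, and loss weights are illustrative assumptions only.

```python
# Illustrative sketch only: combines a SimCLR-style self-supervised contrastive
# loss, a supervised contrastive loss, and a focal loss, as described in the
# abstract. All names and hyperparameters are hypothetical, not from the paper.
import torch
import torch.nn.functional as F


def simclr_loss(z1, z2, temperature=0.1):
    """Self-supervised contrastive loss over two augmented views of a batch."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                     # (2N, d)
    sim = z @ z.t() / temperature                      # pairwise similarities
    n = z1.size(0)
    self_mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf'))    # drop self-similarity
    # Positive of sample i is its other augmented view (i+N or i-N).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)


def supervised_contrastive_loss(z, labels, temperature=0.1):
    """Supervised contrastive loss: samples sharing a label act as positives."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(
        sim.masked_fill(self_mask, float('-inf')), dim=1, keepdim=True)
    pos_count = pos_mask.sum(dim=1).clamp(min=1)       # avoid division by zero
    return -(log_prob * pos_mask).sum(dim=1).div(pos_count).mean()


def focal_loss(logits, labels, gamma=2.0):
    """Focal loss: down-weights easy examples to ease class imbalance."""
    ce = F.cross_entropy(logits, labels, reduction='none')
    pt = torch.exp(-ce)                                # prob. of the true class
    return ((1 - pt) ** gamma * ce).mean()


def dcl_objective(z1, z2, logits, labels, w_ssl=1.0, w_sup=1.0, w_cls=1.0):
    """Weighted sum of the three terms; the weights are placeholder values."""
    return (w_ssl * simclr_loss(z1, z2)
            + w_sup * supervised_contrastive_loss(z1, labels)
            + w_cls * focal_loss(logits, labels))
```

In this sketch, z1 and z2 would be encoder representations of two augmented views of the same batch of posts, and logits the classifier outputs; how the paper actually builds its views and weights the terms should be checked against the original text.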