Fenglong Ma
Zero-Resource Hallucination Prevention for Large Language Models

Sep 12, 2023
Junyu Luo, Cao Xiao, Fenglong Ma

The prevalent use of large language models (LLMs) across various domains has drawn attention to the issue of "hallucination," which refers to instances where LLMs generate factually inaccurate or ungrounded information. Existing hallucination-detection techniques for language assistants rely on intricate free-language-based chain-of-thought (CoT) prompting or on parameter-based methods that suffer from interpretability issues. Moreover, methods that identify hallucinations after generation cannot prevent their occurrence, and their performance varies inconsistently with the instruction format and model style. In this paper, we introduce a novel pre-detection self-evaluation technique, referred to as SELF-FAMILIARITY, which evaluates the model's familiarity with the concepts in the input instruction and withholds the response when unfamiliar concepts are present. This approach emulates the human ability to refrain from responding on unfamiliar topics, thus reducing hallucinations. We validate SELF-FAMILIARITY on four different large language models, demonstrating consistently superior performance compared to existing techniques. Our findings suggest a significant shift toward preemptive strategies for hallucination mitigation in LLM assistants, promising improvements in reliability, applicability, and interpretability.
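The gating idea behind the abstract can be sketched in a few lines: extract the concepts from an instruction, score how familiar the model is with each, and withhold the response if any score falls below a threshold. The scoring function below is a hypothetical stand-in (the paper derives familiarity from the model itself), so treat this as an illustration of the control flow only.

```python
# Minimal sketch of a pre-detection familiarity gate in the spirit of
# SELF-FAMILIARITY. The familiarity_score function is a toy stand-in:
# the actual method elicits familiarity from the LLM being guarded.

def familiarity_score(concept, known_concepts):
    """Toy familiarity score: 1.0 if the concept is known, else 0.0."""
    return 1.0 if concept in known_concepts else 0.0

def should_respond(concepts, known_concepts, threshold=0.5):
    """Withhold the response if any extracted concept is unfamiliar."""
    return all(familiarity_score(c, known_concepts) >= threshold
               for c in concepts)

known = {"federated learning", "contrastive loss"}
print(should_respond(["federated learning"], known))  # True
print(should_respond(["zorbak manifold"], known))     # False
```

The key design point is that the check runs *before* generation, so an unfamiliar concept blocks the response entirely rather than being flagged after a hallucination has already been produced.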

Towards Personalized Federated Learning via Heterogeneous Model Reassembly

Sep 06, 2023
Jiaqi Wang, Xingyi Yang, Suhan Cui, Liwei Che, Lingjuan Lyu, Dongkuan Xu, Fenglong Ma

This paper addresses the practical yet challenging problem of model heterogeneity in federated learning, where clients possess models with different network structures. To tackle this problem, we propose a novel framework called pFedHR, which leverages heterogeneous model reassembly to achieve personalized federated learning. In particular, we formulate heterogeneous model personalization as a model-matching optimization task on the server side. Moreover, pFedHR automatically and dynamically generates informative and diverse personalized candidates with minimal human intervention. Furthermore, our proposed heterogeneous model reassembly technique mitigates, to a certain extent, the adverse impact of using public data whose distribution differs from that of the client data. Experimental results demonstrate that pFedHR outperforms baselines on three datasets under both IID and Non-IID settings. Additionally, pFedHR effectively reduces the adverse impact of using different public data and dynamically generates diverse personalized models in an automated manner.
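A minimal sketch of the server-side matching step: before any reassembly, layers from heterogeneous client models must be grouped into compatible candidates. The shape-based grouping below is an illustrative assumption, not pFedHR's actual matching optimization, and the client models are toy examples.

```python
# Hedged sketch of server-side model matching for reassembly: group
# layers from heterogeneous client models by parameter shape, so that
# shape-compatible pieces can be swapped when assembling candidates.

from collections import defaultdict

def group_layers_by_shape(client_layers):
    """client_layers: {client_id: [(layer_name, shape), ...]}.
    Returns {shape: [(client_id, layer_name), ...]} -- layers that
    could, in principle, replace one another during reassembly."""
    groups = defaultdict(list)
    for cid, layers in client_layers.items():
        for name, shape in layers:
            groups[shape].append((cid, name))
    return dict(groups)

clients = {
    "A": [("conv1", (16, 3, 3, 3)), ("fc", (10, 64))],
    "B": [("conv1", (16, 3, 3, 3)), ("fc", (10, 128))],
}
groups = group_layers_by_shape(clients)
print(groups[(16, 3, 3, 3)])  # [('A', 'conv1'), ('B', 'conv1')]
```

Here the two `conv1` layers match and could be exchanged across clients, while the two `fc` layers cannot; the real framework additionally scores candidates by function, not just shape.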

Hate Speech Detection via Dual Contrastive Learning

Jul 10, 2023
Junyu Lu, Hongfei Lin, Xiaokun Zhang, Zhaoqing Li, Tongyue Zhang, Linlin Zong, Fenglong Ma, Bo Xu

The fast spread of hate speech on social media harms the Internet environment and our society by increasing prejudice and hurting people. Detecting hate speech has attracted broad attention in the field of natural language processing. Although hate speech detection has been addressed in recent work, the task still faces two inherent unsolved challenges. The first lies in the complex semantic information conveyed in hate speech, particularly the interference of insulting words in hate speech detection. The second is the imbalanced distribution of hate speech and non-hate speech, which may significantly deteriorate model performance. To tackle these challenges, we propose a novel dual contrastive learning (DCL) framework for hate speech detection. Our framework jointly optimizes self-supervised and supervised contrastive learning losses to capture span-level information beyond the token-level emotional semantics used in existing models, particularly for detecting speech containing abusive and insulting words. Moreover, we integrate the focal loss into the dual contrastive learning framework to alleviate the data-imbalance problem. We conduct experiments on two publicly available English datasets, and the results show that the proposed model outperforms state-of-the-art models and precisely detects hate speech.
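The two ingredients named above, a contrastive objective and the focal loss, can be combined into one joint objective as a small sketch. The weighting scheme, temperature, and toy inputs below are illustrative assumptions, not the paper's exact formulation.

```python
import math

# Sketch of jointly optimizing a contrastive loss with the focal loss,
# in the spirit of the DCL framework. Focal loss down-weights easy
# examples to cope with class imbalance; InfoNCE is a standard
# contrastive objective used here as a stand-in.

def focal_loss(p, y, gamma=2.0):
    """Focal loss for one binary example.
    p: predicted probability of the positive class; y in {0, 1}."""
    pt = p if y == 1 else 1.0 - p
    return -((1.0 - pt) ** gamma) * math.log(pt)

def info_nce(sim_pos, sim_negs, temperature=0.1):
    """InfoNCE-style contrastive loss from raw similarity scores."""
    logits = [sim_pos / temperature] + [s / temperature for s in sim_negs]
    m = max(logits)  # log-sum-exp with max-shift for stability
    log_denom = m + math.log(sum(math.exp(x - m) for x in logits))
    return -(logits[0] - log_denom)

def dual_objective(p, y, sim_pos, sim_negs, alpha=0.5):
    """Joint objective: focal (classification) + contrastive terms."""
    return alpha * focal_loss(p, y) + (1 - alpha) * info_nce(sim_pos, sim_negs)

loss = dual_objective(p=0.9, y=1, sim_pos=0.8, sim_negs=[0.1, 0.2])
print(round(loss, 4))
```

Because of the `(1 - pt) ** gamma` factor, a confidently correct prediction contributes almost nothing, which keeps the abundant non-hate class from dominating training.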

* The paper has been accepted in IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP) 

On the Security Risks of Knowledge Graph Reasoning

May 03, 2023
Zhaohan Xi, Tianyu Du, Changjiang Li, Ren Pang, Shouling Ji, Xiapu Luo, Xusheng Xiao, Fenglong Ma, Ting Wang

Knowledge graph reasoning (KGR) -- answering complex logical queries over large knowledge graphs -- represents an important artificial intelligence task with a range of applications (e.g., cyber threat hunting). However, despite its surging popularity, the potential security risks of KGR are largely unexplored, which is concerning given the increasing use of such capability in security-critical domains. This work represents a solid initial step toward bridging this gap. We systematize the security threats to KGR according to the adversary's objectives, knowledge, and attack vectors. Further, we present ROAR, a new class of attacks that instantiate a variety of such threats. Through empirical evaluation in representative use cases (e.g., medical decision support, cyber threat hunting, and commonsense reasoning), we demonstrate that ROAR is highly effective at misleading KGR into suggesting pre-defined answers for target queries, with negligible impact on non-target ones. Finally, we explore potential countermeasures against ROAR, including filtering of potentially poisoned knowledge and training with adversarially augmented queries, which lead to several promising research directions.
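One of the countermeasure directions mentioned above, filtering potentially poisoned knowledge, can be sketched as a corroboration check before triples are ingested into the graph. The source counts and the threshold here are illustrative assumptions, not the defense evaluated in the paper.

```python
# Sketch of a knowledge-filtering countermeasure: require that each
# candidate triple be corroborated by several independent sources
# before it enters the knowledge graph, so a single injected claim
# is unlikely to survive.

def filter_triples(candidates, min_support=2):
    """candidates: {(head, relation, tail): num_independent_sources}.
    Keep only triples corroborated by at least min_support sources."""
    return {t for t, support in candidates.items() if support >= min_support}

candidates = {
    ("aspirin", "treats", "headache"): 5,
    ("aspirin", "treats", "malware"): 1,  # plausibly an injected triple
}
kept = filter_triples(candidates)
print(kept)  # {('aspirin', 'treats', 'headache')}
```

A real defense must also handle adversaries who poison multiple sources at once, which is why the paper pairs filtering with adversarially augmented training.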

* In proceedings of USENIX Security'23. Codes: https://github.com/HarrialX/security-risk-KG-reasoning 

Boosting Few-Shot Text Classification via Distribution Estimation

Mar 26, 2023
Han Liu, Feng Zhang, Xiaotong Zhang, Siyang Zhao, Fenglong Ma, Xiao-Ming Wu, Hongyang Chen, Hong Yu, Xianchao Zhang

Distribution estimation has been demonstrated as one of the most effective approaches to few-shot image classification, as low-level patterns and underlying representations transfer easily across tasks in the computer vision domain. However, directly applying this approach to few-shot text classification is challenging: leveraging the statistics of known classes with sufficient samples to calibrate the distributions of novel classes may cause negative effects due to serious category differences in the text domain. To alleviate this issue, we propose two simple yet effective strategies that estimate the distributions of the novel classes by utilizing unlabeled query samples, thus avoiding the potential negative-transfer issue. Specifically, we first assume that a class or sample follows a Gaussian distribution, and use the original support set and the nearest few query samples to estimate the corresponding mean and covariance. Then, we augment the labeled samples by sampling from the estimated distribution, which provides sufficient supervision for training the classification model. Extensive experiments on eight few-shot text classification datasets show that the proposed method significantly outperforms state-of-the-art baselines.
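The estimate-then-sample procedure described above can be sketched with one-dimensional features: pool the support set with the nearest query samples, fit a Gaussian, and draw synthetic labeled examples from it. The 1-D features, toy values, and pooling rule are simplifying assumptions; the paper works with high-dimensional embeddings and a covariance matrix.

```python
import random
import statistics

# Sketch of distribution estimation for a novel class: assume the
# class is Gaussian, estimate its parameters from the support set
# plus the nearest unlabeled query samples, then sample augmented
# training points from the estimate.

def estimate_class_distribution(support, nearest_queries):
    """Pool support examples with the nearest query samples and fit
    a Gaussian (mean, std) to the pooled 1-D features."""
    pooled = support + nearest_queries
    return statistics.mean(pooled), statistics.stdev(pooled)

def augment(support, nearest_queries, n_samples, seed=0):
    """Draw synthetic labeled samples from the estimated Gaussian."""
    mu, sigma = estimate_class_distribution(support, nearest_queries)
    rng = random.Random(seed)
    return [rng.gauss(mu, sigma) for _ in range(n_samples)]

support = [0.9, 1.1, 1.0]     # labeled few-shot examples of one class
queries = [1.05, 0.95]        # nearest unlabeled query samples
synthetic = augment(support, queries, n_samples=5)
print(len(synthetic))  # 5
```

Using the query samples enlarges the pool from which the Gaussian is estimated without borrowing statistics from unrelated known classes, which is exactly how the method sidesteps negative transfer.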

* Accepted to AAAI 2023 

Knowledge-Enhanced Semi-Supervised Federated Learning for Aggregating Heterogeneous Lightweight Clients in IoT

Mar 05, 2023
Jiaqi Wang, Shenglai Zeng, Zewei Long, Yaqing Wang, Houping Xiao, Fenglong Ma

Federated learning (FL) enables multiple clients to train models collaboratively without sharing local data and has achieved promising results in different areas, including the Internet of Things (IoT). However, end IoT devices lack the ability to annotate their collected data automatically, which leads to a label shortage on the client side. To collaboratively train an FL model, we can only use a small number of labeled data stored on the server. This is a new yet practical scenario in federated learning, i.e., labels-at-server semi-supervised federated learning (SemiFL). Although several SemiFL approaches have been proposed recently, none of them addresses the personalization issue in its model design. IoT environments make SemiFL more challenging, as we must consider device computational constraints and communication cost simultaneously. To tackle these new challenges together, we propose a novel SemiFL framework named pFedKnow. pFedKnow generates lightweight personalized client models via neural network pruning techniques to reduce communication cost. Moreover, it incorporates pretrained large models as prior knowledge to guide the aggregation of personalized client models and further enhance framework performance. Experiment results on both image and text datasets show that the proposed pFedKnow outperforms state-of-the-art baselines while considerably reducing communication cost. The source code of the proposed pFedKnow is available at https://github.com/JackqqWang/pfedknow/tree/master.
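The pruning step that makes client models lightweight can be illustrated with classic magnitude pruning: zero out the smallest weights so only a fraction needs to be communicated. This is a generic sketch of the idea, not pFedKnow's actual pruning scheme or its knowledge-guided aggregation.

```python
# Sketch of magnitude pruning for lightweight personalized client
# models: keep only the largest-magnitude weights and zero the rest,
# shrinking what each IoT client must store and transmit.

def magnitude_prune(weights, keep_ratio=0.5):
    """Keep the top keep_ratio fraction of weights by |value|,
    zeroing everything below the resulting magnitude threshold."""
    n_keep = max(1, int(len(weights) * keep_ratio))
    threshold = sorted((abs(w) for w in weights), reverse=True)[n_keep - 1]
    return [w if abs(w) >= threshold else 0.0 for w in weights]

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.2]
print(magnitude_prune(w, keep_ratio=0.5))  # [0.9, 0.0, 0.4, 0.0, -0.7, 0.0]
```

Zeroed weights compress well and need not be sent at all, which is where the communication savings come from; the framework then uses a pretrained large model to compensate for the capacity lost to pruning.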

* This paper is accepted by SDM 2023. Jiaqi Wang and Shenglai Zeng contributed equally

AutoML in The Wild: Obstacles, Workarounds, and Expectations

Feb 21, 2023
Yuan Sun, Qiurong Song, Xinning Gui, Fenglong Ma, Ting Wang

Automated machine learning (AutoML) is envisioned to make ML techniques accessible to ordinary users. Recent work has investigated the role of humans in enhancing AutoML functionality throughout a standard ML workflow. However, it is also critical to understand how users adopt existing AutoML solutions in complex, real-world settings from a holistic perspective. To fill this gap, this study conducted semi-structured interviews of AutoML users (N = 19) focusing on understanding (1) the limitations of AutoML encountered by users in their real-world practices, (2) the strategies users adopt to cope with such limitations, and (3) how the limitations and workarounds impact their use of AutoML. Our findings reveal that users actively exercise user agency to overcome three major challenges arising from customizability, transparency, and privacy. Furthermore, users make cautious decisions about whether and how to apply AutoML on a case-by-case basis. Finally, we derive design implications for developing future AutoML solutions.

* In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, April 23-28, 2023, Hamburg, Germany. ACM, New York, NY, USA, 15 Pages 

Forecasting User Interests Through Topic Tag Predictions in Online Health Communities

Nov 05, 2022
Amogh Subbakrishna Adishesha, Lily Jakielaszek, Fariha Azhar, Peixuan Zhang, Vasant Honavar, Fenglong Ma, Chandra Belani, Prasenjit Mitra, Sharon Xiaolei Huang

The increasing reliance of patients and caregivers on online communities for healthcare information has led to a rise in the spread of misinformation -- subjective, anecdotal, inaccurate, or non-specific recommendations that, if acted on, could cause serious harm to patients. Hence, there is an urgent need to connect users with accurate and tailored health information in a timely manner to prevent such harm. This paper proposes an innovative approach to suggesting reliable information to participants in online communities as they move through different stages of their disease or treatment. We hypothesize that patients with similar histories of disease progression or courses of treatment will have similar information needs at comparable stages. Specifically, we pose the problem of predicting topic tags or keywords that describe the future information needs of users, based on their profiles, the traces of their online interactions within the community (past posts and replies), and the profiles and interaction traces of other users similar to the target users. The result is a variant of collaborative information filtering, or a recommendation system, tailored to the needs of users of online health communities. We report results of experiments on an expert-curated dataset that demonstrate the superiority of the proposed approach over state-of-the-art baselines with respect to accurate and timely prediction of topic tags (and hence information sources of interest).
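The collaborative-filtering flavor of the approach can be sketched as tag prediction from similar users' histories: score each candidate tag by the similarity of the users who already used it. Jaccard similarity and the toy tag histories below are illustrative assumptions, not the paper's model.

```python
# Sketch of collaborative tag prediction: rank tags that similar
# users have adopted but the target user has not, weighting each
# tag by how similar its users are to the target.

def jaccard(a, b):
    """Set similarity between two users' tag histories."""
    return len(a & b) / len(a | b) if a | b else 0.0

def predict_tags(target_history, other_histories, top_k=2):
    """Score tags used by similar users but absent from the target's
    history, then return the top_k highest-scoring tags."""
    scores = {}
    for history in other_histories:
        sim = jaccard(target_history, history)
        for tag in history - target_history:
            scores[tag] = scores.get(tag, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

target = {"diagnosis", "chemo"}
others = [{"diagnosis", "chemo", "radiation"},
          {"chemo", "radiation", "side-effects"},
          {"nutrition"}]
print(predict_tags(target, others))  # ['radiation', 'side-effects']
```

The intuition matches the paper's hypothesis: a user whose history overlaps the target's is likely a step ahead on a similar trajectory, so their newer tags anticipate the target's future information needs.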

* Healthcare Informatics and NLP 