Peilin Zhou

LLMRec: Benchmarking Large Language Models on Recommendation Task

Aug 23, 2023
Junling Liu, Chao Liu, Peilin Zhou, Qichen Ye, Dading Chong, Kang Zhou, Yueqi Xie, Yuwei Cao, Shoujin Wang, Chenyu You, Philip S. Yu

Recently, the fast development of Large Language Models (LLMs) such as ChatGPT has significantly advanced NLP tasks by enhancing the capabilities of conversational models. However, the application of LLMs in the recommendation domain has not been thoroughly investigated. To bridge this gap, we propose LLMRec, an LLM-based recommender system designed for benchmarking LLMs on various recommendation tasks. Specifically, we benchmark several popular off-the-shelf LLMs, such as ChatGPT, LLaMA, and ChatGLM, on five recommendation tasks: rating prediction, sequential recommendation, direct recommendation, explanation generation, and review summarization. Furthermore, we investigate the effectiveness of supervised finetuning for improving LLMs' instruction-following ability. The benchmark results indicate that LLMs display only moderate proficiency on accuracy-based tasks such as sequential and direct recommendation, but achieve performance comparable to state-of-the-art methods on explainability-based tasks. We also conduct qualitative evaluations of the content generated by different models; the results show that LLMs can genuinely understand the provided information and generate clearer and more reasonable outputs. We hope this benchmark inspires researchers to delve deeper into the potential of LLMs for enhancing recommendation performance. Our code, processed data, and benchmark results are available at https://github.com/williamliujl/LLMRec.
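Framing recommendation as a language task amounts to serializing a user's interaction history into a natural-language prompt and asking the LLM for a prediction. A minimal sketch of such a prompt builder for the rating-prediction task is below; the template wording and function name are illustrative assumptions, not the exact prompts used by LLMRec.

```python
def build_rating_prompt(user_history, candidate_item, scale=(1, 5)):
    """Build a zero-shot rating-prediction prompt for a chat LLM.

    `user_history` is a list of (item_title, rating) pairs. The prompt
    format here is a hypothetical example, not LLMRec's template.
    """
    lines = [f'"{title}" was rated {rating}/{scale[1]}'
             for title, rating in user_history]
    history = "; ".join(lines)
    return (
        f"A user gave the following ratings: {history}. "
        f"On a scale of {scale[0]} to {scale[1]}, how would this user "
        f'rate "{candidate_item}"? Answer with a single number.'
    )
```

The returned string would then be sent to the model under evaluation, and the numeric answer parsed out for comparison against the ground-truth rating.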

Attention Calibration for Transformer-based Sequential Recommendation

Aug 18, 2023
Peilin Zhou, Qichen Ye, Yueqi Xie, Jingqi Gao, Shoujin Wang, Jae Boum Kim, Chenyu You, Sunghun Kim

Transformer-based sequential recommendation (SR) has been booming in recent years, with the self-attention mechanism as its key component. Self-attention is widely believed to effectively select the informative and relevant items from a sequence of interacted items for next-item prediction by learning larger attention weights for those items. However, this may not always hold in practice. Our empirical analysis of several representative Transformer-based SR models reveals that it is not uncommon for large attention weights to be assigned to less relevant items, which can result in inaccurate recommendations. Further in-depth analysis identifies two factors that may contribute to such inaccurate attention weights: sub-optimal position encoding and noisy input. To address this significant yet challenging gap in existing work, we propose a simple yet effective framework called Attention Calibration for Transformer-based Sequential Recommendation (AC-TSR). In AC-TSR, a novel spatial calibrator and an adversarial calibrator are designed to directly calibrate incorrectly assigned attention weights. The former explicitly captures the spatial relationships (i.e., order and distance) among items for more precise calculation of attention weights. The latter redistributes the attention weights based on each item's contribution to the next-item prediction. AC-TSR is readily adaptable and can be seamlessly integrated into various existing Transformer-based SR models. Extensive experimental results on four benchmark real-world datasets demonstrate the superiority of AC-TSR via significant recommendation performance enhancements. The source code is available at https://github.com/AIM-SE/AC-TSR.

* Accepted by CIKM 2023 
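The spatial calibrator's core idea, biasing attention toward items whose position is more relevant to the prediction target, can be illustrated with a toy stand-in. The fixed linear distance penalty below is an assumption for illustration; AC-TSR learns its spatial calibration rather than hard-coding a decay.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def spatially_calibrated_attention(scores, decay=0.2):
    """Re-weight raw attention logits with a distance-based bias.

    `scores[i]` is the raw logit for the i-th item in the interaction
    sequence; the last position is closest to the prediction target,
    so earlier items receive a larger penalty. This is a simplified
    sketch of injecting order/distance information into attention,
    not the paper's learned calibrator.
    """
    last = len(scores) - 1
    biased = [s - decay * (last - i) for i, s in enumerate(scores)]
    return softmax(biased)
```

With equal raw logits, the calibrated weights favor more recent items, which is the kind of spatial prior plain dot-product attention can fail to capture.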

Streamlining Social Media Information Retrieval for Public Health Research with Deep Learning

Jun 28, 2023
Yining Hua, Shixu Lin, Minghui Li, Yujie Zhang, Peilin Zhou, Ying-Chih Lo, Li Zhou, Jie Yang

The utilization of social media in epidemic surveillance is well established. Nonetheless, bias is often introduced when pre-defined lexicons are used to retrieve a relevant corpus. This study introduces a framework for curating extensive dictionaries of medical colloquialisms and Unified Medical Language System (UMLS) concepts. The framework comprises three modules: a BERT-based Named Entity Recognition (NER) model that identifies medical entities in social media content, a deep learning-powered normalization module that standardizes the extracted entities, and a semi-supervised clustering module that assigns the most probable UMLS concept to each standardized entity. We applied this framework to COVID-19-related tweets from February 1, 2020, to April 30, 2022, generating a symptom dictionary (available at https://github.com/ningkko/UMLS_colloquialism/) composed of 9,249 standardized entities mapped to 876 UMLS concepts and 38,175 colloquial expressions. This framework demonstrates encouraging potential in addressing the constraints of keyword-matching information retrieval in social media-based public health research.

* Accepted to ICHI 2023 (The 11th IEEE International Conference on Healthcare Informatics) as a poster presentation 
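The three-stage structure (extract, normalize, assign a concept) can be sketched with toy lexicons standing in for the learned components. The terms and CUIs below are illustrative examples only; the real system uses a BERT NER model, a learned normalizer, and semi-supervised clustering rather than lookup tables.

```python
# Stage 1 stand-in: vocabulary the "recognizer" can spot in text.
COLLOQUIAL_TERMS = {"brain fog", "loss of smell"}

# Stage 2 stand-in: colloquial surface form -> standardized entity.
NORMALIZE = {"loss of smell": "anosmia",
             "brain fog": "mental fog"}

# Stage 3 stand-in: standardized entity -> UMLS concept identifier
# (CUIs shown are illustrative placeholders).
UMLS_CONCEPT = {"anosmia": "C0003126",
                "mental fog": "C0541945"}

def extract_concepts(text):
    """Run the toy extract -> normalize -> map pipeline over a tweet."""
    text = text.lower()
    found = [t for t in COLLOQUIAL_TERMS if t in text]
    normalized = [NORMALIZE[t] for t in found]
    return {n: UMLS_CONCEPT[n] for n in normalized}
```

The value of the framework is precisely that the first two stages are learned models rather than fixed dictionaries, so novel colloquialisms can still be captured and mapped.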

Benchmarking Large Language Models on CMExam -- A Comprehensive Chinese Medical Exam Dataset

Jun 08, 2023
Junling Liu, Peilin Zhou, Yining Hua, Dading Chong, Zhongyu Tian, Andrew Liu, Helin Wang, Chenyu You, Zhenhua Guo, Lei Zhu, Michael Lingzhi Li

Recent advancements in large language models (LLMs) have transformed the field of question answering (QA). However, evaluating LLMs in the medical field is challenging due to the lack of standardized and comprehensive datasets. To address this gap, we introduce CMExam, sourced from the Chinese National Medical Licensing Examination. CMExam consists of 60K+ multiple-choice questions for standardized and objective evaluation, as well as solution explanations for open-ended evaluation of model reasoning. For in-depth analyses of LLMs, we invited medical professionals to label five additional question-wise annotations: disease groups, clinical departments, medical disciplines, areas of competency, and question difficulty levels. Alongside the dataset, we conducted thorough experiments with representative LLMs and QA algorithms on CMExam. The results show that GPT-4 achieved the best accuracy, 61.6%, and a weighted F1 score of 0.617, a substantial gap relative to human accuracy, which stood at 71.6%. For explanation tasks, while LLMs could generate relevant reasoning and demonstrated improved performance after finetuning, they still fell short of a desired standard, indicating ample room for improvement. To the best of our knowledge, CMExam is the first Chinese medical exam dataset to provide comprehensive medical annotations. The experiments and findings of our LLM evaluation also provide valuable insights into the challenges and potential solutions in developing Chinese medical QA systems and LLM evaluation pipelines. The dataset and relevant code are available at https://github.com/williamliujl/CMExam.

Rethinking Multi-Interest Learning for Candidate Matching in Recommender Systems

Feb 28, 2023
Yueqi Xie, Jingqi Gao, Peilin Zhou, Qichen Ye, Yining Hua, Jaeboum Kim, Fangzhao Wu, Sunghun Kim

Existing research efforts for multi-interest candidate matching in recommender systems mainly focus on improving model architecture or incorporating additional information, neglecting the importance of training schemes. This work revisits the training framework and uncovers two major problems hindering the expressiveness of learned multi-interest representations. First, the current training objective (i.e., uniformly sampled softmax) fails to effectively train discriminative representations in a multi-interest learning scenario due to the severe increase in easy negative samples. Second, a routing collapse problem is observed where each learned interest may collapse to express information only from a single item, resulting in information loss. To address these issues, we propose the REMI framework, consisting of an Interest-aware Hard Negative mining strategy (IHN) and a Routing Regularization (RR) method. IHN emphasizes interest-aware hard negatives by proposing an ideal sampling distribution and developing a Monte-Carlo strategy for efficient approximation. RR prevents routing collapse by introducing a novel regularization term on the item-to-interest routing matrices. These two components enhance the learned multi-interest representations from both the optimization objective and the composition information. REMI is a general framework that can be readily applied to various existing multi-interest candidate matching methods. Experiments on three real-world datasets show our method can significantly improve state-of-the-art methods with easy implementation and negligible computational overhead. The source code will be released.
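The intuition behind interest-aware hard negative mining is that uniformly sampled negatives score low against any single interest vector, so they contribute little gradient; scoring a sampled pool against the interest embedding and keeping the highest-scoring items yields harder negatives. The top-k selection below is a simplified stand-in for REMI's Monte-Carlo approximation of its ideal sampling distribution, and all names are illustrative.

```python
import random

def dot(u, v):
    """Inner product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

def interest_aware_hard_negatives(interest_vec, candidate_pool, item_embs,
                                  n_sample=100, n_hard=5, seed=0):
    """Pick negatives that score highly against one interest vector.

    Draws a random pool of candidate item ids, scores each against the
    interest embedding, and keeps the top `n_hard` as hard negatives.
    """
    rng = random.Random(seed)
    pool = rng.sample(candidate_pool, min(n_sample, len(candidate_pool)))
    scored = sorted(pool, key=lambda i: dot(interest_vec, item_embs[i]),
                    reverse=True)
    return scored[:n_hard]
```

In a real training loop, the selected negatives would enter the softmax objective in place of (or alongside) uniform samples, sharpening the contrast between interests.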

Equivariant Contrastive Learning for Sequential Recommendation

Nov 18, 2022
Peilin Zhou, Jingqi Gao, Yueqi Xie, Qichen Ye, Yining Hua, Sunghun Kim

Contrastive learning (CL) benefits the training of sequential recommendation models with informative self-supervision signals. Existing solutions apply general sequential data augmentation strategies to generate positive pairs and encourage their representations to be invariant. However, due to the inherent properties of user behavior sequences, some augmentation strategies, such as item substitution, can change the user intent. Learning indiscriminately invariant representations for all augmentation strategies can therefore be sub-optimal. We propose Equivariant Contrastive Learning for Sequential Recommendation (ECL-SR), which endows SR models with greater discriminative power, making the learned user behavior representations sensitive to invasive augmentations (e.g., item substitution) and insensitive to mild augmentations (e.g., feature-level dropout masking). In detail, we use a conditional discriminator to capture differences in behavior caused by item substitution, which encourages the user behavior encoder to be equivariant to invasive augmentations. Comprehensive experiments on four benchmark datasets show that the proposed ECL-SR framework achieves competitive performance compared to state-of-the-art SR models. The source code will be released.

* 12 pages, 6 figures 
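The invasive/mild distinction the paper draws can be made concrete with two toy augmentations: substituting item ids alters which items the user interacted with (and thus possibly the intent), while dropping embedding dimensions perturbs only the representation. These are minimal sketches under assumed interfaces, not ECL-SR's exact augmentation operators.

```python
import random

def item_substitution(seq, n_items, rate=0.2, seed=0):
    """Invasive augmentation: replace a fraction of items in an id
    sequence with random item ids, which may change user intent."""
    rng = random.Random(seed)
    return [rng.randrange(n_items) if rng.random() < rate else item
            for item in seq]

def feature_dropout(embedding, rate=0.2, seed=0):
    """Mild augmentation: zero out a fraction of embedding dimensions,
    leaving the item sequence itself (and the intent) unchanged."""
    rng = random.Random(seed)
    return [0.0 if rng.random() < rate else x for x in embedding]
```

Under ECL-SR's objective, representations of feature-dropout views should stay close (invariance), while item-substitution views should remain distinguishable by the discriminator (equivariance).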

GreenPLM: Cross-lingual pre-trained language models conversion with (almost) no cost

Nov 13, 2022
Qingcheng Zeng, Lucas Garay, Peilin Zhou, Dading Chong, Yining Hua, Jiageng Wu, Yikang Pan, Han Zhou, Jie Yang

While large pre-trained models have transformed the field of natural language processing (NLP), the high training cost and low cross-lingual availability of such models prevent the new advances from being equally shared by users across all languages, especially the less spoken ones. To promote equal opportunities for all language speakers in NLP research and to reduce energy consumption for sustainability, this study proposes GreenPLM, an effective and energy-efficient framework that uses bilingual lexicons to directly translate language models from one language into others at (almost) no additional cost. We validate this approach on 18 languages and show that the framework is comparable to, if not better than, other heuristics trained at high cost. In addition, when given a low computational budget (2.5%), the framework outperforms the original monolingual language models in six out of seven tested languages. This approach can be easily implemented, and we will soon release language models in 50 languages translated from English.
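The lexicon-translation idea can be sketched as a training-free remapping of the embedding table: each target-language word inherits the vector of its source-language translation, so the rest of the pre-trained model can be reused unchanged. This is a minimal illustration of that idea under assumed dictionary-shaped inputs, not GreenPLM's full procedure.

```python
def translate_embeddings(src_embs, lexicon, default=None):
    """Build target-language word embeddings from a source model via a
    bilingual lexicon, with no training.

    `src_embs` maps source words to vectors; `lexicon` maps target
    words to their source-language translations. Target words whose
    translation is missing from the source vocabulary are skipped
    unless a `default` vector is supplied.
    """
    tgt_embs = {}
    for tgt_word, src_word in lexicon.items():
        if src_word in src_embs:
            tgt_embs[tgt_word] = src_embs[src_word]
        elif default is not None:
            tgt_embs[tgt_word] = default
    return tgt_embs
```

Because no gradient updates are involved, the cost of producing a target-language model this way is essentially the cost of the dictionary lookup, which is what makes the approach (almost) free.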

METS-CoV: A Dataset of Medical Entity and Targeted Sentiment on COVID-19 Related Tweets

Sep 28, 2022
Peilin Zhou, Zeqiang Wang, Dading Chong, Zhijiang Guo, Yining Hua, Zichang Su, Zhiyang Teng, Jiageng Wu, Jie Yang

The COVID-19 pandemic continues to bring up various topics discussed or debated on social media. In order to explore the impact of pandemics on people's lives, it is crucial to understand the public's concerns and attitudes towards pandemic-related entities (e.g., drugs, vaccines) on social media. However, models trained on existing named entity recognition (NER) or targeted sentiment analysis (TSA) datasets have limited ability to understand COVID-19-related social media texts because these datasets are not designed or annotated from a medical perspective. This paper releases METS-CoV, a dataset containing medical entities and targeted sentiments from COVID-19-related tweets. METS-CoV contains 10,000 tweets with 7 types of entities, including 4 medical entity types (Disease, Drug, Symptom, and Vaccine) and 3 general entity types (Person, Location, and Organization). To further investigate tweet users' attitudes toward specific entities, 4 types of entities (Person, Organization, Drug, and Vaccine) are selected and annotated with user sentiments, resulting in a targeted sentiment dataset with 9,101 entities (in 5,278 tweets). To the best of our knowledge, METS-CoV is the first dataset to collect medical entities and corresponding sentiments from COVID-19-related tweets. We benchmark the performance of classical machine learning models and state-of-the-art deep learning models on the NER and TSA tasks with extensive experiments. Results show there is vast room for improvement on both NER and TSA tasks. METS-CoV is an important resource for developing better medical social media tools and facilitating computational social science research, especially in epidemiology. Our data, annotation guidelines, benchmark models, and source code are publicly available (https://github.com/YLab-Open/METS-CoV) to ensure reproducibility.

* 10 pages, 6 figures, 6 tables, accepted by NeurIPS 2022 Datasets and Benchmarks track 

Low-resource Accent Classification in Geographically-proximate Settings: A Forensic and Sociophonetics Perspective

Jun 29, 2022
Qingcheng Zeng, Dading Chong, Peilin Zhou, Jie Yang

Accented speech recognition and accent classification are relatively under-explored areas in speech technology. Recently, deep learning-based methods and Transformer-based pretrained models have achieved superb performance in both areas. However, most accent classification work has focused on distinguishing different kinds of English accents, and little attention has been paid to geographically-proximate accent classification, especially under the low-resource settings that forensic speech science tasks usually encounter. In this paper, we explore three main accent modelling methods combined with two different classifiers, based on 105 speaker recordings retrieved from five urban varieties in Northern England. Although speech representations generated by pretrained models generally perform better in downstream classification, traditional methods like Mel Frequency Cepstral Coefficients (MFCCs) and formant measurements have specific strengths. These results suggest that in forensic phonetics scenarios where data are relatively scarce, a simple modelling method and classifier can be competitive with state-of-the-art pretrained speech models used as feature extractors, enabling faster estimation of accent information in practice. Our findings also cross-validate a new methodology for quantifying sociophonetic change.

* INTERSPEECH 2022 
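To illustrate how simple a competitive low-resource baseline can be, the sketch below classifies an utterance by distance to per-accent centroids of acoustic feature vectors (e.g., mean MFCCs or formant measurements). This nearest-centroid setup is an assumed toy baseline in the spirit of the paper's simple classifiers, not its reported pipeline, and feature extraction is left out of scope.

```python
def centroid(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[d] for v in vectors) / n for d in range(len(vectors[0]))]

def nearest_centroid_classify(train, test_vec):
    """Assign an accent label by squared distance to class centroids.

    `train` maps accent label -> list of per-speaker feature vectors
    (for instance, averaged MFCCs or formant measurements).
    """
    def dist2(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    centroids = {label: centroid(vecs) for label, vecs in train.items()}
    return min(centroids, key=lambda label: dist2(centroids[label], test_vec))
```

With only 105 speakers across five varieties, a closed-form classifier like this trains instantly, which is the "sooner estimation" advantage the paper highlights for forensic practice.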