Abstract: The large-scale deployment of Solidity smart contracts on the Ethereum mainnet has increasingly attracted financially motivated attackers in recent years. A few now-infamous attacks in Ethereum's history include the DAO attack in 2016 ($50 million lost), the Parity Wallet hack in 2017 ($146 million locked), the Beautychain (BEC) token attack in 2018 (its $900 million market value fell to zero), and the NFT gaming blockchain breach in 2022 ($600 million in Ether stolen). This paper presents a comprehensive investigation of large language models (LLMs) and their capabilities in detecting OWASP Top Ten vulnerabilities in Solidity. We introduce a novel, class-balanced, structured, and labeled dataset named VulSmart, which we use to benchmark and compare the performance of open-source LLMs such as CodeLlama, Llama2, CodeT5, and Falcon, alongside closed-source models like GPT-3.5 Turbo and GPT-4o Mini. Our proposed SmartVD framework is rigorously tested against these models through extensive automated and manual evaluations, utilizing BLEU and ROUGE metrics to assess the effectiveness of vulnerability detection in smart contracts. We also explore three distinct prompting strategies (zero-shot, few-shot, and chain-of-thought) to evaluate the multi-class classification and generative capabilities of the SmartVD framework. Our findings reveal that SmartVD outperforms its open-source counterparts and even exceeds the performance of closed-source base models like GPT-3.5 and GPT-4o Mini. After fine-tuning, the closed-source models GPT-3.5 Turbo and GPT-4o Mini achieved remarkable performance with 99% accuracy in detecting vulnerabilities, 94% in identifying their types, and 98% in determining severity. Notably, SmartVD performs best with the chain-of-thought prompting technique, whereas the fine-tuned closed-source models excel with the zero-shot prompting approach.
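To make the prompting comparison concrete, the minimal Python sketch below shows how the three strategies mentioned above (zero-shot, few-shot, and chain-of-thought) might be assembled for a Solidity vulnerability query; the function name, prompt wording, and example snippet are illustrative assumptions, not the SmartVD implementation.

```python
# Minimal sketch (not the paper's SmartVD code): building the three prompting
# strategies compared in the abstract for a Solidity vulnerability-detection query.
# All prompt text and names here are illustrative assumptions.

ZERO_SHOT = (
    "You are a smart-contract auditor. Classify the following Solidity code:\n"
    "1) Is it vulnerable (yes/no)? 2) Vulnerability type? 3) Severity?\n\n{code}"
)

FEW_SHOT_EXAMPLE = (
    "Example:\n"
    "Code: function withdraw() public { msg.sender.call{value: balances[msg.sender]}(\"\"); "
    "balances[msg.sender] = 0; }\n"
    "Answer: vulnerable=yes, type=reentrancy, severity=high\n\n"
)

CHAIN_OF_THOUGHT_SUFFIX = (
    "\nReason step by step: first identify external calls and state updates, "
    "then check their ordering and access control, and only then give the final answer."
)

def build_prompt(code: str, strategy: str = "zero-shot") -> str:
    """Assemble a prompt for one of the three strategies."""
    prompt = ZERO_SHOT.format(code=code)
    if strategy == "few-shot":
        prompt = FEW_SHOT_EXAMPLE + prompt          # prepend a worked example
    elif strategy == "chain-of-thought":
        prompt = prompt + CHAIN_OF_THOUGHT_SUFFIX   # append step-by-step instructions
    return prompt

if __name__ == "__main__":
    snippet = "function pay(address to) public { to.transfer(address(this).balance); }"
    print(build_prompt(snippet, strategy="chain-of-thought"))
```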
Abstract: Traditional Collaborative Filtering (CF) based methods are applied to understand the personal preferences of users/customers for items or products from the rating matrix. The rating matrix is usually sparse, so improved variants of the CF method apply increasing amounts of side information to handle the sparsity problem. Most available recommendation work applies either only a linear kernel or only a non-linear kernel to learn user-item latent feature embeddings from data, but neither alone is sufficient to learn complex user-item features from users' side information. Recently, some researchers have focused on hybrid models that learn some features with non-linear kernels and other features with linear kernels. However, it is very difficult to determine which features can be learned accurately with linear kernels and which with non-linear kernels. To overcome this problem, we propose a novel deep fusion model named FusionDeepMF, whose novel contributions are (i) learning the user-item rating matrix and side information through linear and non-linear kernels simultaneously, and (ii) a tuning parameter that determines the trade-off between the dual embeddings generated from the linear and non-linear kernels. Extensive experiments on online review datasets establish that FusionDeepMF remarkably outperforms other baseline approaches. Empirical evidence also shows that FusionDeepMF achieves better performance than the linear kernel of Matrix Factorization (MF) and the non-linear kernel of the Multi-Layer Perceptron (MLP).
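As a rough illustration of the fusion idea, the PyTorch sketch below learns a linear (MF-style) embedding and a non-linear (MLP-style) embedding simultaneously and combines them with a trade-off parameter; the layer sizes, the fixed value of the trade-off parameter, and the way side information is concatenated are assumptions rather than the FusionDeepMF architecture.

```python
# Minimal PyTorch sketch of a dual-embedding fusion model: a linear (MF-like)
# branch and a non-linear (MLP) branch trained together, blended by alpha.
# Sizes and the handling of side information are assumptions, not the paper's design.
import torch
import torch.nn as nn

class FusionSketch(nn.Module):
    def __init__(self, n_users, n_items, side_dim, k=16, alpha=0.5):
        super().__init__()
        self.alpha = alpha  # trade-off between linear and non-linear embeddings
        # Linear branch: element-wise product of user/item latent factors.
        self.user_mf = nn.Embedding(n_users, k)
        self.item_mf = nn.Embedding(n_items, k)
        # Non-linear branch: MLP over embeddings concatenated with side information.
        self.user_mlp = nn.Embedding(n_users, k)
        self.item_mlp = nn.Embedding(n_items, k)
        self.mlp = nn.Sequential(
            nn.Linear(2 * k + side_dim, 32), nn.ReLU(),
            nn.Linear(32, k), nn.ReLU(),
        )
        self.out = nn.Linear(k, 1)

    def forward(self, u, i, side):
        linear = self.user_mf(u) * self.item_mf(i)                          # linear kernel
        nonlin = self.mlp(torch.cat([self.user_mlp(u), self.item_mlp(i), side], dim=-1))
        fused = self.alpha * linear + (1.0 - self.alpha) * nonlin           # dual-embedding fusion
        return self.out(fused).squeeze(-1)                                  # predicted rating

# Usage with toy indices and random side information:
model = FusionSketch(n_users=1000, n_items=500, side_dim=8)
rating = model(torch.tensor([3]), torch.tensor([7]), torch.randn(1, 8))
```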
Abstract: Recommender systems recommend items more accurately by analyzing users' potential interest in items of different brands. In conjunction with users' rating similarity, implicit feedback such as clicking on items, viewing item specifications, and watching videos has proved helpful for learning user embeddings, which in turn improves rating prediction. Most existing recommender systems focus on modeling ratings and implicit feedback while ignoring users' explicit feedback. Explicit feedback can be used to validate the reliability of particular users and to learn about their characteristics, i.e., what type of reviewers they are. In this paper, we explore three models that improve recommendation accuracy by focusing on users' explicit and implicit feedback. The first, RHC-PMF, predicts users' ratings more accurately based on three explicit feedback signals (rating, helpfulness score, and centrality); the second, RV-PMF, considers users' implicit feedback (view relationship); and the last, RHCV-PMF, considers both types of feedback. In this model, the similarity of users' explicit feedback indicates the similarity of their reliability and characteristics, while the similarity of their implicit feedback indicates the similarity of their preferences. Extensive experiments on a real-world dataset, the Amazon.com online review dataset, show that our models outperform baseline models in terms of rating prediction. The RHCV-PMF model also achieves better rating prediction than the baseline models for cold-start users and cold-start items.
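The NumPy sketch below illustrates the general flavor of such a PMF variant: rating reconstruction plus a regularizer that pulls a user's latent factors toward users with similar explicit feedback (reliability/characteristic) and similar implicit feedback (view relationships); the objective, weights, and update rule are assumptions, not the paper's RHCV-PMF formulation.

```python
# Minimal NumPy sketch of a PMF-style model with similarity-based regularization.
# S_explicit / S_implicit are assumed precomputed user-user similarity matrices;
# the specific regularizer and hyperparameters are illustrative assumptions.
import numpy as np

def rhcv_pmf_sketch(R, S_explicit, S_implicit, k=8, lr=0.01,
                    lam=0.05, beta=0.1, gamma=0.1, epochs=50, seed=0):
    """R: user-item rating matrix (0 = missing); returns user/item latent factors."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    U = 0.1 * rng.standard_normal((n_users, k))
    V = 0.1 * rng.standard_normal((n_items, k))
    mask = R > 0
    for _ in range(epochs):
        E = mask * (R - U @ V.T)                          # rating reconstruction error
        # Neighborhood targets from explicit and implicit similarity (row-normalized).
        W = beta * S_explicit + gamma * S_implicit
        W = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-8)
        U += lr * (E @ V - lam * U - (beta + gamma) * (U - W @ U))
        V += lr * (E.T @ U - lam * V)
    return U, V

# Usage with toy data: three users, three items, one viewing relationship.
R = np.array([[5, 0, 3], [4, 0, 0], [0, 2, 5]], dtype=float)
S_exp = np.array([[0, 0.8, 0.2], [0.8, 0, 0.1], [0.2, 0.1, 0]])
S_imp = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)
U, V = rhcv_pmf_sketch(R, S_exp, S_imp)
print(np.round(U @ V.T, 2))   # predicted ratings
```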