Suranga Seneviratne

ExCeL: Combined Extreme and Collective Logit Information for Enhancing Out-of-Distribution Detection

Nov 23, 2023
Naveen Karunanayake, Suranga Seneviratne, Sanjay Chawla

Deep learning models often exhibit overconfidence in predicting out-of-distribution (OOD) data, underscoring the crucial role of OOD detection in ensuring reliable predictions. Among various OOD detection approaches, post-hoc detectors have gained significant popularity, primarily due to their ease of use and implementation. However, the effectiveness of most post-hoc OOD detectors has been constrained because they rely either solely on extreme information, such as the maximum logit, or solely on the collective information (i.e., information spanned across classes or training samples) embedded within the output layer. In this paper, we propose ExCeL, which combines both extreme and collective information within the output layer for enhanced accuracy in OOD detection. We leverage the logit of the top predicted class as the extreme information (i.e., the maximum logit), while the collective information is derived through a novel approach that assesses the likelihood of other classes appearing in subsequent ranks across various training samples. Our idea is motivated by the observation that, for in-distribution (ID) data, the ranking of classes beyond the predicted class is more deterministic than it is for OOD data. Experiments conducted on the CIFAR100 and ImageNet-200 datasets demonstrate that ExCeL is consistently among the top five performing methods out of twenty-one existing post-hoc baselines when joint performance on near-OOD and far-OOD is considered (i.e., in terms of AUROC and FPR95). Furthermore, ExCeL shows the best overall performance across both datasets, unlike other baselines that work best on one dataset but suffer a performance drop on the other.
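The scoring idea in the abstract can be pictured with a toy sketch. This is not the authors' implementation: the function names, the `rank_prob` lookup table (empirical probability that a class appears at a given rank, estimated from training data), and the `alpha` weighting are hypothetical stand-ins for how extreme and collective logit information might be combined.

```python
import math

def rank_order(logits):
    """Indices of classes sorted by descending logit."""
    return sorted(range(len(logits)), key=lambda c: -logits[c])

def excel_style_score(logits, rank_prob, alpha=0.5):
    """Toy OOD score combining extreme and collective logit information.

    rank_prob: maps (rank, class) -> empirical probability that the class
    appears at that rank among training samples. Higher score suggests
    the sample is more in-distribution.
    """
    order = rank_order(logits)
    extreme = max(logits)                       # extreme info: the max logit
    collective = 0.0                            # collective info: how likely
    for r, c in enumerate(order[1:], start=1):  # the observed ranking beyond
        p = rank_prob.get((r, c), 1e-6)         # the top class is for ID data
        collective += math.log(p)
    return alpha * extreme + (1 - alpha) * collective
```

With a rank table where the "usual" ID ordering is likely, a sample whose lower-ranked classes follow that ordering scores higher than one with the same maximum logit but an unusual ordering, which matches the observation motivating the paper.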

Non-Contrastive Learning-based Behavioural Biometrics for Smart IoT Devices

Oct 24, 2022
Oshan Jayawardana, Fariza Rashid, Suranga Seneviratne

Behavioural biometrics are being explored as a viable alternative to overcome the limitations of traditional authentication methods such as passwords and static biometrics. They are also being considered as a viable authentication method for IoT devices such as smart headsets with AR/VR capabilities, wearables, and earables that do not have a large form factor or the ability to seamlessly interact with the user. Recent behavioural biometric solutions use deep learning models that require large amounts of annotated training data. Collecting such volumes of behavioural biometric data raises privacy and usability concerns. To this end, we propose using SimSiam-based non-contrastive self-supervised learning to improve the label efficiency of behavioural biometric systems. The key idea is to use large volumes of unlabelled (and anonymised) data to build good feature extractors that can subsequently be used in supervised settings. Using two EEG datasets, we show that at lower amounts of labelled data, non-contrastive learning performs 4%-11% better than conventional methods such as supervised learning and data augmentation. We also show that, in general, self-supervised learning methods perform better than other baselines. Finally, through careful experimentation, we show various modifications that can be incorporated into the non-contrastive learning process to achieve high performance.
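For context, SimSiam's training objective is a symmetrised negative cosine similarity between the predictor output of one augmented view and the projection of the other, with a stop-gradient on the projection. The sketch below computes only the loss value in plain Python; in a real implementation the stop-gradient is handled by the autograd framework, and all names here are illustrative.

```python
import math

def _norm(v):
    """Euclidean norm, guarded against the zero vector."""
    return math.sqrt(sum(x * x for x in v)) or 1.0

def neg_cosine(p, z):
    """Negative cosine similarity between predictor output p and
    projection z (z is the stop-gradient branch in SimSiam)."""
    return -sum(a * b for a, b in zip(p, z)) / (_norm(p) * _norm(z))

def simsiam_loss(p1, z2, p2, z1):
    """Symmetrised SimSiam loss over two augmented views of one input."""
    return 0.5 * neg_cosine(p1, z2) + 0.5 * neg_cosine(p2, z1)
```

When both predictor outputs align perfectly with the opposite projections, the loss reaches its minimum of -1, which is the collapse-free optimum SimSiam trains towards.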

Privacy-Preserving Spam Filtering using Functional Encryption

Dec 08, 2020
Sicong Wang, Naveen Karunanayake, Tham Nguyen, Suranga Seneviratne

Traditional spam classification requires end-users to reveal the content of their received emails to the spam classifier, which violates their privacy. Spam classification over encrypted emails enables the classifier to classify spam emails without accessing the email content, hence protecting the privacy of the email. In this paper, we construct a spam classification framework that enables the classification of encrypted emails. Our classification model is based on a neural network with a quadratic network part and a multi-layer perceptron network part. The quadratic network architecture is compatible with the operations of an existing quadratic functional encryption scheme, enabling our classifier to predict the label of encrypted emails without revealing the associated plain-text email. Evaluation results on real-world spam datasets indicate that our proposed spam classification model achieves an accuracy of over 96%.
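The quadratic network part corresponds to functions of the form x^T Q x, which is exactly the class of functions quadratic functional encryption schemes can evaluate directly over ciphertexts. A plaintext toy analogue follows; the function names and the callable standing in for the MLP part are hypothetical, and no encryption is performed here.

```python
def quadratic_layer(x, Q):
    """Plaintext analogue of the quadratic network part: computes
    x^T Q x, the function family a quadratic functional encryption
    scheme can evaluate on an encrypted x."""
    n = len(x)
    return sum(x[i] * Q[i][j] * x[j] for i in range(n) for j in range(n))

def classify(x, Q, mlp):
    """Toy pipeline: the FE-compatible quadratic part runs first,
    then a scoring function stands in for the MLP network part,
    which in the paper operates on the decrypted quadratic output."""
    return mlp(quadratic_layer(x, Q))
```

In the encrypted setting the client would send Enc(x), and the server would learn only the value of x^T Q x (via a functional decryption key for Q), never x itself.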

A Multi-modal Neural Embeddings Approach for Detecting Mobile Counterfeit Apps: A Case Study on Google Play Store

Jun 02, 2020
Naveen Karunanayake, Jathushan Rajasegaran, Ashanie Gunathillake, Suranga Seneviratne, Guillaume Jourjon

Counterfeit apps impersonate existing popular apps in attempts to misguide users into installing them for various reasons, such as collecting personal information or spreading malware. Many counterfeits can be identified once installed; however, even a tech-savvy user may struggle to detect them before installation. To this end, this paper proposes to leverage recent advances in deep learning methods to create image and text embeddings so that counterfeit apps can be efficiently identified when they are submitted for publication. We show that a novel approach of combining content embeddings and style embeddings outperforms baseline image-similarity methods such as SIFT, SURF, and various image hashing methods. We first evaluate the performance of the proposed method on two well-known datasets for evaluating image similarity methods and show that content, style, and combined embeddings increase precision@k and recall@k by 10%-15% and 12%-25%, respectively, when retrieving five nearest neighbours. Second, specifically for the app counterfeit detection problem, combined content and style embeddings achieve a 12% and 14% increase in precision@k and recall@k, respectively, compared to the baseline methods. Third, we present an analysis of approximately 1.2 million apps from Google Play Store and identify a set of potential counterfeits for the top-10,000 popular apps. Under a conservative assumption, we were able to find 2,040 potential counterfeits that contain malware in a set of 49,608 apps that showed high similarity to one of the top-10,000 popular apps in Google Play Store. We also find 1,565 potential counterfeits requesting at least five more dangerous permissions than the original app and 1,407 potential counterfeits having at least five extra third-party advertisement libraries.
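For reference, the precision@k and recall@k retrieval metrics quoted above are standard and can be computed as follows (a minimal sketch with illustrative names):

```python
def precision_at_k(retrieved, relevant, k):
    """Fraction of the top-k retrieved items that are relevant."""
    top_k = retrieved[:k]
    return sum(1 for item in top_k if item in relevant) / k

def recall_at_k(retrieved, relevant, k):
    """Fraction of all relevant items found among the top-k retrieved."""
    top_k = retrieved[:k]
    return sum(1 for item in top_k if item in relevant) / len(relevant)
```

In the paper's setting, `retrieved` would be the k nearest neighbours of a query app icon in embedding space, and `relevant` the set of known true matches for that query.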

* arXiv admin note: substantial text overlap with arXiv:1804.09882 
A Review of Computer Vision Methods in Network Security

May 07, 2020
Jiawei Zhao, Rahat Masood, Suranga Seneviratne

Network security has become an area of significant importance more than ever, as highlighted by the eye-opening numbers of data breaches, attacks on critical infrastructure, and malware/ransomware/cryptojacker attacks reported almost every day. We increasingly rely on networked infrastructure, and with the advent of IoT, billions of devices will be connected to the internet, providing attackers with more opportunities to exploit. Traditional machine learning methods have been frequently used in the context of network security. However, such methods are largely based on statistical features extracted from sources such as binaries, emails, and packet flows. On the other hand, recent years have witnessed phenomenal growth in computer vision, mainly driven by advances in convolutional neural networks. At a glance, it is not trivial to see how computer vision methods relate to network security. Nonetheless, a significant amount of work has highlighted how methods from computer vision can be applied in network security for detecting attacks or building security solutions. In this paper, we provide a comprehensive survey of such work under three topics: i) phishing attempt detection, ii) malware detection, and iii) traffic anomaly detection. Next, we review a set of commercial products for which public information is available and explore how computer vision methods are effectively used in those products. Finally, we discuss existing research gaps and future research directions, especially focusing on how the network security research community and industry can leverage the exponential growth of computer vision methods to build much more secure networked systems.

TimeCaps: Learning From Time Series Data with Capsule Networks

Jan 11, 2020
Hirunima Jayasekara, Vinoj Jayasundara, Jathushan Rajasegaran, Sandaru Jayasekara, Suranga Seneviratne, Ranga Rodrigo

Capsule networks excel at understanding spatial relationships in 2D data for vision-related tasks. Even though they are not designed to capture 1D temporal relationships, with TimeCaps we demonstrate that, given the ability, capsule networks also excel at understanding temporal relationships. To this end, we generate capsules along the temporal and channel dimensions, creating two temporal feature detectors that learn contrasting relationships. TimeCaps surpasses the state-of-the-art results by achieving 96.21% accuracy on identifying 13 Electrocardiogram (ECG) signal beat categories, while achieving on-par results on identifying 30 classes of short audio commands. Further, the instantiation parameters inherently learnt by the capsule networks allow us to completely parameterize 1D signals, which opens up various possibilities in signal processing.
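One way to picture "capsules along the temporal dimension" is grouping consecutive positions of a 1D sequence into capsule vectors. The sketch below is a heavy simplification with hypothetical names: in TimeCaps the capsules are formed from learned convolutional feature maps, not raw samples, and the channel-dimension capsules would instead group across feature channels at each time step.

```python
def make_temporal_capsules(signal, caps_dim):
    """Group a 1D sequence into capsules along the temporal dimension:
    each consecutive window of caps_dim values becomes one capsule
    vector (trailing values that do not fill a window are dropped)."""
    n = len(signal) // caps_dim
    return [signal[i * caps_dim:(i + 1) * caps_dim] for i in range(n)]
```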

Characterizing and Detecting Money Laundering Activities on the Bitcoin Network

Dec 27, 2019
Yining Hu, Suranga Seneviratne, Kanchana Thilakarathna, Kensuke Fukuda, Aruna Seneviratne

Bitcoin is by far the most popular cryptocurrency solution enabling peer-to-peer payments. Despite some studies highlighting that the network does not provide full anonymity, it is still heavily used for a wide variety of dubious financial activities such as money laundering, Ponzi schemes, and ransomware payments. In this paper, we explore the landscape of potential money laundering activities occurring across the Bitcoin network. Using data collected over three years, we create transaction graphs and provide an in-depth analysis of various graph characteristics to differentiate money laundering transactions from regular transactions. We found that the main difference between laundering and regular transactions lies in their output values and neighbourhood information. We then propose and evaluate a set of classifiers based on four types of graph features: immediate neighbours, curated features, deepwalk embeddings, and node2vec embeddings, to classify money laundering and regular transactions. Results show that the node2vec-based classifier outperforms the other classifiers in binary classification, reaching an average accuracy of 92.29%, an F1-measure of 0.93, and high robustness over a 2.5-year time span. Finally, we demonstrate how effective our classifiers are at discovering unknown laundering services. Although classifier performance drops compared to binary classification, predictions can be improved with simple ensemble techniques for some services.
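The simplest of the four feature families, immediate-neighbour features, can be illustrated on a toy transaction graph. The edge-dictionary representation and the feature names below are hypothetical simplifications of the kind of local degree and value statistics such a classifier would consume.

```python
def neighbour_features(graph, node):
    """Immediate-neighbour features for one transaction node.

    graph: dict mapping (src, dst) edges to the value transferred.
    Returns local statistics: in/out degree and total in/out value.
    """
    in_edges = [v for (s, d), v in graph.items() if d == node]
    out_edges = [v for (s, d), v in graph.items() if s == node]
    return {
        "in_degree": len(in_edges),
        "out_degree": len(out_edges),
        "in_value": sum(in_edges),
        "out_value": sum(out_edges),
    }
```

The paper's finding that laundering transactions differ mainly in output values and neighbourhood information is exactly what features of this shape (and their embedding-based counterparts) aim to capture.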

DeepCaps: Going Deeper with Capsule Networks

Apr 21, 2019
Jathushan Rajasegaran, Vinoj Jayasundara, Sandaru Jayasekara, Hirunima Jayasekara, Suranga Seneviratne, Ranga Rodrigo

Capsule networks are a promising concept in deep learning, yet their true potential has not been fully realized thus far, providing sub-par performance on several key benchmark datasets with complex data. Drawing intuition from the success achieved by Convolutional Neural Networks (CNNs) by going deeper, we introduce DeepCaps, a deep capsule network architecture that uses a novel 3D-convolution-based dynamic routing algorithm. With DeepCaps, we surpass the state-of-the-art results in the capsule network domain on CIFAR10, SVHN, and Fashion MNIST, while achieving a 68% reduction in the number of parameters. Further, we propose a class-independent decoder network, which strengthens the use of reconstruction loss as a regularization term. This leads to an interesting property of the decoder, which allows us to identify and control the physical attributes of the images represented by the instantiation parameters.
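Dynamic routing variants, including the 3D-convolution-based routing in DeepCaps, share the capsule squashing non-linearity: it rescales a capsule's vector so its length lies in [0, 1) (interpretable as an existence probability) while preserving its orientation. A minimal sketch:

```python
import math

def squash(s, eps=1e-9):
    """Capsule squashing non-linearity:
    v = (||s||^2 / (1 + ||s||^2)) * (s / ||s||).
    Short vectors shrink towards zero; long vectors approach unit length."""
    sq_norm = sum(x * x for x in s)
    norm = math.sqrt(sq_norm)
    scale = sq_norm / ((1.0 + sq_norm) * (norm + eps))
    return [scale * x for x in s]
```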

TextCaps: Handwritten Character Recognition with Very Small Datasets

Apr 17, 2019
Vinoj Jayasundara, Sandaru Jayasekara, Hirunima Jayasekara, Jathushan Rajasegaran, Suranga Seneviratne, Ranga Rodrigo

Many localized languages struggle to reap the benefits of recent advancements in character recognition systems due to the lack of substantial amounts of labeled training data. This is due to the difficulty of generating large amounts of labeled data for such languages and the inability of deep learning techniques to properly learn from a small number of training samples. We solve this problem by introducing a technique for generating new training samples from existing samples, with realistic augmentations that reflect actual variations present in human handwriting, by adding random controlled noise to their corresponding instantiation parameters. Our results with a mere 200 training samples per class surpass existing character recognition results on the EMNIST-letter dataset while matching existing results on the three datasets EMNIST-balanced, EMNIST-digits, and MNIST. We also develop a strategy to effectively use a combination of loss functions to improve reconstructions. Our system is useful for character recognition in localized languages that lack much labeled training data, and even in other related, more general contexts such as object recognition.
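The augmentation idea, perturbing a sample's instantiation parameters with random controlled noise, can be sketched as below. The function and parameter names are hypothetical; in TextCaps the perturbed vector would then be passed through the trained decoder to render a realistic new training image.

```python
import random

def perturb_instantiation(params, noise_scale=0.1, seed=None):
    """Create a new sample's instantiation parameters by adding
    bounded uniform noise to an existing capsule vector; decoding
    the perturbed vector yields a plausible handwriting variation."""
    rng = random.Random(seed)
    return [p + rng.uniform(-noise_scale, noise_scale) for p in params]
```

Because each parameter tends to control an interpretable attribute (stroke thickness, slant, and so on), small bounded noise produces variations that look like natural handwriting rather than arbitrary pixel corruption.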

* Jayasundara, Vinoj, et al., 2019, January. TextCaps: Handwritten Character Recognition With Very Small Datasets. In 2019 IEEE Winter Conference on Applications of Computer Vision (WACV) (pp. 254-262). IEEE  
A Neural Embeddings Approach for Detecting Mobile Counterfeit Apps

Apr 26, 2018
Jathushan Rajasegaran, Suranga Seneviratne, Guillaume Jourjon

Counterfeit apps impersonate existing popular apps in attempts to misguide users into installing them for various reasons, such as collecting personal information, spreading malware, or simply to increase advertisement revenue. Many counterfeits can be identified once installed; however, even a tech-savvy user may struggle to detect them before installation, as app icons and descriptions can be quite similar to those of the original app. To this end, this paper proposes to use neural embeddings generated by state-of-the-art convolutional neural networks (CNNs) to measure the similarity between images. Our results show that for the problem of counterfeit detection, a novel approach of using style embeddings given by the Gram matrix of CNN filter responses outperforms baseline methods such as content embeddings and SIFT features. We show that further performance increases can be achieved by combining style embeddings with content embeddings. We present an analysis of approximately 1.2 million apps from Google Play Store and identify a set of potential counterfeits for the top-1,000 apps. Under a conservative assumption, we were able to find 139 apps that contain malware in a set of 6,880 apps that showed high visual similarity to one of the top-1,000 apps in Google Play Store.
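A Gram-matrix style embedding is simply the matrix of inner products between a layer's flattened filter response maps; because it discards spatial arrangement, it captures texture and style rather than content. A minimal sketch (a real pipeline would take `responses` from a CNN layer's activations):

```python
def gram_matrix(responses):
    """Style embedding: Gram matrix of filter responses.

    responses: list of F flattened feature maps (each of length H*W).
    Entry (i, j) is the inner product of filters i and j, measuring
    how strongly the two filters co-activate across the image."""
    return [[sum(a * b for a, b in zip(fi, fj)) for fj in responses]
            for fi in responses]
```

Two icons with similar styles yield similar Gram matrices even when their content differs, which is why this representation helps flag visually imitative counterfeits.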
