Alan F. Smeaton

Video Anomaly Detection via Spatio-Temporal Pseudo-Anomaly Generation: A Unified Approach

Nov 27, 2023
Ayush K. Rai, Tarun Krishna, Feiyan Hu, Alexandru Drimbarean, Kevin McGuinness, Alan F. Smeaton, Noel E. O'Connor

Video Anomaly Detection (VAD) is an open-set recognition task which is usually formulated as a one-class classification (OCC) problem, where training data comprises videos with only normal instances while test data contains both normal and anomalous instances. Recent works have investigated the creation of pseudo-anomalies (PAs) using only the normal data, making strong assumptions about real-world anomalies with regard to the abnormality of objects and the speed of motion in order to inject prior information about anomalies into an autoencoder (AE) based reconstruction model during training. This work proposes a novel method for generating generic spatio-temporal PAs by inpainting a masked-out region of an image using a pre-trained Latent Diffusion Model and further perturbing the optical flow using mixup to emulate spatio-temporal distortions in the data. In addition, we present a simple unified framework to detect real-world anomalies under the OCC setting by learning three types of anomaly indicators, namely reconstruction quality, temporal irregularity and semantic inconsistency. Extensive experiments on four VAD benchmark datasets, namely Ped2, Avenue, ShanghaiTech and UBnormal, demonstrate that our method performs on par with other existing state-of-the-art PA generation and reconstruction based methods under the OCC setting. Our analysis also examines the transferability and generalisation of PAs across these datasets, offering valuable insights into identifying real-world anomalies through PAs.
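
The following is a minimal sketch of the mixup-style optical flow perturbation described above, assuming dense flow fields are already available as numpy arrays; the mixing coefficient and the choice of partner frame are illustrative rather than the paper's exact settings.

```python
import numpy as np

def perturb_flow_mixup(flow_a: np.ndarray, flow_b: np.ndarray, lam: float = 0.5) -> np.ndarray:
    """Blend two dense optical flow fields (H x W x 2) to emulate temporal irregularity.

    flow_a: flow for the frame being perturbed.
    flow_b: flow from an unrelated frame, acting as the pseudo-anomalous partner.
    lam:    mixup coefficient; values far from 1.0 give stronger distortion.
    """
    assert flow_a.shape == flow_b.shape
    return lam * flow_a + (1.0 - lam) * flow_b

# Example: mix a frame's flow with flow sampled from a distant timestep.
flow_t = np.random.randn(240, 360, 2).astype(np.float32)    # placeholder flow field
flow_far = np.random.randn(240, 360, 2).astype(np.float32)  # placeholder partner flow
pseudo_anomalous_flow = perturb_flow_mixup(flow_t, flow_far, lam=float(np.random.beta(0.5, 0.5)))
```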

* 16 pages, 8 figures 

A Comparison of Lexicon-Based and ML-Based Sentiment Analysis: Are There Outlier Words?

Nov 10, 2023
Siddhant Jaydeep Mahajani, Shashank Srivastava, Alan F. Smeaton

Lexicon-based approaches to sentiment analysis of text are based on each word or lexical entry having a pre-defined weight indicating its sentiment polarity. These weights are usually manually assigned, but their accuracy when compared against machine learning based approaches to computing sentiment is not known. It may be that there are lexical entries whose sentiment values cause a lexicon-based approach to give results which are very different to a machine learning approach. In this paper we compute sentiment for more than 150,000 English language texts drawn from 4 domains using the Hedonometer, a lexicon-based technique, and Azure, a contemporary machine learning based approach which is part of the Azure Cognitive Services family of APIs and is easy to use. We model differences in sentiment scores between approaches for documents in each domain using a regression and analyse the independent variables (Hedonometer lexical entries) as indicators of each word's importance and contribution to the score differences. Our findings are that the importance of a word depends on the domain and there are no standout lexical entries which systematically cause differences in sentiment scores.
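
As an illustration of the comparison described above, the sketch below scores documents with a toy Hedonometer-style lexicon and regresses the per-document score differences on lexical-entry counts; the lexicon weights, documents and ML scores are placeholders, not the study's data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy lexicon: word -> happiness weight (the real Hedonometer lexicon has ~10,000 entries).
lexicon = {"great": 7.8, "good": 7.5, "bad": 2.5, "terrible": 1.9, "film": 5.5}

def lexicon_score(text: str) -> float:
    """Average the weights of lexicon words present in the text (Hedonometer-style)."""
    weights = [lexicon[w] for w in text.lower().split() if w in lexicon]
    return float(np.mean(weights)) if weights else 5.0  # 5.0 ~ neutral midpoint

docs = ["a great film", "a terrible film", "good but bad pacing"]
ml_scores = np.array([7.4, 2.2, 5.1])  # placeholder machine learning (e.g. Azure) scores

# Features: per-document counts of each lexical entry; target: lexicon-minus-ML score gap.
X = np.array([[doc.lower().split().count(w) for w in lexicon] for doc in docs])
y = np.array([lexicon_score(d) for d in docs]) - ml_scores

reg = LinearRegression().fit(X, y)
for word, coef in zip(lexicon, reg.coef_):
    print(word, round(coef, 3))  # each coefficient indicates a word's contribution to the gap
```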

* 4 pages, to appear in Proceedings of the 31st Irish Conference on Artificial Intelligence and Cognitive Science. December 7th-8th, 2023 

Heart Rate Detection Using an Event Camera

Sep 21, 2023
Aniket Jagtap, RamaKrishna Venkatesh Saripalli, Joe Lemley, Waseem Shariff, Alan F. Smeaton

Event cameras, also known as neuromorphic cameras, are an emerging technology that offers advantages over traditional shutter and frame-based cameras, including high temporal resolution, low power consumption, and selective data acquisition. In this study, we propose to harness the capabilities of event-based cameras to capture subtle changes in the surface of the skin caused by the pulsatile flow of blood in the wrist region. We investigate whether an event camera could be used for continuous non-invasive monitoring of heart rate (HR). Event camera video data from 25 participants, comprising varying age groups and skin colours, was collected and analysed. Ground-truth HR measurements obtained using conventional methods were used to evaluate the accuracy of automatic detection of HR from event camera data. Our experimental results and comparison to the performance of other non-contact HR measurement methods demonstrate the feasibility of using event cameras for pulse detection. We also acknowledge the challenges and limitations of our method, such as light-induced flickering and the subconscious but naturally occurring tremors of an individual during data capture.
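
One plausible way to recover a pulse rate from event data is sketched below under simple assumptions (uniform binning of event timestamps and a spectral peak search in the physiological band); this is an illustration, not the paper's exact pipeline.

```python
import numpy as np

def estimate_hr_from_events(timestamps_s: np.ndarray, bin_ms: float = 10.0) -> float:
    """Estimate heart rate (BPM) from the timestamps of events inside a wrist ROI.

    Bins events into an event-rate signal, then picks the dominant spectral peak
    in the 0.7-3.0 Hz band (42-180 BPM).
    """
    duration = timestamps_s.max() - timestamps_s.min()
    n_bins = int(duration / (bin_ms / 1000.0))
    rate, _ = np.histogram(timestamps_s, bins=n_bins)
    rate = rate - rate.mean()                            # remove the DC component
    freqs = np.fft.rfftfreq(n_bins, d=bin_ms / 1000.0)
    spectrum = np.abs(np.fft.rfft(rate))
    band = (freqs >= 0.7) & (freqs <= 3.0)
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return peak_hz * 60.0
```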

* Dataset available at https://doi.org/10.6084/m9.figshare.24039501.v1 

Using Saliency and Cropping to Improve Video Memorability

Sep 21, 2023
Vaibhav Mudgal, Qingyang Wang, Lorin Sweeney, Alan F. Smeaton

Video memorability is a measure of how likely a particular video is to be remembered by a viewer when that viewer has no emotional connection with the video content. It is an important characteristic as videos that are more memorable are more likely to be shared, viewed, and discussed. This paper presents results of a series of experiments where we improved the memorability of a video by selectively cropping frames based on image saliency. We present results of a basic fixed cropping as well as results from dynamic cropping, where both the size of the crop and its position within the frame move as the video is played and saliency is tracked. Our results indicate that, especially for videos of low initial memorability, the memorability score can be improved.
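
A minimal sketch of fixed-size cropping centred on the most salient point of each frame, assuming a per-frame saliency map is already available; the crop size, saliency model and any temporal smoothing are illustrative choices.

```python
import numpy as np

def crop_around_saliency(frame: np.ndarray, saliency: np.ndarray,
                         crop_h: int = 224, crop_w: int = 224) -> np.ndarray:
    """Crop a fixed-size window centred on the saliency peak of a frame.

    frame: H x W x 3 image; saliency: H x W map from any saliency model.
    Assumes the frame is larger than the crop; the window is clamped to stay inside it.
    """
    h, w = saliency.shape
    cy, cx = np.unravel_index(np.argmax(saliency), saliency.shape)
    top = int(np.clip(cy - crop_h // 2, 0, h - crop_h))
    left = int(np.clip(cx - crop_w // 2, 0, w - crop_w))
    return frame[top:top + crop_h, left:left + crop_w]

# Dynamic cropping would call this per frame, optionally smoothing the crop centre
# over time so the window follows the salient region as the video plays.
```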

* 12 pages 

Measuring the Quality of Text-to-Video Model Outputs: Metrics and Dataset

Sep 14, 2023
Iya Chivileva, Philip Lynch, Tomas E. Ward, Alan F. Smeaton

Evaluating the quality of videos generated from text-to-video (T2V) models is important if they are to produce plausible outputs that convince a viewer of their authenticity. We examine some of the metrics used in this area and highlight their limitations. The paper presents a dataset of more than 1,000 generated videos from 5 very recent T2V models on which some of those commonly used quality metrics are applied. We also include extensive human quality evaluations on those videos, allowing the relative strengths and weaknesses of metrics, including human assessment, to be compared. The contribution is an assessment of commonly used quality metrics, and a comparison of their performances and the performance of human evaluations on an open dataset of T2V videos. Our conclusion is that naturalness and semantic matching with the text prompt used to generate the T2V output are important but there is no single measure to capture these subtleties in assessing T2V model output.
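
As one example of the semantic-matching dimension mentioned in the conclusion, the sketch below scores a generated video against its prompt using average CLIP frame-text similarity; this is an illustrative metric, not necessarily one of those evaluated in the paper.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def prompt_video_similarity(prompt: str, frames: list[Image.Image]) -> float:
    """Average cosine similarity between a text prompt and sampled video frames."""
    inputs = processor(text=[prompt], images=frames, return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    text_emb = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    image_embs = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    return float((image_embs @ text_emb.T).mean())
```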

* 13 pages 

Domain Generalisation with Bidirectional Encoder Representations from Vision Transformers

Jul 16, 2023
Hamza Riaz, Alan F. Smeaton

Domain generalisation involves pooling knowledge from source domain(s) into a single model that can generalise to unseen target domain(s). Recent research in domain generalisation has faced challenges when using deep learning models as they interact with data distributions which differ from those they are trained on. Here we perform domain generalisation on out-of-distribution (OOD) vision benchmarks using vision transformers. Initially we examine four vision transformer architectures, namely ViT, LeViT, DeiT, and BEiT, on out-of-distribution data. As the bidirectional encoder representation from image transformers (BEiT) architecture performs best, we use it in further experiments on three benchmarks: PACS, Home-Office and DomainNet. Our results show significant improvements in validation and test accuracy, and our implementation significantly narrows the gap between within-distribution and OOD performance.
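
A rough sketch of the fine-tune-then-evaluate protocol implied above, using a pre-trained BEiT from the timm library; the class count, optimiser settings and loops are illustrative assumptions rather than the paper's configuration.

```python
import timm
import torch
from torch.utils.data import DataLoader

# Pre-trained BEiT backbone with a fresh head (7 classes is the PACS setting; others differ).
model = timm.create_model("beit_base_patch16_224", pretrained=True, num_classes=7)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=0.05)
criterion = torch.nn.CrossEntropyLoss()

def train_on_source(source_loader: DataLoader, epochs: int = 10) -> None:
    """Fine-tune on pooled source domains; the target domain is never seen in training."""
    model.train()
    for _ in range(epochs):
        for images, labels in source_loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()

@torch.no_grad()
def evaluate_ood(target_loader: DataLoader) -> float:
    """Accuracy on the held-out, out-of-distribution target domain."""
    model.eval()
    correct = total = 0
    for images, labels in target_loader:
        correct += (model(images).argmax(dim=1) == labels).sum().item()
        total += labels.numel()
    return correct / total
```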

* 4 pages, accepted at the Irish Machine Vision and Image Processing Conference (IMVIP), Galway, August 2023 

Defect Classification in Additive Manufacturing Using CNN-Based Vision Processing

Jul 14, 2023
Xiao Liu, Alessandra Mileo, Alan F. Smeaton

The development of computer vision and in-situ monitoring using visual sensors allows the collection of large datasets from the additive manufacturing (AM) process. Such datasets could be used with machine learning techniques to improve the quality of AM. This paper examines two scenarios: first, using convolutional neural networks (CNNs) to accurately classify defects in an image dataset from AM, and second, applying active learning techniques to the developed classification model. This allows the construction of a human-in-the-loop mechanism to reduce the size of the dataset required for training and to assist in generating training data.
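
A minimal sketch of the uncertainty-based query step in such a human-in-the-loop cycle; the least-confidence strategy, batch budget and model interface are assumptions for illustration.

```python
import numpy as np
import torch

@torch.no_grad()
def select_uncertain(model: torch.nn.Module, unlabelled: torch.Tensor, budget: int = 20) -> np.ndarray:
    """Pick the images the defect classifier is least confident about for human labelling.

    unlabelled: N x C x H x W tensor of candidate images from the AM monitoring stream.
    Returns the indices of the `budget` lowest-confidence samples.
    """
    model.eval()
    probs = torch.softmax(model(unlabelled), dim=1)
    confidence = probs.max(dim=1).values               # top-class probability per image
    return torch.argsort(confidence)[:budget].cpu().numpy()

# Cycle: train CNN -> score the unlabelled pool -> have an expert label the selected
# images -> add them to the training set -> retrain, repeating until performance plateaus.
```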

* 4 pages, accepted at the Irish Machine Vision and Image Processing Conference (IMVIP), Galway, August 2023 

Calculating the matrix profile from noisy data

Jun 16, 2023
Colin Hehir, Alan F. Smeaton

The matrix profile (MP) is a data structure computed from a time series which encodes the data required to locate motifs and discords, corresponding to recurring patterns and outliers respectively. When the time series contains noisy data then the conventional approach is to pre-filter it in order to remove noise, but this cannot apply in unsupervised settings where patterns and outliers are not annotated. The resilience of the algorithm used to generate the MP when faced with noisy data remains unknown. We measure the similarities between the MP from original time series data and MPs generated from the same data with noise added, under a range of parameter settings including adding duplicates and adding irrelevant data. We use three real-world datasets drawn from diverse domains for these experiments. Based on dissimilarities between the MPs, our results suggest that MP generation is resilient to a small amount of noise being introduced into the data, but as the amount of noise increases this resilience disappears.
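
A small sketch of the comparison described above using the stumpy library: compute the MP of a series and of a noisy copy, then measure how far apart the two profiles are; the synthetic series, noise level and window length are illustrative.

```python
import numpy as np
import stumpy

rng = np.random.default_rng(0)
ts = np.sin(np.linspace(0, 40 * np.pi, 4000)) + 0.05 * rng.standard_normal(4000)
noisy = ts + 0.5 * rng.standard_normal(ts.size)      # same series with added noise

m = 100                                              # subsequence (window) length
mp_clean = stumpy.stump(ts, m)[:, 0].astype(float)   # column 0 holds the MP distances
mp_noisy = stumpy.stump(noisy, m)[:, 0].astype(float)

# One simple dissimilarity between the two profiles: root-mean-square difference.
rms_diff = np.sqrt(np.mean((mp_clean - mp_noisy) ** 2))
print(f"RMS difference between matrix profiles: {rms_diff:.3f}")
```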

* PLoS ONE 18(6): e0286763  
* 16 pages 

Enhancing Gappy Speech Audio Signals with Generative Adversarial Networks

May 09, 2023
Deniss Strods, Alan F. Smeaton

Gaps, dropouts and short clips of corrupted audio are a common problem and particularly annoying when they occur in speech. This paper uses machine learning to regenerate gaps of up to 320ms in an audio speech signal. Audio regeneration is translated into image regeneration by transforming audio into a Mel-spectrogram and using image in-painting to regenerate the gaps. The full Mel-spectrogram is then transferred back to audio using the Parallel-WaveGAN vocoder and integrated into the audio stream. Using a sample of 1,300 spoken audio clips of between 1 and 10 seconds taken from the publicly-available LJSpeech dataset, our results show regeneration of audio gaps in close to real time using GANs on a GPU-equipped system. As expected, the smaller the gap in the audio, the better the quality of the filled gaps. On a gap of 240ms the average mean opinion score (MOS) for the best performing models was 3.737, on a scale of 1 (worst) to 5 (best), which is sufficient for a human to perceive the result as close to uninterrupted human speech.
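
The sketch below shows the audio-to-Mel-spectrogram step and the masking of a 320 ms gap; the filename is a placeholder, and the in-painting model and Parallel-WaveGAN vocoder stages are only indicated in comments.

```python
import librosa

# Load a speech clip (placeholder path) and convert it to a Mel-spectrogram.
audio, sr = librosa.load("sample.wav", sr=22050)
hop = 256
mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=80, hop_length=hop)

# Mask a 320 ms gap: zero the corresponding spectrogram frames; this is the region
# an image in-painting model would be asked to regenerate.
gap_start_s, gap_len_s = 1.0, 0.32
f0 = int(gap_start_s * sr / hop)
f1 = f0 + int(gap_len_s * sr / hop)
masked = mel.copy()
masked[:, f0:f1] = 0.0

# masked -> in-painting model -> completed Mel-spectrogram -> Parallel-WaveGAN vocoder
# -> waveform spliced back into the original audio stream (these steps omitted here).
```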

* 7 pages, 4 figures, 4 tables. 34th Irish Signals and Systems Conferences, 13-14 June 2023 

Automatic Detection of Signalling Behaviour from Assistance Dogs as they Forecast the Onset of Epileptic Seizures in Humans

Mar 11, 2023
Hitesh Raju, Ankit Sharma, Aoife Smeaton, Alan F. Smeaton

Epilepsy, or the occurrence of epileptic seizures, is one of the world's most well-known neurological disorders, affecting millions of people. Seizures mostly occur due to non-coordinated electrical discharges in the human brain and may cause damage, including collapse and loss of consciousness. If the onset of a seizure can be forecast then the subject can be placed into a safe environment or position so that self-injury as a result of a collapse can be minimised. However, there are no definitive methods to predict seizures in an everyday, uncontrolled environment. Previous studies have shown that pet dogs have the ability to detect the onset of an epileptic seizure by scenting the characteristic volatile organic compounds exuded through the skin by a subject prior to a seizure occurring, and there are cases where assistance dogs, trained to scent the onset of a seizure, can signal this to their owner/trainer. In this work we identify how we can automatically detect the signalling behaviours of trained assistance dogs and use this to alert their owner. Using data from an accelerometer worn on the collar of a dog, we describe how we gathered movement data from 11 trained dogs for a total of 107 days as they exhibited signalling behaviour on command. We present the machine learning techniques used to accurately detect signalling from routine dog behaviour. This work is a step towards automatic alerting of the likely onset of an epileptic seizure from the signalling behaviour of a trained assistance dog.
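
A rough sketch of one way to turn collar-worn accelerometer data into a signalling-vs-routine classifier: window the tri-axial signal, extract simple statistical features and fit a standard classifier. The window length, features, classifier and synthetic data are illustrative assumptions, not the paper's exact method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(acc: np.ndarray, win: int = 250, step: int = 125) -> np.ndarray:
    """Slide a window over N x 3 accelerometer samples (x, y, z) and extract
    per-axis mean, standard deviation and peak magnitude for each window."""
    feats = []
    for start in range(0, len(acc) - win + 1, step):
        seg = acc[start:start + win]
        feats.append(np.concatenate([seg.mean(axis=0), seg.std(axis=0), np.abs(seg).max(axis=0)]))
    return np.array(feats)

# Placeholder recordings: commanded signalling behaviour vs routine behaviour.
acc_signal = np.random.randn(5000, 3)
acc_routine = np.random.randn(5000, 3)

Xs, Xr = window_features(acc_signal), window_features(acc_routine)
X = np.vstack([Xs, Xr])
y = np.concatenate([np.ones(len(Xs)), np.zeros(len(Xr))])

clf = RandomForestClassifier(n_estimators=200).fit(X, y)  # signalling (1) vs routine (0)
```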

* The 38th ACM/SIGAPP Symposium on Applied Computing (SAC '23), March 27-April 2, 2023, Tallinn, Estonia  
* 8 pages, 5 tables, 6 figures 