Jay Desai

Two-Stage Violence Detection Using ViTPose and Classification Models at Smart Airports

Aug 30, 2023
İrem Üstek, Jay Desai, Iván López Torrecillas, Sofiane Abadou, Jinjie Wang, Quentin Fever, Sandhya Rani Kasthuri, Yang Xing, Weisi Guo, Antonios Tsourdos


This study introduces an innovative violence detection framework tailored to the unique requirements of smart airports, where prompt responses to violent situations are crucial. The proposed framework harnesses ViTPose for human pose estimation and employs a CNN-BiLSTM network to analyse spatial and temporal information within keypoint sequences, enabling accurate classification of violent behaviour in real time. Seamlessly integrated within the SAFE (Situational Awareness for Enhanced Security) framework of SAAB, the solution underwent integrated testing to ensure robust performance in real-world scenarios. The AIRTLab dataset, characterized by its high video quality and relevance to surveillance scenarios, is used in this study to enhance the model's accuracy and mitigate false positives. As airports face increased foot traffic in the post-pandemic era, implementing AI-driven violence detection systems such as the one proposed is paramount for improving security, expediting response times, and promoting data-informed decision making. The framework not only diminishes the probability of violent events but also assists surveillance teams in effectively addressing potential threats, ultimately fostering a more secure and protected aviation sector. Code is available at: https://github.com/Asami-1/GDP.
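As a rough illustration of the second stage described above, the sketch below shows a CNN-BiLSTM classifier over pose-keypoint sequences in PyTorch. The keypoint count, channel layout, and layer sizes are assumptions for illustration only, not the paper's exact configuration.

```python
# Minimal sketch (not the authors' exact implementation) of a CNN-BiLSTM
# classifier over pose-keypoint sequences. Assumptions: 17 COCO keypoints
# with (x, y, confidence) per frame, clips of T frames.
import torch
import torch.nn as nn

class KeypointCNNBiLSTM(nn.Module):
    def __init__(self, num_keypoints=17, channels=3, hidden=128, num_classes=2):
        super().__init__()
        in_features = num_keypoints * channels          # flatten keypoints per frame
        self.cnn = nn.Sequential(                       # 1D convs over time capture local motion patterns
            nn.Conv1d(in_features, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.bilstm = nn.LSTM(64, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_classes)  # violent vs. non-violent

    def forward(self, keypoints):
        # keypoints: (batch, T, num_keypoints, channels), e.g. from a pose estimator such as ViTPose
        b, t, k, c = keypoints.shape
        x = keypoints.reshape(b, t, k * c).transpose(1, 2)  # (batch, features, T)
        x = self.cnn(x).transpose(1, 2)                     # (batch, T, 64)
        out, _ = self.bilstm(x)                             # temporal context in both directions
        return self.head(out[:, -1])                        # classify from the final time step

# Example: a batch of 4 clips, 32 frames each
logits = KeypointCNNBiLSTM()(torch.randn(4, 32, 17, 3))
print(logits.shape)  # torch.Size([4, 2])
```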


S2vNTM: Semi-supervised vMF Neural Topic Modeling

Jul 06, 2023
Weijie Xu, Jay Desai, Srinivasan Sengamedu, Xiaoyu Jiang, Francis Iannacci


Language-model-based methods are powerful techniques for text classification. However, these models have several shortcomings: (1) it is difficult to integrate human knowledge such as keywords; (2) they require substantial resources to train; and (3) they rely on large text corpora for pretraining. In this paper, we propose Semi-Supervised vMF Neural Topic Modeling (S2vNTM) to overcome these difficulties. S2vNTM takes a few seed keywords per topic as input, leverages the patterns of these keywords to identify potential topics, and optimizes the quality of each topic's keyword set. Across a variety of datasets, S2vNTM outperforms existing semi-supervised topic modeling methods in classification accuracy with only limited keywords provided. S2vNTM is also at least twice as fast as the baselines.
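A minimal sketch of the seed-keyword idea, assuming pretrained word embeddings: each topic's seed keywords define a unit-norm mean direction (as in a vMF distribution), and documents are softly assigned by cosine similarity. The function names, the `kappa` concentration value, and the embedding source are hypothetical; the actual model learns these quantities end-to-end rather than fixing them as below.

```python
# Illustrative sketch only: seed keywords -> vMF-style topic directions -> soft assignment.
import numpy as np

def topic_directions(seed_keywords, embeddings):
    """seed_keywords: dict topic -> list of words; embeddings: dict word -> vector."""
    dirs = {}
    for topic, words in seed_keywords.items():
        mean = np.mean([embeddings[w] for w in words], axis=0)
        dirs[topic] = mean / np.linalg.norm(mean)           # unit-norm vMF mean direction
    return dirs

def assign(doc_vector, dirs, kappa=10.0):
    """Soft topic assignment via a vMF-style softmax over cosine similarities."""
    v = doc_vector / np.linalg.norm(doc_vector)
    scores = {t: kappa * float(v @ d) for t, d in dirs.items()}  # concentration kappa sharpens the softmax
    z = np.exp(np.array(list(scores.values())) - max(scores.values()))
    probs = z / z.sum()
    return dict(zip(scores.keys(), probs))
```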

* ICLR Workshop 2023  
* 17 pages, 9 figures, ICLR Workshop 2023. arXiv admin note: text overlap with arXiv:2307.01226 

KDSTM: Neural Semi-supervised Topic Modeling with Knowledge Distillation

Jul 04, 2023
Weijie Xu, Xiaoyu Jiang, Jay Desai, Bin Han, Fuqin Yan, Francis Iannacci


In text classification tasks, fine-tuning pretrained language models like BERT and GPT-3 yields competitive accuracy; however, both approaches require pretraining on large text datasets. In contrast, general topic modeling methods have the advantage of analyzing documents to extract meaningful word patterns without pretraining. To leverage topic modeling's unsupervised insight extraction for text classification, we develop Knowledge Distillation Semi-supervised Topic Modeling (KDSTM). KDSTM requires no pretrained embeddings and only a few labeled documents, and it is efficient to train, making it ideal in resource-constrained settings. Across a variety of datasets, our method outperforms existing supervised topic modeling methods in classification accuracy, robustness, and efficiency, and achieves performance comparable to state-of-the-art weakly supervised text classification methods.
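A minimal sketch of a distillation step of the kind the abstract refers to, assuming a teacher classifier built from the few labeled documents and a student topic model that produces class logits. The temperature value and function names are illustrative assumptions, not the paper's exact objective.

```python
# Illustrative knowledge-distillation loss: the student is trained to match the
# teacher's softened label distribution rather than only the hard labels.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Softening both distributions with a temperature transfers the teacher's
    # knowledge about relative class similarities, not just its argmax.
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * temperature ** 2

# Example: 8 documents, 4 classes
loss = distillation_loss(torch.randn(8, 4), torch.randn(8, 4))
```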

* ICLR 2022 Workshop PML4DC  
* 12 pages, 4 figures, ICLR 2022 Workshop 

Attention-based Region of Interest (ROI) Detection for Speech Emotion Recognition

Mar 03, 2022
Jay Desai, Houwei Cao, Ravi Shah


Automatic emotion recognition for real-life applications is a challenging task. Human emotion expressions are subtle and can be conveyed by a combination of several emotions. In most existing emotion recognition studies, each audio utterance/video clip is labelled and classified in its entirety. However, utterance/clip-level labelling and classification can be too coarse to capture the subtle intra-utterance/clip temporal dynamics. For example, an utterance/video clip usually contains only a few emotion-salient regions and many emotionless regions. In this study, we propose to use an attention mechanism in deep recurrent neural networks to detect the Regions-of-Interest (ROI) that are more emotionally salient in human emotional speech/video, and to estimate the temporal emotion dynamics by aggregating those emotionally salient regions-of-interest. We compare and analyse the ROIs obtained from audio and video. We also compare the performance of the proposed attention networks with state-of-the-art LSTM models on the multi-class classification task of recognizing six basic human emotions, and the proposed attention models exhibit significantly better performance. Furthermore, the attention weight distribution can be used to interpret how an utterance can be expressed as a mixture of possible emotions.
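A minimal sketch, in PyTorch, of attention pooling over a bidirectional LSTM: the per-frame attention weights act as a soft ROI detector, and their weighted sum forms the utterance-level representation for six-way emotion classification. Feature and hidden dimensions are assumptions for illustration.

```python
# Illustrative attention-pooling emotion classifier over frame-level acoustic features.
import torch
import torch.nn as nn

class AttentionEmotionNet(nn.Module):
    def __init__(self, feat_dim=40, hidden=128, num_emotions=6):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)               # one salience score per frame
        self.classifier = nn.Linear(2 * hidden, num_emotions)

    def forward(self, frames):
        # frames: (batch, T, feat_dim), e.g. log-mel features per frame
        h, _ = self.lstm(frames)                            # (batch, T, 2*hidden)
        weights = torch.softmax(self.attn(h), dim=1)        # (batch, T, 1), ROI salience over time
        pooled = (weights * h).sum(dim=1)                   # aggregate emotion-salient regions
        return self.classifier(pooled), weights.squeeze(-1)

logits, salience = AttentionEmotionNet()(torch.randn(2, 200, 40))
print(logits.shape, salience.shape)  # torch.Size([2, 6]) torch.Size([2, 200])
```

The returned salience vector is what makes the model interpretable: peaks mark the emotion-salient regions within the utterance.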

* Paper written in 2019 