Abstract: In social network service platforms, crime suspects are likely to use cybercrime coded words for communication by adding criminal meanings to existing words or replacing them with similar words. For instance, the word 'ice' is often used to mean methamphetamine in drug crimes. To analyze the nature of cybercrime and the behavior of criminals, it is critical to detect such words quickly and to further understand their meaning. In the automated cybercrime coded word detection problem, it is difficult to collect a sufficient amount of training data for supervised learning and to directly apply language models that utilize context information to better understand natural language. To overcome these limitations, we propose a new two-step approach, in which a mean latent vector is constructed for each cybercrime through one of five different AutoEncoder models in the first step, and cybercrime coded words are detected based on multi-level latent representations in the second step. Moreover, to deeply understand the cybercrime coded words detected through the two-step approach, we propose three novel methods: (1) detection of newly coined words, (2) detection of words that frequently appear in both drug and sex crimes, and (3) automatic generation of a word taxonomy. According to our experimental results, among the various AutoEncoder models, the stacked AutoEncoder model shows the best performance. Additionally, the F1-score of the two-step approach is 0.991, which is higher than the 0.987 and 0.903 achieved by the existing dark-GloVe and dark-BERT models, respectively. By analyzing the experimental results of the three proposed methods, we can gain a deeper understanding of drug and sex crimes.
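The two-step idea described in this abstract can be illustrated with a minimal sketch, assuming PyTorch, a simple stacked AutoEncoder, and a distance-to-mean detection rule; the layer sizes, function names, and threshold below are illustrative assumptions and not the authors' published implementation.

```python
# Sketch (assumed): stacked AutoEncoder + mean latent vector per crime category,
# then detection of candidate coded words by proximity in the latent space.
import torch
import torch.nn as nn

class StackedAutoEncoder(nn.Module):
    def __init__(self, input_dim=300, hidden_dims=(128, 64)):
        super().__init__()
        # Encoder: progressively narrower stacked layers
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, hidden_dims[0]), nn.ReLU(),
            nn.Linear(hidden_dims[0], hidden_dims[1]), nn.ReLU(),
        )
        # Decoder mirrors the encoder to reconstruct the input embedding
        self.decoder = nn.Sequential(
            nn.Linear(hidden_dims[1], hidden_dims[0]), nn.ReLU(),
            nn.Linear(hidden_dims[0], input_dim),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def mean_latent_vector(model, crime_word_embeddings):
    """Step 1: encode known crime-related word embeddings and average them."""
    with torch.no_grad():
        _, latents = model(crime_word_embeddings)   # (N, latent_dim)
    return latents.mean(dim=0)                      # mean latent vector for the crime

def detect_coded_words(model, candidate_embeddings, mean_latent, threshold=0.5):
    """Step 2 (simplified): flag candidates whose latent representation
    lies close to the crime's mean latent vector."""
    with torch.no_grad():
        _, latents = model(candidate_embeddings)
    distances = torch.norm(latents - mean_latent, dim=1)
    return distances < threshold                    # boolean mask of detected words
```

In practice, the paper reports detection based on multi-level latent representations rather than the single fixed threshold assumed here; the sketch only conveys the overall mean-latent-vector-then-detect structure.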
Abstract: Nowadays, pre-trained sequence-to-sequence models such as BERTSUM and BART have shown state-of-the-art results in abstractive summarization. In these models, during fine-tuning, the encoder transforms sentences into context vectors in the latent space and the decoder learns the summary generation task based on the context vectors. In our approach, we consider two clusters of salient and non-salient context vectors, so that the decoder can attend more to the salient context vectors for summary generation. To this end, we propose a novel clustering transformer layer between the encoder and the decoder, which first generates the two clusters of salient and non-salient vectors, and then normalizes and shrinks the clusters to move them apart in the latent space. Our experimental results show that the proposed model outperforms the existing BART model by learning these distinct cluster patterns, improving ROUGE by up to 4% and BERTScore by 0.3% on average on the CNN/DailyMail and XSUM data sets.
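A minimal sketch of a clustering layer placed between a sequence-to-sequence encoder and decoder, assuming PyTorch; the saliency scorer, the hard two-way split, and the "normalize and push apart" step below are assumptions made for illustration based on the abstract, not the authors' exact clustering transformer layer.

```python
# Sketch (assumed): score encoder context vectors for saliency, form salient /
# non-salient cluster centroids, then normalize and separate the two clusters
# before passing the states on to the decoder.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClusteringLayer(nn.Module):
    def __init__(self, hidden_dim=768, margin=1.0):
        super().__init__()
        self.saliency = nn.Linear(hidden_dim, 1)  # scores each context vector
        self.margin = margin

    def forward(self, encoder_states):
        # encoder_states: (batch, seq_len, hidden_dim)
        scores = torch.sigmoid(self.saliency(encoder_states))          # (B, L, 1)
        salient_mask = (scores > 0.5).float()
        nonsal_mask = 1.0 - salient_mask

        # Centroids of the salient and non-salient clusters
        salient_centroid = (encoder_states * salient_mask).sum(1) / salient_mask.sum(1).clamp(min=1)
        nonsal_centroid = (encoder_states * nonsal_mask).sum(1) / nonsal_mask.sum(1).clamp(min=1)

        # Normalize the states, then shift each vector toward its own cluster
        # centroid so that the two clusters tighten and move apart in latent space.
        states = F.normalize(encoder_states, dim=-1)
        push = (salient_mask * salient_centroid.unsqueeze(1)
                + nonsal_mask * nonsal_centroid.unsqueeze(1))
        return states + self.margin * F.normalize(push, dim=-1)
```

In the model described by the abstract, the decoder would then attend over these separated context vectors; how the layer is trained (e.g., any auxiliary clustering loss) is not specified here and is left out of the sketch.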