This paper describes additional aspects of a digital tool called the 'Textual History Tool'. We describe its salient features, with special reference to those that may help the philologist digitize commentaries and sub-commentaries on a text. This tool captures the historical evolution of a text through its various temporal stages, together with interrelated data culled from various types of related texts. We use the text of the K\=a\'sik\=avrtti (KV) as a sample text, and with the help of philologists, we digitize the commentaries available to us. We digitize the commentaries Ny\=asa (Ny) and Padama\~njar\=i (Pm), and the sub-commentaries on the KV text known as the Tantraprad\=ipa (Tp) and the Makaranda (Mk). We divide each commentary and sub-commentary into functional units and describe the methodology and motivation behind this functional unit division. Our functional unit division helps generate more accurate phylogenetic trees for the text, based on distance methods applied to the data entered in the tool.
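To make the tree-building step concrete, below is a minimal sketch of distance-based tree construction over functional-unit data. The witness names and pairwise distances are hypothetical, and SciPy's average-linkage clustering stands in for whichever distance method the tool actually employs.

```python
# A minimal sketch of distance-based tree construction over functional units.
# The distance values are hypothetical; average-linkage clustering from SciPy
# stands in for the paper's unspecified distance method.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

# Hypothetical pairwise distances between witnesses of one functional unit.
witnesses = ["KV", "Ny", "Pm", "Tp", "Mk"]
dist = np.array([
    [0.0, 0.2, 0.3, 0.5, 0.6],
    [0.2, 0.0, 0.25, 0.45, 0.55],
    [0.3, 0.25, 0.0, 0.4, 0.5],
    [0.5, 0.45, 0.4, 0.0, 0.3],
    [0.6, 0.55, 0.5, 0.3, 0.0],
])

# Condense the symmetric matrix and build the tree by average linkage.
tree = linkage(squareform(dist), method="average")
dendrogram(tree, labels=witnesses, no_plot=True)  # inspect or plot as needed
print(tree)
```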
Text analysis includes lexical analysis of the text and has been widely studied and used in diverse applications. In the last decade, researchers have proposed many efficient solutions to analyze and classify large text datasets; however, the analysis and classification of short text is still a challenge because 1) the data is very sparse, 2) it contains noise words, and 3) its syntactic structure is difficult to determine. Short Messaging Service (SMS) is a text messaging service for mobile and smart phones, and this service is frequently used by all mobile users. Because of the popularity of SMS, marketing companies nowadays also use the service for direct marketing, also known as SMS marketing. In this paper, we propose an Ontology-based SMS Controller, which analyzes a text message and uses an ontology to classify it as legitimate or spam. The proposed system has been tested on different scenarios, and the experimental results show that the proposed solution is effective in terms of both efficiency and time.
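A minimal sketch of the ontology-driven classification idea follows; the concept hierarchy, keywords, and threshold below are illustrative assumptions, not the ontology used by the proposed controller.

```python
# A toy ontology mapping spam concepts to trigger keywords (illustrative only).
SPAM_ONTOLOGY = {
    "marketing": {"offer", "discount", "sale", "free", "buy"},
    "finance":   {"loan", "credit", "cash", "winner", "prize"},
}

def classify_sms(message: str, threshold: int = 2) -> str:
    """Label a message spam if enough ontology concepts are triggered."""
    tokens = set(message.lower().split())
    hits = sum(1 for keywords in SPAM_ONTOLOGY.values() if tokens & keywords)
    return "spam" if hits >= threshold else "legitimate"

print(classify_sms("Free cash offer, claim your discount now"))  # spam
print(classify_sms("Running late, see you at 8"))                # legitimate
```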
Recently, advances in industrial automation and high-speed printing have raised numerous challenges related to the printing quality inspection of final products. This paper proposes a machine vision based technique to assess the printing quality of text on industrial objects. The assessment targets three quality defects: text misalignment, varying printing shades, and misprinted text. The proposed scheme performs the quality inspection through a stochastic assessment technique based on the second-order statistics of printing. First, the text-containing area on the printed product is identified through image processing techniques. Second, alignment testing of the identified text-containing area is performed. Third, optical character recognition is used to divide the text into small boxes; the intensity value of each text-containing box is taken as a random variable, and second-order statistics are estimated to detect varying-shade printing defects in the text under one-, two-, and three-sigma thresholds. Fourth, K-Nearest Neighbors based supervised machine learning provides the stochastic process for misprinted-text detection. Finally, the technique is deployed on an industrial image for printing quality assessment with varying values of n and m. The results show that the proposed SAML-QC technique can perform real-time automated inspection for industrial printing.
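The sigma-thresholding step can be illustrated as follows; the per-box mean intensities are toy values, assumed to have been extracted after OCR segmentation.

```python
# A minimal sketch of the sigma-threshold check on per-box intensities.
# The values below are illustrative, not measurements from the paper.
import numpy as np

box_intensity = np.array([118, 121, 119, 140, 120, 117, 96, 122])  # per-box means
mu, sigma = box_intensity.mean(), box_intensity.std()

for k in (1, 2, 3):  # one-, two-, and three-sigma thresholds
    flagged = np.where(np.abs(box_intensity - mu) > k * sigma)[0]
    print(f"{k}-sigma outliers (possible shade defects): boxes {flagged.tolist()}")
```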
Text classification has long been a staple of natural language processing, with applications spanning sentiment analysis, online content tagging, recommender systems, and spam detection. However, text classification, by nature, suffers from a variety of issues stemming from dataset imbalance, text ambiguity, subjectivity, and the lack of linguistic context in the data. In this paper, we explore the use of text ranking, commonly used in information retrieval, to carry out challenging classification-based tasks. We propose a novel end-to-end ranking approach consisting of a Transformer network responsible for producing representations for a pair of text sequences, which are in turn passed into a context-aggregating network that outputs ranking scores used to determine an ordering of the sequences based on some notion of relevance. We perform numerous experiments on publicly available datasets and investigate the possibility of applying our ranking approach to problems often addressed using classification. In an experiment on a heavily skewed sentiment analysis dataset, converting ranking results to classification labels yields an approximately 22% improvement over state-of-the-art text classification, demonstrating the efficacy of text ranking over text classification in certain scenarios.
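As an illustration of the pair-encode-then-aggregate design, here is a minimal PyTorch sketch; the layer sizes, mean-pooling, and vocabulary size are assumptions rather than the paper's exact architecture.

```python
# A minimal sketch: a Transformer encodes each sequence of a pair, and a small
# aggregator maps the concatenated representations to a relevance score.
import torch
import torch.nn as nn

class PairRanker(nn.Module):
    def __init__(self, vocab_size=30522, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Context aggregator: maps the two sequence representations to a score.
        self.aggregate = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                       nn.Linear(dim, 1))

    def encode(self, ids):
        return self.encoder(self.embed(ids)).mean(dim=1)  # mean-pool tokens

    def forward(self, ids_a, ids_b):
        return self.aggregate(torch.cat([self.encode(ids_a),
                                         self.encode(ids_b)], dim=-1))

model = PairRanker()
a = torch.randint(0, 30522, (4, 16))  # batch of token-id sequences
b = torch.randint(0, 30522, (4, 16))
scores = model(a, b)                  # higher score = more relevant ordering
print(scores.shape)                   # torch.Size([4, 1])
```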
In this paper we propose to learn a multimodal image and text embedding from Web and Social Media data, aiming to leverage the semantic knowledge learnt in the text domain and transfer it to a visual model for semantic image retrieval. We demonstrate that the pipeline can learn from images with associated text without supervision, and we perform a thorough analysis of five different text embeddings on three different benchmarks. We show that the embeddings learnt with Web and Social Media data are competitive with supervised methods on the text-based image retrieval task, and that we clearly outperform the state of the art on the MIRFlickr dataset when training on the target data. Further, we demonstrate how semantic multimodal image retrieval can be performed using the learnt embeddings, going beyond classical instance-level retrieval problems. Finally, we present a new dataset, InstaCities1M, composed of Instagram images and their associated texts, which can be used for fair comparison of image-text embeddings.
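To illustrate what learning a shared image-text space involves, the following sketch uses a generic contrastive (InfoNCE-style) objective over paired features; the paper's actual training pipeline differs, so this is only a stand-in.

```python
# A minimal sketch of projecting image and text features into a shared space
# and training with a contrastive objective. Feature dimensions are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

img_proj = nn.Linear(2048, 256)   # e.g. CNN pooled features -> shared space
txt_proj = nn.Linear(300, 256)    # e.g. averaged word embeddings -> shared space

img_feats = torch.randn(8, 2048)  # stand-in CNN features for 8 images
txt_feats = torch.randn(8, 300)   # stand-in text embeddings for their captions

z_img = F.normalize(img_proj(img_feats), dim=-1)
z_txt = F.normalize(txt_proj(txt_feats), dim=-1)

logits = z_img @ z_txt.t() / 0.07  # cosine similarities over a temperature
labels = torch.arange(8)           # matching pairs lie on the diagonal
loss = F.cross_entropy(logits, labels)
print(loss.item())
```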
Complex language models trained on huge text corpora have shown unparalleled text generation capabilities and, thanks to transfer learning, are accessible to a greater number of users. However, despite recent developments, users are not yet able to fully control particular aspects of the generated text. This is why we propose a fine-tuned OpenAI GPT-2 model for controllable and contextualised text generation specific to novels. By integrating it into a web service, we aim to enable authors to write and request automatic text generation that is consistent with both the previous and the following paragraphs. They can specify the genre of their book, the length of the desired text, the entities it should mention, and its content, via keywords or a short summary. We explore the technical possibilities and limitations around these objectives.
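A minimal sketch of the intended usage follows, using the Hugging Face transformers API; the control-token format is an assumption for illustration, and the stock "gpt2" checkpoint stands in for the fine-tuned novel model.

```python
# A minimal sketch of prompting a GPT-2 model with control metadata.
# The [GENRE]/[ENTITIES]/[SUMMARY] format is hypothetical, not the paper's
# actual conditioning scheme.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Encode genre, entities, and a short summary alongside the preceding paragraph.
prompt = ("[GENRE] fantasy [ENTITIES] Mira; the old lighthouse "
          "[SUMMARY] Mira discovers a hidden door. [P1] The storm had passed.")
ids = tokenizer.encode(prompt, return_tensors="pt")

out = model.generate(ids, max_length=ids.shape[1] + 60,
                     do_sample=True, top_p=0.9,
                     pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0][ids.shape[1]:], skip_special_tokens=True))
```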
Text-based Visual Question Answering (TextVQA) is a recently introduced challenge that requires a machine to read text in images and answer natural language questions by jointly reasoning over the question, Optical Character Recognition (OCR) tokens, and visual content. Most state-of-the-art (SoTA) VQA methods fail to answer these questions because of i) poor text reading ability; ii) a lack of text-visual reasoning capacity; and iii) the adoption of a discriminative answering mechanism instead of a generative one, which makes it hard to cover both OCR tokens and general text tokens in the final answer. In this paper, we propose a structured multimodal attention (SMA) neural network to address these issues. SMA first uses a structural graph representation to encode the object-object, object-text, and text-text relationships appearing in the image, and then designs a multimodal graph attention network to reason over it. Finally, the outputs of these modules are processed by a global-local attentional answering module to produce an answer that iteratively covers tokens from both OCR and general text. Our proposed model outperforms SoTA models on the TextVQA dataset and on all three tasks of the ST-VQA dataset. To provide an upper bound for our method and a fair testing base for further work, we also provide human-annotated ground-truth OCR annotations for the TextVQA dataset, which were not included in the original release.
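As a rough illustration of graph attention over object and OCR-token nodes, consider the following single-head layer; the dimensions and adjacency construction are toy assumptions, not SMA's role-aware attention.

```python
# A minimal single-head graph attention layer: nodes attend to their graph
# neighbours with learned attention weights. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.w = nn.Linear(dim, dim, bias=False)
        self.a = nn.Linear(2 * dim, 1, bias=False)

    def forward(self, h, adj):
        # h: (N, dim) node features; adj: (N, N) with 1 where an edge exists.
        z = self.w(h)
        n = z.size(0)
        pairs = torch.cat([z.unsqueeze(1).expand(n, n, -1),
                           z.unsqueeze(0).expand(n, n, -1)], dim=-1)
        e = F.leaky_relu(self.a(pairs).squeeze(-1))
        e = e.masked_fill(adj == 0, float("-inf"))
        return torch.softmax(e, dim=-1) @ z  # attention-weighted neighbours

nodes = torch.randn(5, 64)   # e.g. 3 object nodes + 2 OCR-token nodes
adj = torch.ones(5, 5)       # fully connected toy graph
print(GraphAttention(64)(nodes, adj).shape)  # torch.Size([5, 64])
```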
Scene text detection remains a challenging task: strokes may be extremely small or low-resolution, and text instances may be close together or arbitrarily shaped. In this paper, StrokeNet is proposed to detect text effectively by capturing fine-grained strokes and inferring structural relations among the hierarchical representations in a graph. Unlike existing approaches that represent a text area by a series of points or rectangular boxes, we directly localize the strokes of each text instance through a Stroke Assisted Prediction Network (SAPN). In addition, a Hierarchical Relation Graph Network (HRGN) performs relational reasoning and predicts the likelihood of linkages, effectively splitting close text instances and grouping node classification results into arbitrary-shaped text regions. We introduce a novel dataset with stroke-level annotations, SynthStroke, for offline pre-training of our model. Experiments on wide-ranging benchmarks verify the state-of-the-art performance of our method. Our dataset and code will be made available.
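The grouping step can be sketched as follows: threshold the predicted pairwise linkage likelihoods and take connected components as text regions. The likelihood matrix below is illustrative, not actual HRGN output.

```python
# A minimal sketch of grouping nodes into text regions from linkage scores.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

link_prob = np.array([            # hypothetical linkage likelihoods, 4 nodes
    [1.0, 0.9, 0.1, 0.0],
    [0.9, 1.0, 0.2, 0.1],
    [0.1, 0.2, 1.0, 0.8],
    [0.0, 0.1, 0.8, 1.0],
])
adjacency = csr_matrix(link_prob > 0.5)          # keep confident links only
n_regions, region_id = connected_components(adjacency, directed=False)
print(n_regions, region_id)                      # 2 [0 0 1 1]
```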
Can the analysis of the semantics of words used in the text of a scientific paper predict its future impact as measured by citations? This study details examples of automated text classification that achieved an 80% success rate in distinguishing between highly-cited and little-cited articles. Automated intelligent systems allow the identification of promising works that could become influential in the scientific community. The problems of quantifying the meaning of texts and representing human language have been apparent since the inception of Natural Language Processing. This paper presents a novel method for the vector representation of text meaning based on information theory, and shows how this informational semantics can be used for text classification on the Leicester Scientific Corpus. We describe the experimental framework used to evaluate the impact of scientific articles through their informational semantics. Our interest is in citation classification: discovering how important the semantics of a text is in predicting its citation count. We propose the semantics of texts as an important factor for citation prediction. For each article, our system extracts the abstract, represents the words of the abstract as vectors in Meaning Space, automatically analyses the distribution of scientific categories (Web of Science categories) within the text of the abstract, and then classifies the paper according to its citation count (highly-cited or little-cited). We show that this informational approach to representing the meaning of a text offers an effective way to predict the scientific impact of research papers.
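One way to make the Meaning Space idea concrete: represent a word by the log-ratios of its in-category rate to its corpus-wide rate across scientific categories. The counts below are toy values, not the Leicester Scientific Corpus, and the paper's exact information measure may differ.

```python
# A minimal sketch of a category-based word vector. Toy counts only; the
# categories and rates are assumptions for illustration.
import numpy as np

categories = ["Physics", "Biology", "CS"]
word_count_in_cat = np.array([40, 5, 80], dtype=float)  # occurrences of "network"
cat_totals = np.array([10_000, 12_000, 9_000], dtype=float)
corpus_rate = word_count_in_cat.sum() / cat_totals.sum()

# Pointwise-information style coordinates in "Meaning Space".
meaning_vector = np.log2((word_count_in_cat / cat_totals) / corpus_rate)
print(dict(zip(categories, meaning_vector.round(2))))
```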