
"Text": models, code, and papers

Analysis Tool for UNL-Based Knowledge Representation

May 04, 2014
Shamim Ripon, Aoyan Barua, Mohammad Salah Uddin

The fundamental issue in knowledge representation is to provide a precise definition of the knowledge an entity possesses, in a manner that is independent of procedural considerations, context-free, and easy to manipulate, exchange, and reason about. Knowledge must be accessible to everyone regardless of their native language. Universal Networking Language (UNL) is a declarative formal language and a generalized form of human language on a machine-independent digital platform for defining, recapitulating, amending, storing, and disseminating knowledge among people of different affiliations. UNL extracts semantic data from a native language for interlingual machine translation. This paper presents the development of a graphical tool that incorporates UNL to provide a visual means of representing the semantic data available in a native text. UNL represents the semantics of a sentence as a conceptual hyper-graph. We translate this information into XML format and create a graph from the XML, representing the actual concepts available in the native language.

* Journal of Advanced Computer Science and Technology Research (JACSTR) Vol. 2, No. 4, pp. 176-183, 2012 
* 8 pages, 5 figures. arXiv admin note: text overlap with arXiv:cs/0404030 by other authors 
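
As a toy illustration of the pipeline the abstract describes, the sketch below (not the authors' tool) parses a handful of UNL binary relations, serializes them as XML, and rebuilds a concept graph from that XML; the relation set and XML element names are assumptions for illustration only.

```python
# Minimal sketch: UNL relations -> XML -> graph. Relation labels follow
# the standard UNL convention (agt = agent, obj = object); the sentence
# and element names are illustrative.
import re
import xml.etree.ElementTree as ET

UNL_RELATIONS = [
    "agt(eat.@entry, john)",
    "obj(eat.@entry, apple)",
]

def unl_to_xml(relations):
    root = ET.Element("unl")
    for rel in relations:
        m = re.match(r"(\w+)\((.+?),\s*(.+?)\)", rel)
        if not m:
            continue
        name, frm, to = m.groups()
        arc = ET.SubElement(root, "relation", label=name)
        ET.SubElement(arc, "from").text = frm
        ET.SubElement(arc, "to").text = to
    return root

def xml_to_graph(root):
    """Rebuild the hyper-graph as an adjacency list keyed by concept."""
    graph = {}
    for arc in root.iter("relation"):
        frm, to = arc.find("from").text, arc.find("to").text
        graph.setdefault(frm, []).append((arc.get("label"), to))
    return graph

graph = xml_to_graph(unl_to_xml(UNL_RELATIONS))
print(graph)  # {'eat.@entry': [('agt', 'john'), ('obj', 'apple')]}
```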

Analysing Temporally Annotated Corpora with CAVaT

Mar 22, 2012
Leon Derczynski, Robert Gaizauskas

We present CAVaT, a tool that performs Corpus Analysis and Validation for TimeML. CAVaT is an open source, modular checking utility for statistical analysis of features specific to temporally-annotated natural language corpora. It provides reporting, highlights salient links between a variety of general and time-specific linguistic features, and also validates a temporal annotation to ensure that it is logically consistent and sufficiently annotated. Uniquely, CAVaT provides analysis specific to TimeML-annotated temporal information. TimeML is a standard for annotating temporal information in natural language text. In this paper, we present the reporting part of CAVaT, and then its error-checking ability, including the workings of several novel TimeML document verification methods. This is followed by the execution of some example tasks using the tool to show relations between times, events, signals and links. We also demonstrate inconsistencies in a TimeML corpus (TimeBank) that have been detected with CAVaT.

* Proc. LREC (2010) 398-404 
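
To make the consistency-checking idea concrete, here is a minimal sketch of one check of the kind CAVaT performs: a set of BEFORE TLINKs is logically inconsistent if it contains a cycle. The link data and function names are illustrative, not taken from CAVaT or TimeBank.

```python
# Sketch: cycle detection over BEFORE links via depth-first search.
from collections import defaultdict

tlinks = [("e1", "e2"), ("e2", "e3"), ("e3", "e1")]  # e_i BEFORE e_j

def has_before_cycle(links):
    graph = defaultdict(list)
    for a, b in links:
        graph[a].append(b)
    WHITE, GREY, BLACK = 0, 1, 2
    color = defaultdict(int)

    def dfs(node):
        color[node] = GREY
        for nxt in graph[node]:
            if color[nxt] == GREY:          # back edge: cycle found
                return True
            if color[nxt] == WHITE and dfs(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in list(graph))

print(has_before_cycle(tlinks))  # True -> annotation is inconsistent
```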

A Novel Approach for Password Authentication Using the Brain-State-in-a-Box (BSB) Model

Oct 07, 2011
A. S. N. Chakravarthy, Penmetsa V. Krishna Raja, P. S Avadhani

Authentication is the act of confirming the truth of an attribute of a datum or entity. This might involve confirming the identity of a person, tracing the origins of an artefact, ensuring that a product is what its packaging and labelling claim it to be, or assuring that a computer program is a trusted one. Authenticating information can pose special problems (especially man-in-the-middle attacks) and is often bound up with authenticating identity. This paper presents password authentication using the Brain-State-in-a-Box (BSB) model: textual and graphical passwords are converted into probabilistic values, and we show how these values are obtained for both text and graphical images. In comparison to existing layered neural network techniques, the proposed method provides better accuracy and quicker response time for registration and password changes.

* International Journal of Computer Science and Information Technologies (IJCSIT), Vol. 2, Issue 5, 2011, pp. 2127-2131 
* 5 pages 
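
The abstract does not spell out the network parameters; the sketch below assumes the standard Brain-State-in-a-Box formulation: a password is encoded as a bipolar pattern, stored in a Hebbian weight matrix, and recalled by iterating the clipped linear dynamics x <- clip(x + alpha * W x, -1, 1).

```python
# Sketch of BSB recall under the standard formulation; the password
# encoding and parameters are illustrative assumptions.
import numpy as np

def bsb_recall(W, x, alpha=0.2, steps=50):
    for _ in range(steps):
        x = np.clip(x + alpha * W @ x, -1.0, 1.0)
    return x

# Encode a (toy) password as a bipolar vector and store it.
stored = np.array([1, -1, 1, 1, -1, -1, 1, -1], dtype=float)
W = np.outer(stored, stored) / stored.size         # Hebbian storage

noisy = stored.copy()
noisy[:2] *= -1                                    # corrupt two bits
recalled = bsb_recall(W, noisy)
print(np.array_equal(np.sign(recalled), stored))   # True: pattern recovered
```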

A Self-Contained and Easily Accessible Discussion of the Method of Descente Infinie and Fermat's Only Explicitly Known Proof by Descente Infinie

Dec 14, 2010
Claus-Peter Wirth

We present Pierre Fermat's only proof by descente infinie that is known to exist today. As the text of its Latin original requires active mathematical interpretation, it is more a proof sketch than a proper mathematical proof. We discuss descente infinie from the mathematical, logical, historical, linguistic, and refined logic-historical points of view. We provide the required preliminaries from number theory and develop a self-contained proof in a modern form, which nevertheless is intended to follow Fermat's ideas closely. We then annotate an English translation of Fermat's original proof with terms from the modern proof. Including all important facts, we present a concise and self-contained discussion of Fermat's proof sketch, which is easily accessible to laymen in number theory as well as to laymen in the history of mathematics, and which provides new clarification of the Method of Descente Infinie to the experts in these fields. Last but not least, this paper fills a gap regarding the easy accessibility of the subject.

* ii + 36 pages, French abstract (Résumé) included in paper 
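
For readers unfamiliar with the method, here is a schematic of how a proof by descente infinie is organized, illustrated with the standard textbook example (the irrationality of the square root of 2); Fermat's own proof applies this same shape to right triangles with square area.

```latex
% A schematic of descente infinie: not Fermat's argument itself, but
% the shape his argument shares with the textbook example below.
Suppose a solution exists, and pick one of minimal positive measure
$\mu$. From it, construct a new solution of measure $\mu' < \mu$.
Since measures are positive integers, no infinite strictly decreasing
chain $\mu > \mu' > \mu'' > \dotsb$ exists; contradiction, hence no
solution exists.

Example (irrationality of $\sqrt{2}$): if $\sqrt{2} = p/q$ with $q$
minimal, then $(2q-p)^2 = 2(p-q)^2$, so $\sqrt{2} = (2q-p)/(p-q)$
with $0 < p - q < q$, contradicting the minimality of $q$.
```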

CONSENT: Context Sensitive Transformer for Bold Words Classification

May 16, 2022
Ionut-Catalin Sandu, Daniel Voinea, Alin-Ionut Popa

We present CONSENT, a simple yet effective CONtext SENsitive Transformer framework for context-dependent object classification within a fully-trainable end-to-end deep learning pipeline. We exemplify the proposed framework on the task of bold word detection, achieving state-of-the-art results. Given an image containing text of unknown font-types (e.g. Arial, Calibri, Helvetica), unknown language, taken under various degrees of illumination, angle distortion and scale variation, we extract all the words and learn a context-dependent binary classification (i.e. bold versus non-bold) using an end-to-end transformer-based neural network ensemble. To prove the extensibility of our framework, we demonstrate results competitive with the state of the art for the game of rock-paper-scissors by training the model to determine the winner given a sequence of two pictures depicting hand poses.
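
As a rough sketch of the architecture the abstract describes (not the authors' code), the model below embeds each cropped word image with a small CNN, lets a transformer encoder attend across all words in the image, and emits a per-word bold/non-bold logit; all layer sizes are illustrative assumptions.

```python
# Sketch: context-sensitive per-word binary classification.
import torch
import torch.nn as nn

class ContextSensitiveClassifier(nn.Module):
    def __init__(self, d_model=256, n_heads=4, n_layers=2):
        super().__init__()
        # Per-word crop encoder: a tiny CNN pooled to a d_model vector.
        self.crop_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, d_model, 3, stride=2, padding=1),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.context = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 1)   # bold vs. non-bold logit

    def forward(self, crops):               # crops: (batch, words, 3, H, W)
        b, w = crops.shape[:2]
        feats = self.crop_encoder(crops.flatten(0, 1)).view(b, w, -1)
        return self.head(self.context(feats)).squeeze(-1)  # (batch, words)

logits = ContextSensitiveClassifier()(torch.randn(2, 12, 3, 32, 96))
print(logits.shape)  # torch.Size([2, 12])
```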

A Computational Acquisition Model for Multimodal Word Categorization

May 12, 2022
Uri Berger, Gabriel Stanovsky, Omri Abend, Lea Frermann

Recent advances in self-supervised modeling of text and images open new opportunities for computational models of child language acquisition, which is believed to rely heavily on cross-modal signals. However, prior studies have been limited by their reliance on vision models trained on large image datasets annotated with a pre-defined set of depicted object categories. This is (a) not faithful to the information children receive and (b) prohibits the evaluation of such models with respect to category learning tasks, due to the pre-imposed category structure. We address this gap, and present a cognitively-inspired, multimodal acquisition model, trained from image-caption pairs on naturalistic data using cross-modal self-supervision. We show that the model learns word categories and object recognition abilities, and presents trends reminiscent of those reported in the developmental literature. We make our code and trained models public for future reference and use.

* Accepted to NAACL 2022 
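
The abstract does not name the training objective; a common choice for cross-modal self-supervision on image-caption pairs is a CLIP-style contrastive (InfoNCE) loss, sketched here with dummy embeddings standing in for the image and caption encoders.

```python
# Sketch: symmetric contrastive loss over a batch of image-caption pairs.
import torch
import torch.nn.functional as F

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Match each image to its own caption against in-batch negatives."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature        # (batch, batch) similarities
    targets = torch.arange(len(logits))         # diagonal = true pairs
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

loss = contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
```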

A Structured Span Selector

May 08, 2022
Tianyu Liu, Yuchen Eleanor Jiang, Ryan Cotterell, Mrinmaya Sachan

Many natural language processing tasks, e.g., coreference resolution and semantic role labeling, require selecting text spans and making decisions about them. A typical approach to such tasks is to score all possible spans and greedily select spans for task-specific downstream processing. This approach, however, does not incorporate any inductive bias about what sort of spans ought to be selected, e.g., that selected spans tend to be syntactic constituents. In this paper, we propose a novel grammar-based structured span selection model which learns to make use of the partial span-level annotation provided for such problems. Compared to previous approaches, our approach gets rid of the heuristic greedy span selection scheme, allowing us to model the downstream task on an optimal set of spans. We evaluate our model on two popular span prediction tasks: coreference resolution and semantic role labeling; and show improvements on both.

* NAACL 2022 camera-ready 
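
The model in the paper scores spans under a context-free grammar; as a much simpler stand-in that still illustrates structured (non-greedy) selection, the sketch below uses dynamic programming to pick the highest-scoring set of non-overlapping spans, something greedy top-k selection cannot guarantee. The span scores are made up for illustration.

```python
# Sketch: optimal non-overlapping span selection by DP (not the paper's
# grammar-based model).
def best_nonoverlapping_spans(n, scores):
    """scores: dict mapping (start, end) half-open spans to floats."""
    best = [0.0] * (n + 1)          # best[i] = best total over tokens [0, i)
    choice = [None] * (n + 1)
    for i in range(1, n + 1):
        best[i], choice[i] = best[i - 1], None       # option: skip token i-1
        for (s, e), sc in scores.items():
            if e == i and best[s] + sc > best[i]:    # option: end a span at i
                best[i], choice[i] = best[s] + sc, (s, e)
    spans, i = [], n                                 # backtrack the choices
    while i > 0:
        if choice[i] is None:
            i -= 1
        else:
            spans.append(choice[i])
            i = choice[i][0]
    return best[n], spans[::-1]

total, spans = best_nonoverlapping_spans(
    5, {(0, 2): 1.0, (1, 4): 2.5, (3, 5): 1.2, (0, 5): 3.0})
print(total, spans)  # 3.0 [(0, 5)]
```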

Long Video Generation with Time-Agnostic VQGAN and Time-Sensitive Transformer

Apr 07, 2022
Songwei Ge, Thomas Hayes, Harry Yang, Xi Yin, Guan Pang, David Jacobs, Jia-Bin Huang, Devi Parikh

Videos are created to express emotion, exchange information, and share experiences. Video synthesis has intrigued researchers for a long time. Despite the rapid progress driven by advances in visual synthesis, most existing studies focus on improving the frames' quality and the transitions between them, while little progress has been made in generating longer videos. In this paper, we present a method that builds on 3D-VQGAN and transformers to generate videos with thousands of frames. Our evaluation shows that our model trained on 16-frame video clips from standard benchmarks such as UCF-101, Sky Time-lapse, and Taichi-HD datasets can generate diverse, coherent, and high-quality long videos. We also showcase conditional extensions of our approach for generating meaningful long videos by incorporating temporal information with text and audio. Videos and code can be found at https://songweige.github.io/projects/tats/index.html.
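
One way such models extend beyond their training length is sliding-window autoregressive decoding; the sketch below shows that idea over a flat toy token sequence (the actual model operates on 3D VQGAN token volumes, and `model` here is an assumed next-token predictor, not the authors' network).

```python
# Sketch: generate past the training length by conditioning on a
# fixed-size window of the most recent tokens.
import torch

@torch.no_grad()
def generate_long(model, prompt, total_len, window=256):
    tokens = prompt.clone()                       # (1, t) token ids
    while tokens.shape[1] < total_len:
        context = tokens[:, -window:]             # keep a fixed-size window
        logits = model(context)[:, -1]            # next-token distribution
        nxt = torch.multinomial(logits.softmax(-1), 1)
        tokens = torch.cat([tokens, nxt], dim=1)
    return tokens
```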

What to Hide from Your Students: Attention-Guided Masked Image Modeling

Mar 23, 2022
Ioannis Kakogeorgiou, Spyros Gidaris, Bill Psomas, Yannis Avrithis, Andrei Bursuc, Konstantinos Karantzalos, Nikos Komodakis

Transformers and masked language modeling are quickly being adopted and explored in computer vision as vision transformers and masked image modeling (MIM). In this work, we argue that image token masking is fundamentally different from token masking in text, due to the amount and correlation of tokens in an image. In particular, to generate a challenging pretext task for MIM, we advocate a shift from random masking to informed masking. We develop and exhibit this idea in the context of distillation-based MIM, where a teacher transformer encoder generates an attention map, which we use to guide masking for the student encoder. We thus introduce a novel masking strategy, called attention-guided masking (AttMask), and we demonstrate its effectiveness over random masking for dense distillation-based MIM as well as plain distillation-based self-supervised learning on classification tokens. We confirm that AttMask accelerates the learning process and improves the performance on a variety of downstream tasks.
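
The masking strategy itself is easy to state in code: rank patch tokens by the teacher's [CLS]-to-patch attention and hide the most-attended ones from the student. The sketch below is a minimal rendering of that idea; the mask ratio and tensor shapes are illustrative.

```python
# Sketch: attention-guided masking. True = hide this patch from the student.
import torch

def attmask(teacher_attn_cls, mask_ratio=0.4):
    """teacher_attn_cls: (batch, n_patches) CLS-to-patch attention."""
    n_mask = int(teacher_attn_cls.shape[1] * mask_ratio)
    idx = teacher_attn_cls.argsort(dim=1, descending=True)[:, :n_mask]
    mask = torch.zeros_like(teacher_attn_cls, dtype=torch.bool)
    mask.scatter_(1, idx, True)
    return mask

attn = torch.rand(2, 196).softmax(dim=1)   # e.g. ViT-B/16 on 224x224 input
print(attmask(attn).sum(dim=1))            # 78 masked patches per image
```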

Exploring and Adapting Chinese GPT to Pinyin Input Method

Mar 02, 2022
Minghuan Tan, Yong Dai, Duyu Tang, Zhangyin Feng, Guoping Huang, Jing Jiang, Jiwei Li, Shuming Shi

While GPT has become the de-facto method for text generation tasks, its application to pinyin input methods remains unexplored. In this work, we make the first exploration of leveraging Chinese GPT for the pinyin input method. We find that a frozen GPT achieves state-of-the-art performance on perfect pinyin. However, performance drops dramatically when the input includes abbreviated pinyin. The reason is that an abbreviated pinyin can be mapped to many perfect pinyin, which in turn link to an even larger number of Chinese characters. We mitigate this issue with two strategies: enriching the context with pinyin, and optimizing the training process to help distinguish homophones. To further facilitate the evaluation of pinyin input methods, we create a dataset of 270K instances from 15 domains. Results show that our approach improves performance on abbreviated pinyin across all domains. Model analysis demonstrates that both strategies contribute to the performance boost.

* To appear in ACL 2022 
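
To see why abbreviated pinyin is hard, consider how a single abbreviation fans out: each abbreviation matches several full syllables, and each syllable in turn covers many homophonous characters. The tiny dictionary below is an illustrative assumption, not the paper's data.

```python
# Sketch: one abbreviated pinyin expands to many perfect-pinyin syllables.
SYLLABLES = ["shi", "shu", "shuo", "sha", "shen", "sheng"]

def expand_abbreviation(abbr):
    """Map an abbreviated pinyin (e.g. 'sh') to all matching syllables."""
    return [s for s in SYLLABLES if s.startswith(abbr)]

print(expand_abbreviation("sh"))    # all six syllables match
print(expand_abbreviation("shu"))   # ['shu', 'shuo'] -- still ambiguous
```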
