
Volume-based Semantic Labeling with Signed Distance Functions

Nov 13, 2015
Tommaso Cavallari, Luigi Di Stefano

Research on the two topics of Semantic Segmentation and SLAM (Simultaneous Localization and Mapping) has so far followed separate tracks. Here, we link them tightly by delineating a category label fusion technique that embeds semantic information into the dense map created by a volume-based SLAM algorithm such as KinectFusion. Accordingly, our approach is the first to provide a semantically labeled dense reconstruction of the environment from a stream of RGB-D images. We validate our proposal using a publicly available semantically annotated RGB-D dataset by a) employing ground-truth labels, b) corrupting such annotations with synthetic noise, and c) deploying a state-of-the-art semantic segmentation algorithm based on Convolutional Neural Networks.

* Submitted to PSIVT2015 
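The label fusion step described above can be sketched as a per-voxel vote accumulator over the volumetric grid: each incoming frame's semantic segmentation casts a label vote for every voxel it touches, and the voxel's label is the running majority. This is a minimal illustration under assumed data layouts, not the paper's exact weighting scheme:

```python
import numpy as np

def fuse_labels(vote_grid, voxel_ids, frame_labels):
    """Accumulate per-voxel label votes from one RGB-D frame's semantic
    segmentation. vote_grid has shape (num_voxels, num_classes)."""
    for v, lab in zip(voxel_ids, frame_labels):
        vote_grid[v, lab] += 1
    return vote_grid

def voxel_labels(vote_grid):
    """Majority label per voxel; voxels with no observations get -1."""
    labels = vote_grid.argmax(axis=1)
    labels[vote_grid.sum(axis=1) == 0] = -1
    return labels
```

Repeated observations of the same surface from different viewpoints make the majority vote robust to per-frame segmentation errors, which is the point of fusing labels into the map rather than labeling frames independently.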


An evaluation of keyword extraction from online communication for the characterisation of social relations

Feb 11, 2014
Jan Hauffa, Tobias Lichtenberg, Georg Groh

The set of interpersonal relationships on a social network service or a similar online community is usually highly heterogeneous. The concept of tie strength captures only one aspect of this heterogeneity. Since the unstructured text content of online communication artefacts is a salient source of information about a social relationship, we investigate the utility of keywords extracted from the message body as a representation of the relationship's characteristics as reflected by the conversation topics. Keyword extraction is performed using standard natural language processing methods. Communication data and human assessments of the extracted keywords are obtained from Facebook users via a custom application. The overall positive quality assessment provides evidence that the keywords indeed convey relevant information about the relationship.
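A standard keyword-extraction scheme of the kind referred to above is TF-IDF: score each word in a conversation by its frequency there, discounted by how many other conversations also use it. This is a generic sketch, not the paper's exact pipeline (which would add tokenisation, stop-word removal, and so on):

```python
import math
from collections import Counter

def extract_keywords(conversations, top_k=3):
    """Return the top_k TF-IDF-ranked words for each conversation,
    treating each conversation's concatenated messages as one document."""
    docs = [text.lower().split() for text in conversations]
    df = Counter(w for doc in docs for w in set(doc))  # document frequency
    n = len(docs)
    results = []
    for doc in docs:
        tf = Counter(doc)
        scores = {w: (tf[w] / len(doc)) * math.log(n / df[w]) for w in tf}
        ranked = sorted(scores, key=scores.get, reverse=True)
        results.append(ranked[:top_k])
    return results
```

Words shared across all conversations score zero, so the surviving keywords are exactly those that distinguish one relationship's topics from the others.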



Axiomatic properties of inconsistency indices for pairwise comparisons

Jun 28, 2013
Matteo Brunelli, Michele Fedrizzi

Pairwise comparisons are a well-known method for the representation of the subjective preferences of a decision maker. Evaluating their inconsistency has been a widely studied and discussed topic, and several indices have been proposed in the literature to perform this task. Since an acceptable level of consistency is closely related to the reliability of preferences, a suitable choice of an inconsistency index is a crucial phase in decision making processes. The use of different methods for measuring consistency must be carefully evaluated, as it can affect the decision outcome in practical applications. In this paper, we present five axioms aimed at characterizing inconsistency indices. In addition, we prove that some of the indices proposed in the literature satisfy these axioms, while others do not, and therefore, in our view, they may fail to correctly evaluate inconsistency.

* Journal of the Operational Research Society, 66(1), 1-15, (2015) 
* 25 pages, 3 figures 
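One of the classic indices examined axiomatically in this line of work is Saaty's consistency index, CI = (λ_max − n)/(n − 1), where λ_max is the principal eigenvalue of the n×n pairwise comparison matrix; a perfectly consistent matrix (a_ij = w_i/w_j for some weight vector w) yields CI = 0. A minimal sketch:

```python
import numpy as np

def consistency_index(A):
    """Saaty's CI for a pairwise comparison matrix A:
    (lambda_max - n) / (n - 1), zero iff A is fully consistent."""
    n = A.shape[0]
    lam_max = np.linalg.eigvals(A).real.max()
    return (lam_max - n) / (n - 1)
```

Different indices (CI among them) can rank the same matrices differently, which is precisely why an axiomatic characterisation of acceptable indices matters.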


Not As Easy As It Seems: Automating the Construction of Lexical Chains Using Roget's Thesaurus

Apr 01, 2012
Mario Jarmasz, Stan Szpakowicz

Morris and Hirst present a method of linking significant words that are about the same topic. The resulting lexical chains are a means of identifying cohesive regions in a text, with applications in many natural language processing tasks, including text summarization. The first lexical chains were constructed manually using Roget's International Thesaurus. Morris and Hirst wrote that automation would be straightforward given an electronic thesaurus. All applications so far have used WordNet to produce lexical chains, perhaps because adequate electronic versions of Roget's were not available until recently. We discuss the building of lexical chains using an electronic version of Roget's Thesaurus. We implement a variant of the original algorithm, and explain the necessary design decisions. We include a comparison with other implementations.

* Proceedings of the 16th Canadian Conference on Artificial Intelligence (AI 2003), Halifax, Canada, June 2003. Lecture Notes in Computer Science 2671, Springer-Verlag 2003, 544-549 
* 5 pages 
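The chaining idea can be sketched greedily: attach each candidate word to the first existing chain with which it shares a thesaurus head, otherwise start a new chain. This is a simplified illustration with a toy thesaurus mapping, not the paper's algorithm over Roget's full structure and relation types:

```python
def build_chains(words, thesaurus):
    """Greedy lexical chaining: words sharing a thesaurus head are
    grouped into the same chain. `thesaurus` maps a word to its set
    of head categories (a hypothetical stand-in for Roget's)."""
    chains = []  # each chain: (set of heads seen so far, list of words)
    for w in words:
        heads = thesaurus.get(w, set())
        for chain_heads, chain_words in chains:
            if heads & chain_heads:
                chain_heads |= heads  # chains absorb new heads as they grow
                chain_words.append(w)
                break
        else:
            chains.append((set(heads), [w]))
    return [chain_words for _, chain_words in chains]
```

The design decisions the paper discusses arise exactly here: which thesaurus relations count as a link, and whether chains may merge transitively through shared heads.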


Tree-Structured Stick Breaking Processes for Hierarchical Data

Jun 05, 2010
Ryan Prescott Adams, Zoubin Ghahramani, Michael I. Jordan

Many data are naturally modeled by an unobserved hierarchical structure. In this paper we propose a flexible nonparametric prior over unknown data hierarchies. The approach uses nested stick-breaking processes to allow for trees of unbounded width and depth, where data can live at any node and are infinitely exchangeable. One can view our model as providing infinite mixtures where the components have a dependency structure corresponding to an evolutionary diffusion down a tree. By using a stick-breaking approach, we can apply Markov chain Monte Carlo methods based on slice sampling to perform Bayesian inference and simulate from the posterior distribution on trees. We apply our method to hierarchical clustering of images and topic modeling of text data.

* 16 pages, 5 figures, submitted 
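The building block nested to form the tree prior is the ordinary stick-breaking construction: repeatedly break off a Beta(1, α)-distributed fraction of the remaining stick, so the weights sum to less than one for any finite truncation. A minimal sketch of the first k weights:

```python
import numpy as np

def stick_breaking(alpha, k, rng):
    """Draw the first k weights of a stick-breaking (GEM) process
    with concentration alpha."""
    betas = rng.beta(1.0, alpha, size=k)           # break proportions
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas[:-1])))
    return betas * remaining                        # weight = break * stick left
```

Nesting one such process per tree node (one stick deciding "stop here vs. descend", another choosing among children) yields the unbounded-width, unbounded-depth trees the paper works with.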


Revisiting Evolutionary Algorithms with On-the-Fly Population Size Adjustment

Feb 15, 2006
Fernando G. Lobo, Claudio F. Lima

In an evolutionary algorithm, the population has a very important role, as its size has direct implications for solution quality, speed, and reliability. Theoretical studies have been done in the past to investigate the role of population sizing in evolutionary algorithms. In addition to those studies, several self-adjusting population sizing mechanisms have been proposed in the literature. This paper revisits the latter topic and pays special attention to the genetic algorithm with adaptive population size (APGA), which several researchers have claimed to be very effective at autonomously (re)sizing the population. In contrast to those previous claims, this paper suggests the complete opposite view. Specifically, it shows that APGA is not capable of adapting the population size at all. This claim is supported on theoretical grounds and confirmed by computer simulations.

* Also UALG-ILAB Report No. 200602 
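The paper's claim can be illustrated with a toy simulation of APGA-style bookkeeping: individuals carry a remaining lifetime that is decremented each generation, and a fixed number of offspring enter with the maximum lifetime. Selection and fitness are deliberately omitted here; this is a sketch of the size dynamics only, not APGA itself:

```python
def apga_size_dynamics(init_size, max_lifetime, generations):
    """Track population size when each generation removes individuals
    whose lifetime has expired, decrements the rest, and inserts one
    offspring with the maximum lifetime."""
    lifetimes = [max_lifetime] * init_size
    sizes = []
    for _ in range(generations):
        lifetimes = [t - 1 for t in lifetimes if t > 1]  # age and expire
        lifetimes.append(max_lifetime)                    # one new offspring
        sizes.append(len(lifetimes))
    return sizes
```

Regardless of the initial size, the population settles at a constant determined by the lifetime bound and the reproduction rate, illustrating (in caricature) why such a mechanism cannot be said to adapt the population size to the problem.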


Centering in Japanese Discourse

Sep 24, 1996
Marilyn Walker, Masayo Iida, Sharon Cote

In this paper we propose a computational treatment of the resolution of zero pronouns in Japanese discourse, using an adaptation of the centering algorithm. We are able to factor language-specific dependencies into one parameter of the centering algorithm. Previous analyses have stipulated that a zero pronoun and its cospecifier must share a grammatical function property such as {\sc Subject} or {\sc NonSubject}. We show that this property-sharing stipulation is unneeded. In addition we propose the notion of {\sc topic ambiguity} within the centering framework, which predicts some ambiguities that occur in Japanese discourse. This analysis has implications for the design of language-independent discourse modules for Natural Language systems. The centering algorithm has been implemented in an HPSG Natural Language system with both English and Japanese grammars.

* COLING90: Proceedings 13th International Conference on Computational Linguistics, Helsinki 
* 7 pages, uses twocolumn 
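The centering framework the paper adapts classifies the transition between utterances from the previous and new backward-looking centers (Cb) and the new preferred center (Cp). A minimal sketch of the standard transition typology:

```python
def transition(cb_prev, cb_new, cp_new):
    """Classify a centering transition between two utterances.
    cb_prev may be None when the previous Cb is undefined."""
    if cb_prev is None or cb_new == cb_prev:
        return "CONTINUE" if cb_new == cp_new else "RETAIN"
    return "SMOOTH-SHIFT" if cb_new == cp_new else "ROUGH-SHIFT"
```

Resolving a Japanese zero pronoun then amounts to choosing the antecedent whose resulting transition ranks highest (CONTINUE over RETAIN over the shifts), with the language-specific part confined to how the forward-looking centers are ranked.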


Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations

Feb 14, 2022
Anna Hedström, Leander Weber, Dilyara Bareeva, Franz Motzkus, Wojciech Samek, Sebastian Lapuschkin, Marina M. -C. Höhne

The evaluation of explanation methods is a research topic that has not yet been explored deeply; however, since explainability is supposed to strengthen trust in artificial intelligence, it is necessary to systematically review and compare explanation methods in order to confirm their correctness. Until now, no tool has existed that exhaustively and speedily allows researchers to quantitatively evaluate explanations of neural network predictions. To increase transparency and reproducibility in the field, we therefore built Quantus, a comprehensive, open-source toolkit in Python that includes a growing, well-organised collection of evaluation metrics and tutorials for evaluating explanation methods. The toolkit has been thoroughly tested and is available under an open-source license on PyPI (or at https://github.com/understandable-machine-intelligence-lab/quantus/).

* 4 pages, 1 figure, 1 table 
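One representative of the kind of metric such a toolkit collects is pixel flipping, a faithfulness test: remove the most-attributed features first and track the model's score; a faster drop indicates a more faithful explanation. The sketch below uses a hypothetical `model` callable mapping an input array to a scalar score; it illustrates the metric, not Quantus's actual API:

```python
import numpy as np

def pixel_flipping_curve(model, x, attribution, steps=10, baseline=0.0):
    """Flip features from most- to least-attributed, recording the
    model score after each chunk of flips."""
    order = np.argsort(attribution.ravel())[::-1]   # most relevant first
    flipped = x.ravel().astype(float).copy()
    scores = [model(flipped.reshape(x.shape))]
    chunk = max(1, len(order) // steps)
    for i in range(0, len(order), chunk):
        flipped[order[i:i + chunk]] = baseline
        scores.append(model(flipped.reshape(x.shape)))
    return scores
```

Comparing such curves across explanation methods, as a toolkit of standardised metrics makes possible, is what turns "this explanation looks plausible" into a quantitative claim.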


Max and Coincidence Neurons in Neural Networks

Oct 04, 2021
Albert Lee, Kang L. Wang

Network design has been a central topic in machine learning. Large amounts of effort have been devoted to creating efficient architectures, through manual exploration as well as automated neural architecture search. However, today's architectures have yet to consider the diversity of neurons and the existence of neurons with specific processing functions. In this work, we optimize networks containing models of the max and coincidence neurons using neural architecture search, and analyze the structure, operations, and neurons of optimized networks to develop a signal-processing ResNet. The developed network achieves an average 2% improvement in accuracy and a 25% reduction in network size across a variety of datasets, demonstrating the importance of neuronal functions in creating compact, efficient networks.
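The two neuron types can be sketched in contrast to the usual weighted-sum unit: a max neuron responds to its strongest weighted input, while a coincidence neuron responds to how many weighted inputs are simultaneously active. These are simplified models for illustration; the paper's exact formulations may differ:

```python
import numpy as np

def max_neuron(x, w):
    """Respond to the single strongest weighted input (cf. a
    sum neuron, which would return (w * x).sum())."""
    return np.max(w * x)

def coincidence_neuron(x, w, threshold=0.5):
    """Respond to the number of weighted inputs simultaneously
    exceeding a threshold."""
    return np.sum((w * x) > threshold)
```

Including such units in the architecture-search space is what lets the search discover signal-processing structures that sum-only networks would have to approximate with many extra parameters.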



Stain-Robust Mitotic Figure Detection for the Mitosis Domain Generalization Challenge

Sep 29, 2021
Mostafa Jahanifar, Adam Shephard, Neda Zamani Tajeddin, R. M. Saad Bashir, Mohsin Bilal, Syed Ali Khurram, Fayyaz Minhas, Nasir Rajpoot

The detection of mitotic figures from different scanners/sites remains an important topic of research, owing to its potential in assisting clinicians with tumour grading. The MItosis DOmain Generalization (MIDOG) challenge aims to test the robustness of detection models on unseen data from multiple scanners for this task. We present a short summary of the approach employed by the TIA Centre team to address this challenge. Our approach is based on a hybrid detection model, where mitotic candidates are segmented on stain-normalised images before being refined by a deep learning classifier. Cross-validation on the training images achieved an F1-score of 0.786, and 0.765 on the preliminary test set, demonstrating the generalizability of our model to unseen data from new scanners.

* MIDOG challenge at MICCAI 2021 
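The two-stage pipeline described above can be sketched as a skeleton in which the stain normaliser, candidate segmenter, and refining classifier are injected as callables. All function arguments here are hypothetical stand-ins for the team's actual models:

```python
def detect_mitoses(image, stain_normalise, segment_candidates, classify,
                   threshold=0.5):
    """Hybrid detection skeleton: segment candidate mitotic figures on a
    stain-normalised image, then keep those the classifier scores above
    a threshold. `segment_candidates` returns candidate centres."""
    norm = stain_normalise(image)
    candidates = segment_candidates(norm)
    return [c for c in candidates if classify(norm, c) >= threshold]
```

Normalising stain appearance before both stages is what gives the pipeline its robustness to scanner and site variation, which is the axis the MIDOG challenge stresses.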

