
"Topic": models, code, and papers

Joint Image-Text News Topic Detection and Tracking with And-Or Graph Representation

Dec 15, 2015
Weixin Li, Jungseock Joo, Hang Qi, Song-Chun Zhu

In this paper, we aim to develop a method for automatically detecting and tracking topics in broadcast news. We present a hierarchical And-Or graph (AOG) to jointly represent the latent structure of both text and visuals. The AOG embeds a context-sensitive grammar that describes the hierarchical composition of news topics in terms of semantic elements (the people involved, the related places, and what happened) and models the contextual relationships between elements in the hierarchy. We detect news topics through a cluster sampling process that groups stories about closely related events. Swendsen-Wang Cuts (SWC), an effective cluster sampling algorithm, is adopted for traversing the solution space and obtaining optimal clustering solutions by maximizing a Bayesian posterior probability. Topics are then tracked across continuously updated news streams, and we generate topic trajectories to show how topics emerge, evolve, and disappear over time. The experimental results show that our method can explicitly describe the textual and visual data in news videos and produce meaningful topic trajectories. Our method achieves superior performance compared to state-of-the-art methods on both the public Reuters-21578 dataset and a self-collected dataset, the UCLA Broadcast News Dataset.
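
The clustering step can be illustrated with a toy Swendsen-Wang Cuts sampler. The sketch below is a heavily simplified, hypothetical instance: one-dimensional "story" features stand in for the AOG representation, a squared-deviation score with a cluster-count penalty stands in for the paper's Bayesian posterior, and the proposal-ratio terms of the full SWC acceptance probability are omitted.

```python
import math
import random

random.seed(0)

# Toy 1-D "story" features forming two obvious groups; they stand in
# for the paper's much richer And-Or graph representation.
data = [0.1, 0.2, 0.15, 0.9, 1.0, 0.95]

def edge_prob(i, j):
    # Turn-on probability for the edge between stories i and j:
    # near 1 for similar stories, near 0 for dissimilar ones.
    return math.exp(-8.0 * abs(data[i] - data[j]))

def log_posterior(labels):
    # Toy stand-in for the Bayesian posterior: clusters should be
    # tight (small squared deviation from the cluster mean) and few.
    score = -2.0 * len(set(labels))
    for c in set(labels):
        members = [data[i] for i, l in enumerate(labels) if l == c]
        mean = sum(members) / len(members)
        score -= 50.0 * sum((x - mean) ** 2 for x in members)
    return score

def swc_step(labels):
    n = len(data)
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    # 1. Stochastically switch on edges between same-labeled stories.
    for i in range(n):
        for j in range(i + 1, n):
            if labels[i] == labels[j] and random.random() < edge_prob(i, j):
                parent[find(i)] = find(j)
    # 2. Pick one connected component and relabel it as a whole.
    root = find(random.randrange(n))
    proposal = list(labels)
    new_label = random.randrange(n)  # may create or merge clusters
    for i in range(n):
        if find(i) == root:
            proposal[i] = new_label
    # 3. Metropolis acceptance on the posterior ratio (the full SWC
    # acceptance ratio also has proposal terms, omitted here).
    delta = log_posterior(proposal) - log_posterior(labels)
    if random.random() < math.exp(min(0.0, delta)):
        return proposal
    return labels

labels = [0] * len(data)
best = labels
for _ in range(500):
    labels = swc_step(labels)
    if log_posterior(labels) > log_posterior(best):
        best = labels
print(best)
```

The key SWC idea survives the simplification: edges between similar, same-labeled items are switched on at random, and a whole connected component is relabeled in one move, which mixes much faster than flipping one item at a time.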

Investigating Diffusion of Scientific Knowledge on Twitter: A Study of Topic Networks of Opioid Publications

Jan 27, 2021
Robin Haunschild, Lutz Bornmann, Devendra Potnis, Iman Tahamtan

One way to assess the value of scientific research is to measure the attention it receives on social media. While previous research has mostly focused on the "number of mentions" of scientific research on social media, the current study uses "topic networks" to measure public attention to scientific research on Twitter. Topic networks in this research are the "co-occurring author keywords" in scientific publications and the "co-occurring hashtags" in the tweets mentioning scientific publications. Since bots (automated social media accounts) may significantly influence public attention, this study also investigates whether the topic networks based on the tweets by all accounts (bot and non-bot accounts) differ from the topic networks by non-bot accounts. Our analysis is based on a set of opioid scientific publications from 2011 to 2019 and the tweets associated with them. We use co-occurrence network analysis to generate topic networks. Results indicated that the public has mostly used generic terms to discuss opioid publications. Results confirmed that topic networks provide a legitimate method to visualize public discussions of (health-related) scientific publications, and how the public discusses (health-related) scientific research differently from the scientific community. There was a significant overlap between the topic networks based on the tweets by all accounts and non-bot accounts. This result indicates that in generating topic networks, bot accounts do not need to be excluded as they have negligible impact on the results.
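
As a minimal illustration of the co-occurrence construction (with toy hashtags, not the study's data), a topic network can be built by counting how often two hashtags appear in the same tweet:

```python
from collections import Counter
from itertools import combinations

# Toy tweets, each reduced to its set of hashtags.
tweets = [
    {"opioids", "addiction", "health"},
    {"opioids", "addiction"},
    {"opioids", "health"},
    {"fentanyl", "opioids"},
]

# Edge weight = number of tweets in which two hashtags co-occur.
edges = Counter()
for tags in tweets:
    for a, b in combinations(sorted(tags), 2):
        edges[(a, b)] += 1

for (a, b), w in edges.most_common(3):
    print(a, b, w)
```

The resulting weighted edge list is exactly the input a network-visualization or community-detection tool would take; the same construction applies to co-occurring author keywords in publications.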

A Poisson convolution model for characterizing topical content with word frequency and exclusivity

Jul 28, 2014
Edoardo M Airoldi, Jonathan M Bischof

An ongoing challenge in the analysis of document collections is how to summarize content in terms of a set of inferred themes that can be interpreted substantively in terms of topics. The current practice of parametrizing the themes in terms of most frequent words limits interpretability by ignoring the differential use of words across topics. We argue that words that are both common and exclusive to a theme are more effective at characterizing topical content. We consider a setting where professional editors have annotated documents with categories from a collection of topic categories organized into a tree, in which leaf nodes correspond to the most specific topics. Each document is annotated with multiple categories, at different levels of the tree. We introduce a hierarchical Poisson convolution model to analyze annotated documents in this setting. The model leverages the structure among categories defined by professional editors to infer a clear semantic description for each topic in terms of words that are both frequent and exclusive. We carry out a large randomized experiment on Amazon Mechanical Turk to demonstrate that topic summaries based on the FREX score are more interpretable than currently established frequency-based summaries, and that the proposed model produces more efficient estimates of exclusivity than current models. We also develop a parallelized Hamiltonian Monte Carlo sampler that allows the inference to scale to millions of documents.

* Originally appeared in ICML 2012 
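
The frequency-and-exclusivity idea can be sketched as follows. This is a hypothetical toy version of a FREX-style score, not the paper's estimator: exclusivity is approximated by p(topic | word), both quantities are converted to empirical-CDF ranks, and the score is their weighted harmonic mean; the counts and the 0.5 weight are illustrative.

```python
# Toy word counts per topic over a shared vocabulary.
vocab = ["the", "ball", "game", "vote", "law"]
counts = {
    "sports":   [50, 35, 30, 1, 1],
    "politics": [48, 1, 2, 25, 30],
}

def ecdf_rank(values, v):
    # Fraction of values less than or equal to v (empirical CDF).
    return sum(x <= v for x in values) / len(values)

def frex(topic, w=0.5):
    row = counts[topic]
    total = [sum(counts[t][i] for t in counts) for i in range(len(vocab))]
    freq = [c / sum(row) for c in row]                     # within-topic frequency
    excl = [row[i] / total[i] for i in range(len(vocab))]  # approx. p(topic | word)
    scores = {}
    for i, word in enumerate(vocab):
        f = ecdf_rank(freq, freq[i])
        e = ecdf_rank(excl, excl[i])
        scores[word] = 1.0 / (w / e + (1 - w) / f)  # weighted harmonic mean
    return scores

s = frex("sports")
top = sorted(s, key=s.get, reverse=True)
print(top[:3])
```

Note how "the" is very frequent but not exclusive, so the harmonic mean demotes it below "ball", which scores well on both axes; a pure frequency ranking would have put the stopword first.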

Topic Segmentation and Labeling in Asynchronous Conversations

Feb 04, 2014
Shafiq Rayhan Joty, Giuseppe Carenini, Raymond T Ng

Topic segmentation and labeling is often considered a prerequisite for higher-level conversation analysis and has been shown to be useful in many Natural Language Processing (NLP) applications. We present two new corpora of email and blog conversations annotated with topics, and evaluate annotator reliability for the segmentation and labeling tasks in these asynchronous conversations. We propose a complete computational framework for topic segmentation and labeling in asynchronous conversations. Our approach extends state-of-the-art methods by considering a fine-grained structure of an asynchronous conversation, along with other conversational features by applying recent graph-based methods for NLP. For topic segmentation, we propose two novel unsupervised models that exploit the fine-grained conversational structure, and a novel graph-theoretic supervised model that combines lexical, conversational and topic features. For topic labeling, we propose two novel (unsupervised) random walk models that respectively capture conversation specific clues from two different sources: the leading sentences and the fine-grained conversational structure. Empirical evaluation shows that the segmentation and the labeling performed by our best models beat the state-of-the-art, and are highly correlated with human annotations.

* Journal Of Artificial Intelligence Research, Volume 47, pages 521-573, 2013 
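
A minimal sketch of the random-walk idea for labeling (not the paper's exact models): a walk over a sentence similarity graph with restarts biased toward the leading sentence, so highly ranked sentences combine lexical centrality with the leading-sentence clue. The graph weights and restart vector are toy values.

```python
# Toy sentence similarity graph (symmetric weights); sentence 0 is the
# conversation's leading sentence, so the walk restarts there.
sims = [
    [0.0, 0.6, 0.1],
    [0.6, 0.0, 0.5],
    [0.1, 0.5, 0.0],
]
restart = [1.0, 0.0, 0.0]   # bias toward the leading sentence
alpha = 0.85                # walk vs. restart mixture
n = len(sims)

# Power iteration on the restart-biased random walk.
rank = [1.0 / n] * n
for _ in range(50):
    new = []
    for j in range(n):
        walk = sum(rank[i] * sims[i][j] / sum(sims[i]) for i in range(n))
        new.append(alpha * walk + (1 - alpha) * restart[j])
    rank = new
print(rank)
```

The stationary scores can then be used to pick the sentences (or their keywords) that label a topical segment.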

Minimum Volume Topic Modeling

Apr 03, 2019
Byoungwook Jang, Alfred Hero

We propose a new topic modeling procedure that takes advantage of the fact that the Latent Dirichlet Allocation (LDA) log likelihood function is asymptotically equivalent to the logarithm of the volume of the topic simplex. This allows topic modeling to be reformulated as finding the probability simplex that minimizes its volume and encloses the documents that are represented as distributions over words. A convex relaxation of the minimum volume topic model optimization is proposed, and it is shown that the relaxed problem has the same global minimum as the original problem under the separability assumption and the sufficiently scattered assumption introduced by Arora et al. (2013) and Huang et al. (2016). A locally convergent alternating direction method of multipliers (ADMM) approach is introduced for solving the relaxed minimum volume problem. Numerical experiments illustrate the benefits of our approach in terms of computation time and topic recovery performance.

* Accepted in AISTATS 2019 
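
The volume being minimized can be made concrete. For topic vectors β_1, …, β_K on the vocabulary simplex, the simplex they span has volume sqrt(det(G))/(K−1)!, where G is the Gram matrix of the difference vectors d_k = β_k − β_K. A toy computation for K = 3 (the topic matrices are illustrative):

```python
import math

# Three toy topics over a 3-word vocabulary (rows sum to 1).
topics = [
    [0.8, 0.1, 0.1],
    [0.1, 0.8, 0.1],
    [0.1, 0.1, 0.8],
]

def simplex_log_volume(B):
    # log V = 0.5 * log det(G) - log (K-1)!, with G the Gram matrix
    # of the difference vectors d_k = beta_k - beta_K.
    K = len(B)
    D = [[B[k][j] - B[K - 1][j] for j in range(len(B[0]))] for k in range(K - 1)]
    G = [[sum(a * b for a, b in zip(D[i], D[j])) for j in range(K - 1)]
         for i in range(K - 1)]
    det = G[0][0] * G[1][1] - G[0][1] * G[1][0]  # 2x2 closed form
    return 0.5 * math.log(det) - math.log(math.factorial(K - 1))

wide = simplex_log_volume(topics)
narrow = simplex_log_volume([
    [0.5, 0.3, 0.2],
    [0.3, 0.5, 0.2],
    [0.3, 0.3, 0.4],
])
print(wide, narrow)  # more separated topics span a larger simplex
```

Minimizing this volume subject to the documents lying inside the simplex is what the paper's convex relaxation and ADMM solver address at scale.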

Modeling Topical Coherence in Discourse without Supervision

Sep 02, 2018
Disha Shrivastava, Abhijit Mishra, Karthik Sankaranarayanan

Coherence of text is an important attribute to be measured for both manually and automatically generated discourse, but well-defined quantitative metrics for it are still elusive. In this paper, we present a metric for scoring the topical coherence of an input paragraph on a real-valued scale by analyzing its underlying topical structure. We first extract all possible topics that the sentences of a paragraph are related to. Coherence of the text is then measured by computing: (a) the degree of uncertainty of the topics with respect to the paragraph, and (b) the relatedness between these topics. All components of our modular framework rely only on unlabeled data and WordNet, making it completely unsupervised, which is an important feature for general-purpose usage of any metric. Experiments are conducted on two datasets: a publicly available dataset for essay grading (representing human discourse), and a synthetic dataset constructed by mixing content from multiple paragraphs covering diverse topics. Our evaluation shows that the measured coherence scores are positively correlated with the ground truth for both datasets. Further validation of our coherence scores is provided by human evaluation on the synthetic data, showing a significant agreement of 79.3%.

* 9 pages 
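
A minimal sketch of combining the two components, (a) topic uncertainty and (b) topic relatedness. The equal 0.5/0.5 mixing and the toy numbers are assumptions for illustration, not the paper's formulation:

```python
import math

def coherence(topic_probs, relatedness):
    # (a) Uncertainty: normalized Shannon entropy of the topic mixture.
    h = -sum(p * math.log(p) for p in topic_probs if p > 0)
    h_norm = h / math.log(len(topic_probs)) if len(topic_probs) > 1 else 0.0
    # (b) Relatedness: mean pairwise similarity between the topics
    # (e.g. a WordNet-based similarity in the paper's setting).
    pairs = [(i, j) for i in range(len(topic_probs))
             for j in range(i + 1, len(topic_probs))]
    rel = sum(relatedness[i][j] for i, j in pairs) / len(pairs)
    # Coherent text: low topic uncertainty AND related topics.
    return (1 - h_norm) * 0.5 + rel * 0.5

# A focused paragraph vs. a paragraph mixing unrelated topics.
focused = coherence([0.9, 0.05, 0.05],
                    [[1, .8, .7], [.8, 1, .75], [.7, .75, 1]])
mixed = coherence([1 / 3, 1 / 3, 1 / 3],
                  [[1, .1, .2], [.1, 1, .15], [.2, .15, 1]])
print(focused, mixed)
```

A paragraph dominated by one topic (or by several closely related topics) scores high; an even mixture of unrelated topics scores low, matching the synthetic-mixing evaluation described above.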

One Configuration to Rule Them All? Towards Hyperparameter Transfer in Topic Models using Multi-Objective Bayesian Optimization

Feb 15, 2022
Silvia Terragni, Ismail Harrando, Pasquale Lisena, Raphael Troncy, Elisabetta Fersini

Topic models are statistical methods that extract underlying topics from document collections. When performing topic modeling, a user usually desires topics that are coherent, mutually diverse, and that constitute good document representations for downstream tasks (e.g. document classification). In this paper, we conduct a multi-objective hyperparameter optimization of three well-known topic models. The obtained results reveal the conflicting nature of the different objectives and show that the training corpus characteristics are crucial for hyperparameter selection, suggesting that it is possible to transfer optimal hyperparameter configurations between datasets.

* 17 pages, 7 figures 
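
With conflicting objectives, the optimizer's output is naturally a Pareto front rather than a single winner. A toy sketch of extracting that front (the configuration names and scores are hypothetical):

```python
# Toy hyperparameter configurations scored on two objectives
# (topic coherence and topic diversity), both to be maximized.
configs = {
    "lda_k10_a0.1": (0.45, 0.90),
    "lda_k50_a0.1": (0.55, 0.70),
    "lda_k50_a1.0": (0.50, 0.65),   # dominated by lda_k50_a0.1
    "lda_k100_a0.1": (0.60, 0.40),
}

def pareto_front(scored):
    # A configuration is kept unless some other configuration is at
    # least as good on both objectives and strictly better on one.
    front = []
    for name, (c, d) in scored.items():
        dominated = any(c2 >= c and d2 >= d and (c2, d2) != (c, d)
                        for n2, (c2, d2) in scored.items() if n2 != name)
        if not dominated:
            front.append(name)
    return front

front = pareto_front(configs)
print(front)
```

The front makes the coherence-diversity trade-off explicit: the user then picks a point on it according to the downstream task.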

"Draw My Topics": Find Desired Topics fast from large scale of Corpus

Feb 03, 2016
Jason Dou, Ni Sun, Xiaojun Zou

We develop the "Draw My Topics" toolkit, which provides a fast way to incorporate social scientists' interests into standard topic modelling. Instead of using a raw corpus with primitive processing as input, an algorithm based on the Vector Space Model and conditional entropy is used to connect social scientists' interests with the output of unsupervised topic models. Space for users' adjustments on the specific corpus of their interest is also accommodated. We demonstrate the toolkit's use on the Diachronic People's Daily Corpus in Chinese.
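
A minimal sketch of the matching step under a Vector Space Model: the user's seed words are compared against each topic's word distribution by cosine similarity. The toy topics and the omission of the conditional-entropy component are simplifications.

```python
import math

# Toy topic-word weights from an unsupervised topic model (hypothetical).
topics = {
    "topic_0": {"economy": 0.4, "market": 0.3, "trade": 0.3},
    "topic_1": {"school": 0.5, "student": 0.3, "teacher": 0.2},
}

def cosine(u, v):
    # Cosine similarity between two sparse word-weight vectors.
    keys = set(u) | set(v)
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in keys)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv)

# The social scientist "draws" a desired topic with a few seed words.
seed = {"market": 1.0, "trade": 1.0}
ranked = sorted(topics, key=lambda t: cosine(seed, topics[t]), reverse=True)
print(ranked[0])
```

Ranking the model's topics against the seed vector is what lets a user find the desired topic quickly instead of scanning every inferred topic by hand.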

Contrastive Learning for Neural Topic Model

Oct 25, 2021
Thong Nguyen, Anh Tuan Luu

Recent empirical studies show that adversarial topic models (ATM) can successfully capture the semantic patterns of a document by differentiating it from a dissimilar sample. However, this discriminative-generative architecture has two important drawbacks: (1) it does not relate similar documents, which share similar distributions of salient words; and (2) it restricts the ability to integrate external information, such as document sentiment, which has been shown to benefit the training of neural topic models. To address these issues, we revisit the adversarial topic architecture from the viewpoint of mathematical analysis, propose a novel approach to reformulate the discriminative goal as an optimization problem, and design a novel sampling method that facilitates the integration of external variables. The reformulation encourages the model to incorporate the relations among similar samples and enforces a constraint on the similarity among dissimilar ones, while the sampling method, which is based on the internal input and reconstructed output, helps inform the model of the salient words contributing to the main topic. Experimental results show that our framework outperforms other state-of-the-art neural topic models in topic coherence on three common benchmark datasets spanning various domains, vocabulary sizes, and document lengths.

* 15 pages, 8 tables, 5 figures, published at Advances in Neural Information Processing Systems (NeurIPS), 2021 
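
The contrastive intuition can be sketched with an InfoNCE-style loss, a standard contrastive objective (not necessarily the paper's exact formulation); the toy vectors stand in for document representations:

```python
import math

def info_nce(anchor, positive, negatives, tau=0.5):
    # Contrastive objective: pull the anchor representation toward a
    # similar ("positive") sample and away from dissimilar ones.
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        return dot / (math.sqrt(sum(a * a for a in u)) *
                      math.sqrt(sum(b * b for b in v)))
    logits = [cos(anchor, positive) / tau] + \
             [cos(anchor, n) / tau for n in negatives]
    z = sum(math.exp(l) for l in logits)
    return -math.log(math.exp(logits[0]) / z)

anchor = [1.0, 0.0, 0.2]
good_pos = [0.9, 0.1, 0.2]   # a genuinely similar document
bad_pos = [0.0, 1.0, 0.0]    # a dissimilar document misused as "positive"
negs = [[0.1, 0.9, 0.0], [0.0, 0.8, 0.6]]

loss_good = info_nce(anchor, good_pos, negs)
loss_bad = info_nce(anchor, bad_pos, negs)
print(loss_good, loss_bad)
```

The loss is small when the chosen positive really resembles the anchor and large otherwise, which is why a sampling method that reliably finds similar and dissimilar documents matters so much for training.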

Quantifying the dynamics of topical fluctuations in language

Jun 13, 2018
Andres Karjus, Richard A. Blythe, Simon Kirby, Kenny Smith

The availability of large diachronic corpora has provided the impetus for a growing body of quantitative research on language evolution and meaning change. The central quantities in this research are the token frequencies of linguistic elements in the texts, with changes in frequency taken to reflect the popularity or selective fitness of an element. However, corpus frequencies may change for a wide variety of reasons, including purely random sampling effects, or because corpora are composed of contemporary media and fiction texts within which the underlying topics ebb and flow with cultural and socio-political trends. In this work, we introduce a computationally simple model for controlling for topical fluctuations in corpora - the topical-cultural advection model - and demonstrate how it provides a robust baseline of variability in word frequency changes over time. We validate the model on a diachronic corpus spanning two centuries and on a carefully controlled artificial language change scenario, and then use it to correct for topical fluctuations in historical time series. Finally, we use the model to show that the emergence of new words typically coincides with the rise of a trending topic. This suggests that some lexical innovations occur due to growing communicative need in a subspace of the lexicon, and that the topical-cultural advection model can be used to quantify this.

* Code to run the analyses described in this paper is now available at . A previous shorter version of this paper outlining the basic model appeared as an extended abstract in the proceedings of the Society for Computation in Linguistics (Karjus et al. 2018, Topical advection as a baseline model for corpus-based lexical dynamics) 
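
The advection baseline can be sketched as a weighted mean of the frequency changes of a word's topical context. The numbers below are toy values; the real model computes association weights (e.g. PPMI-style scores) and frequency changes from a diachronic corpus.

```python
# Toy log fold changes in frequency (between two periods) for the
# context words most associated with a target word, plus association
# weights; all values are illustrative.
context_change = {"engine": 0.8, "steam": 0.5, "rail": 0.6, "horse": -0.4}
weights = {"engine": 3.0, "steam": 2.0, "rail": 2.0, "horse": 1.0}

def advection(changes, w):
    # Weighted mean of the frequency changes of the word's context:
    # a positive value means the word's topic is on the rise.
    total = sum(w[c] for c in changes)
    return sum(changes[c] * w[c] for c in changes) / total

adv = advection(context_change, weights)
print(round(adv, 3))
```

Subtracting this advection value from a word's observed frequency change leaves the residual change that is not explained by its topic trending, which is the correction applied to historical time series above.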
