
"Topic": models, code, and papers

Quantifying Intimacy in Language

Nov 05, 2020
Jiaxin Pei, David Jurgens

Intimacy is a fundamental aspect of how we relate to others in social settings. Language encodes the social information of intimacy through both topics and other more subtle cues (such as linguistic hedging and swearing). Here, we introduce a new computational framework for studying expressions of intimacy in language, with an accompanying dataset and deep learning model for accurately predicting the intimacy level of questions (Pearson's r=0.87). Analyzing a dataset of 80.5M questions across social media, books, and films, we show that individuals employ interpersonal pragmatic moves in their language to align their intimacy with social settings. Then, in three studies, we further demonstrate how individuals modulate their intimacy to match social norms around gender, social distance, and audience, each validating key findings from studies in social psychology. Our work demonstrates that intimacy is a pervasive and impactful social dimension of language.

* EMNLP 2020 
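As a rough illustration of the task framing (not the authors' released model), intimacy prediction can be cast as text regression: encode each question as a feature vector and fit a scalar-output regressor against human ratings. The bag-of-words baseline below is only a sketch; the example questions and scores are invented placeholders.

```python
# Minimal sketch: intimacy prediction as text regression.
# An illustrative bag-of-words baseline, not the paper's deep model;
# the questions and ratings below are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

train_questions = [
    "What's your favorite movie?",
    "Do you ever feel lonely at night?",
    "What time does the store open?",
]
train_scores = [0.2, 0.9, 0.0]  # human-rated intimacy, higher = more intimate

vectorizer = TfidfVectorizer(ngram_range=(1, 2))
X_train = vectorizer.fit_transform(train_questions)
regressor = Ridge(alpha=1.0).fit(X_train, train_scores)

# Predict an intimacy score for a new question; the paper evaluates such
# predictions against gold ratings with Pearson's r.
score = regressor.predict(vectorizer.transform(["Do you trust your friends?"]))
print(float(score[0]))
```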


Global Attention for Name Tagging

Oct 19, 2020
Boliang Zhang, Spencer Whitehead, Lifu Huang, Heng Ji

Many name tagging approaches use local contextual information with much success, but fail when the local context is ambiguous or limited. We present a new framework to improve name tagging by utilizing local, document-level, and corpus-level contextual information. We retrieve document-level context from other sentences within the same document and corpus-level context from sentences in other topically related documents. We propose a model that learns to incorporate document-level and corpus-level contextual information alongside local contextual information via global attentions, which dynamically weight their respective contextual information, and gating mechanisms, which determine the influence of this information. Extensive experiments on benchmark datasets show the effectiveness of our approach, which achieves state-of-the-art results for Dutch, German, and Spanish on the CoNLL-2002 and CoNLL-2003 datasets.
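A minimal sketch of the gating idea described above, assuming a simple sigmoid gate over concatenated local and attended-global representations (the layer shapes and names are assumptions, not the paper's exact architecture):

```python
import torch
import torch.nn as nn

class ContextGate(nn.Module):
    """Fuse a local hidden state with an attended context vector via a
    learned sigmoid gate (sketch of the gating idea, not the exact model)."""

    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, local_h: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        # g in (0, 1) decides, per dimension, how much global context flows in.
        g = torch.sigmoid(self.gate(torch.cat([local_h, context], dim=-1)))
        return g * local_h + (1.0 - g) * context

# Usage: both inputs are (batch, seq_len, dim); `context` would come from
# attention over document-level or corpus-level supporting sentences.
fuse = ContextGate(dim=256)
fused = fuse(torch.randn(2, 10, 256), torch.randn(2, 10, 256))
```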



Lexicon generation for detecting fake news

Oct 16, 2020
Uğur Mertoğlu, Burkay Genç

With the digitization of media, an immense amount of news data has been generated by online sources, including mainstream media outlets as well as social networks. However, the ease of production and distribution has also resulted in the circulation of fake news alongside credible, authentic news. The pervasive dissemination of fake news has extremely negative impacts on individuals and society. Therefore, fake news detection has recently emerged as an interdisciplinary research topic attracting significant attention from many disciplines, including the social sciences and linguistics. In this study, we propose a method primarily based on lexicons, including a scoring system, to facilitate the detection of fake news in Turkish. We contribute to the literature by collecting a novel, large-scale, and credible dataset of Turkish news, and by constructing the first fake news detection lexicon for Turkish.
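As a hedged sketch of how a lexicon paired with a scoring system can flag suspicious text (the entries, weights, and threshold below are invented placeholders, not the paper's Turkish lexicon):

```python
# Sketch of a lexicon-with-scoring approach to fake news detection.
# Entries, weights, and the threshold are illustrative assumptions only.
fake_cues = {"şok": 1.5, "inanılmaz": 1.2, "mucize": 1.8}  # fake-leaning terms
credible_cues = {"açıklandı": 0.8, "raporu": 0.6}          # credible-leaning terms

def lexicon_score(text: str) -> float:
    tokens = text.lower().split()
    fake = sum(fake_cues.get(t, 0.0) for t in tokens)
    credible = sum(credible_cues.get(t, 0.0) for t in tokens)
    # Length normalization keeps long articles comparable to short ones.
    return (fake - credible) / max(len(tokens), 1)

def looks_fake(text: str, threshold: float = 0.05) -> bool:
    return lexicon_score(text) > threshold
```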



Lazy Greedy Hypervolume Subset Selection from Large Candidate Solution Sets

Jul 04, 2020
Weiyu Chen, Hisao Ishibuchi, Ke Shang

Subset selection has been a popular topic in recent years, and a number of subset selection methods have been proposed. Among these methods, hypervolume subset selection is widely used. Greedy hypervolume subset selection algorithms can achieve good approximations to the optimal subset. However, when the candidate set is large (e.g., an unbounded external archive with a large number of solutions), the algorithm is very time-consuming. In this paper, we propose a new lazy greedy algorithm that exploits the submodular property of the hypervolume indicator. The core idea is to avoid unnecessary hypervolume contribution calculations when finding the solution with the largest contribution. Experimental results show that the proposed algorithm is hundreds of times faster than the original greedy inclusion algorithm and several times faster than the fastest known greedy inclusion algorithm on many test problems.

* Accepted by CEC 2020 
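The lazy evaluation rests on submodularity: a solution's marginal hypervolume contribution can only shrink as the selected subset grows, so gains cached in earlier iterations remain valid upper bounds. A generic sketch of this lazy greedy pattern (the hypervolume contribution itself is abstracted behind a `contribution` callback, since computing it is the paper's domain; a toy coverage function stands in below):

```python
import heapq

def lazy_greedy_select(candidates, k, contribution):
    """Greedy subset selection with lazy gain evaluation.
    contribution(item, selected) must be submodular: it can only decrease
    as `selected` grows, so cached gains are valid upper bounds."""
    selected = []
    # Max-heap via negated gains; initial gains are w.r.t. the empty subset.
    heap = [(-contribution(c, []), c) for c in candidates]
    heapq.heapify(heap)
    while heap and len(selected) < k:
        _, item = heapq.heappop(heap)
        gain = contribution(item, selected)  # refresh against current subset
        if not heap or -gain <= heap[0][0]:
            # Refreshed gain still beats every cached upper bound: select it
            # without recomputing the other candidates' contributions.
            selected.append(item)
        else:
            # Otherwise push back with the updated (smaller) gain and retry.
            heapq.heappush(heap, (-gain, item))
    return selected

# Toy usage with a submodular coverage function standing in for
# hypervolume contribution:
sets = {"a": {1, 2}, "b": {2, 3}, "c": {3, 4, 5}}

def coverage_gain(item, selected):
    covered = set().union(*(sets[s] for s in selected)) if selected else set()
    return len(sets[item] - covered)

print(lazy_greedy_select(list(sets), k=2, contribution=coverage_gain))  # ['c', 'a']
```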


Unsupervised Domain Clusters in Pretrained Language Models

May 01, 2020
Roee Aharoni, Yoav Goldberg

The notion of "in-domain data" in NLP is often over-simplistic and vague, as textual data varies in many nuanced linguistic aspects such as topic, style, or level of formality. In addition, domain labels are often unavailable, making it challenging to build domain-specific systems. We show that massive pre-trained language models implicitly learn sentence representations that cluster by domain without supervision, suggesting a simple data-driven definition of domains in textual data. We harness this property and propose domain data selection methods based on such models, which require only a small set of in-domain monolingual data. We evaluate our data selection methods for neural machine translation across five diverse domains, where they outperform an established approach as measured both by BLEU and by precision and recall of sentence selection with respect to an oracle.

* Accepted as a long paper in ACL 2020 
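A hedged sketch of the clustering step: embed sentences with a pretrained model, then fit a Gaussian mixture with one component per expected domain. The embedding computation is abstracted away here, and the PCA step is a common practical choice rather than a claim about the paper's exact pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

# sentence_vectors: (n_sentences, hidden_dim) embeddings from a pretrained
# LM, e.g., mean-pooled hidden states; random data stands in for them here.
rng = np.random.default_rng(0)
sentence_vectors = rng.normal(size=(1000, 768))

# Reduce dimensionality before clustering (a practical step, assumed here).
reduced = PCA(n_components=50).fit_transform(sentence_vectors)

# One mixture component per expected domain (five domains in the MT setup).
gmm = GaussianMixture(n_components=5, covariance_type="full", random_state=0)
domain_labels = gmm.fit_predict(reduced)
```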


Recent Advances and Challenges in Task-oriented Dialog Systems

Mar 19, 2020
Zheng Zhang, Ryuichi Takanobu, Minlie Huang, Xiaoyan Zhu

Due to their significance and value for human-computer interaction and natural language processing, task-oriented dialog systems are attracting more and more attention in both the academic and industrial communities. In this paper, we survey recent advances and challenges in an issue-specific manner. We discuss three critical topics for task-oriented dialog systems: (1) improving data efficiency to facilitate dialog system modeling in low-resource settings, (2) modeling multi-turn dynamics for dialog policy learning to achieve better task-completion performance, and (3) integrating domain ontology knowledge into the dialog model in both pipeline and end-to-end models. We also review recent progress in dialog evaluation and some widely used corpora. We believe that this survey can shed light on future research in task-oriented dialog systems.

* Under review of SCIENCE CHINA Technological Science 


TPLVM: Portfolio Construction by Student's $t$-process Latent Variable Model

Jan 29, 2020
Yusuke Uchiyama, Kei Nakagawa

Optimal asset allocation is a key topic in modern finance theory. To realize optimal asset allocation tailored to an investor's risk aversion, various portfolio construction methods have been proposed. Recently, applications of machine learning have been growing rapidly in the area of finance. In this article, we propose the Student's $t$-process latent variable model (TPLVM) to describe non-Gaussian fluctuations of financial time series by lower-dimensional latent variables. Subsequently, we apply the TPLVM to the minimum-variance portfolio as an alternative to existing nonlinear factor models. To test the performance of the proposed portfolio, we construct minimum-variance portfolios of global stock market indices based on either the TPLVM or the Gaussian process latent variable model. Comparing these portfolios, we confirm that the proposed portfolio outperforms the one based on the existing Gaussian process latent variable model.
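For context, the minimum-variance weights that both factor models feed their covariance estimates into have the standard closed form $w = \Sigma^{-1}\mathbf{1} / (\mathbf{1}^\top \Sigma^{-1}\mathbf{1})$. A small numpy sketch (the covariance matrix below is a placeholder, not a TPLVM estimate):

```python
import numpy as np

def min_variance_weights(cov: np.ndarray) -> np.ndarray:
    """Closed-form minimum-variance portfolio: w = C^{-1}1 / (1' C^{-1} 1).
    `cov` would come from the TPLVM (or GPLVM) covariance estimate in the
    paper; here it is just any positive-definite covariance matrix."""
    ones = np.ones(cov.shape[0])
    inv_ones = np.linalg.solve(cov, ones)  # C^{-1} 1 without explicit inverse
    return inv_ones / (ones @ inv_ones)

# Toy example with a 3-asset covariance matrix (placeholder values).
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
w = min_variance_weights(cov)
print(w, w.sum())  # weights sum to 1
```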



Distraction-Aware Feature Learning for Human Attribute Recognition via Coarse-to-Fine Attention Mechanism

Nov 26, 2019
Mingda Wu, Di Huang, Yuanfang Guo, Yunhong Wang

Recently, Human Attribute Recognition (HAR) has become a hot topic due to its scientific challenges and application potential, where localizing attributes is a crucial stage that is not yet well handled. In this paper, we propose a novel deep learning approach to HAR, namely Distraction-aware HAR (Da-HAR). It enhances deep CNN feature learning by improving attribute localization through a coarse-to-fine attention mechanism. At the coarse step, a self-mask block is built to roughly discriminate and reduce distractions, while at the fine step, a masked attention branch is applied to further eliminate irrelevant regions. Thanks to this mechanism, feature learning is more accurate, especially when heavy occlusions and complex backgrounds exist. Extensive experiments are conducted on the WIDER-Attribute and RAP databases, and state-of-the-art results are achieved, demonstrating the effectiveness of the proposed approach.

* 8 pages, 5 figures, accepted by AAAI-20 as an oral presentation 
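A hedged sketch of the coarse "self-mask" idea: predict a spatial mask from the feature map itself and suppress distracting regions before any finer attention is applied. The layer shapes and names below are assumptions, not the paper's exact blocks.

```python
import torch
import torch.nn as nn

class SelfMaskBlock(nn.Module):
    """Coarse distraction suppression: predict a per-location keep
    probability from the features and reweight them (illustrative sketch)."""

    def __init__(self, channels: int):
        super().__init__()
        self.mask_head = nn.Sequential(
            nn.Conv2d(channels, channels // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, 1, kernel_size=1),
            nn.Sigmoid(),  # mask values in (0, 1)
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        mask = self.mask_head(feats)  # (B, 1, H, W)
        return feats * mask           # down-weight distracting regions

block = SelfMaskBlock(channels=256)
out = block(torch.randn(2, 256, 14, 14))
```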


Increasing Expressivity of a Hyperspherical VAE

Oct 07, 2019
Tim R. Davidson, Jakub M. Tomczak, Efstratios Gavves

Learning suitable latent representations for observed, high-dimensional data is an important research topic underlying many recent advances in machine learning. While traditionally the Gaussian distribution has been the go-to latent parameterization, a variety of recent works have successfully proposed the use of manifold-valued latents. In one such work (Davidson et al., 2018), the authors empirically show the potential benefits of using a hyperspherical von Mises-Fisher (vMF) distribution in low dimensionality. However, due to the unique distributional form of the vMF, expressivity in higher-dimensional space is limited by its scalar concentration parameter, leading to a 'hyperspherical bottleneck'. In this work we propose to extend the usability of hyperspherical parameterizations to higher dimensions using a product space instead, showing improved results on a selection of image datasets.

* NeurIPS 2019, in Workshop on Bayesian Deep Learning 
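A minimal sketch of the product-space geometry, assuming the latent vector is split into several sub-vectors that each live on their own unit hypersphere, so each factor can carry its own concentration instead of one scalar governing a single high-dimensional sphere. The vMF reparameterized sampling itself is omitted, and the names and split sizes are assumptions.

```python
import torch

def to_product_of_spheres(z: torch.Tensor, n_factors: int) -> torch.Tensor:
    """Map a flat latent (batch, dim) onto a product of n_factors unit
    hyperspheres, each of dimension dim // n_factors. This sketches only
    the latent geometry; vMF posterior sampling is omitted."""
    batch, dim = z.shape
    assert dim % n_factors == 0, "latent dim must split evenly into factors"
    parts = z.view(batch, n_factors, dim // n_factors)
    parts = parts / parts.norm(dim=-1, keepdim=True)  # project each factor
    return parts.view(batch, dim)

z = to_product_of_spheres(torch.randn(8, 64), n_factors=4)
```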

