
"Topic": models, code, and papers

Weakly-Supervised Methods for Suicide Risk Assessment: Role of Related Domains

Jun 05, 2021
Chenghao Yang, Yudong Zhang, Smaranda Muresan

Social media has become a valuable resource for the study of suicidal ideation and the assessment of suicide risk. Among social media platforms, Reddit has emerged as the most promising one due to its anonymity and its focus on topic-based communities (subreddits) related to mental health, such as r/SuicideWatch, r/Anxiety, and r/depression, which can be indicative of someone's state of mind. A challenge for previous work on suicide risk assessment has been the small amount of labeled data. We propose an empirical investigation into several classes of weakly-supervised approaches, and show that using pseudo-labeling based on related issues around mental health (e.g., anxiety, depression) helps improve model performance for suicide risk assessment.

* ACL 2021 short paper. Code is available at https://github.com/yangalan123/WM-SRA (under construction) 
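The pseudo-labeling idea can be sketched in a few lines: train a seed classifier on the small labeled set, have it label posts from related mental-health communities, and retrain on the confident pseudo-labels. This is a minimal illustration, not the authors' implementation; the toy posts, the scikit-learn model, and the 0.5 confidence threshold are all assumptions for the sketch.

```python
import numpy as np
import scipy.sparse as sp
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Small labeled set for the target task (1 = at-risk, 0 = not at-risk).
labeled_posts = [
    "i cannot see a way out anymore",
    "i keep thinking about ending it",
    "had a great walk and feel hopeful",
    "therapy has been helping me a lot",
]
y = np.array([1, 1, 0, 0])

# Unlabeled posts from related communities (e.g. r/Anxiety, r/depression).
related_posts = [
    "my anxiety keeps me awake all night",
    "some days the depression feels endless",
    "finally slept well after weeks of worry",
]

vec = TfidfVectorizer()
X_all = vec.fit_transform(labeled_posts + related_posts)
X_lab, X_rel = X_all[: len(labeled_posts)], X_all[len(labeled_posts):]

# Step 1: train a seed model on the labeled data only.
seed = LogisticRegression().fit(X_lab, y)

# Step 2: pseudo-label the related-domain posts, keeping confident ones
# (real systems would use a much stricter threshold than 0.5).
proba = seed.predict_proba(X_rel)
confident = proba.max(axis=1) >= 0.5
X_aug = sp.vstack([X_lab, X_rel[confident]])
y_aug = np.concatenate([y, proba.argmax(axis=1)[confident]])

# Step 3: retrain on the augmented (labeled + pseudo-labeled) set.
final = LogisticRegression().fit(X_aug, y_aug)
print(X_aug.shape[0])
```

In practice the seed model, threshold, and pseudo-label filtering are exactly the knobs such weakly-supervised pipelines tune.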


Review on Indoor RGB-D Semantic Segmentation with Deep Convolutional Neural Networks

May 25, 2021
Sami Barchid, José Mennesson, Chaabane Djéraba

Many research works focus on leveraging the complementary geometric information of indoor depth sensors in vision tasks performed by deep convolutional neural networks, notably semantic segmentation. These works deal with a specific vision task known as "RGB-D Indoor Semantic Segmentation". The challenges and resulting solutions of this task differ from those of its standard RGB counterpart, making it an active research topic in its own right. The objective of this paper is to introduce the field of Deep Convolutional Neural Networks for RGB-D Indoor Semantic Segmentation. This review presents the most popular public datasets, proposes a categorization of the strategies employed by recent contributions, evaluates the performance of the current state-of-the-art, and discusses the remaining challenges and promising directions for future work.



Intent detection and slot filling for Vietnamese

Apr 05, 2021
Mai Hoang Dao, Thinh Hung Truong, Dat Quoc Nguyen

Intent detection and slot filling are important tasks in spoken and natural language understanding. However, Vietnamese is a low-resource language in these research topics. In this paper, we present the first public intent detection and slot filling dataset for Vietnamese. In addition, we propose a joint model for intent detection and slot filling that extends the recent state-of-the-art JointBERT+CRF model with an intent-slot attention layer in order to explicitly incorporate intent context information into slot filling via "soft" intent label embedding. Experimental results on our Vietnamese dataset show that our proposed model significantly outperforms JointBERT+CRF. We publicly release our dataset and the implementation of our model at: https://github.com/VinAIResearch/JointIDSF
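The "soft" intent label embedding can be illustrated with a small NumPy sketch: instead of conditioning slot filling on a single hard-predicted intent, the intent probability distribution mixes learnable intent label embeddings, and the mixture is attached to every token representation. All shapes, random values, and the concatenation scheme below are illustrative assumptions, not the JointIDSF architecture itself.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
T, H, num_intents = 6, 8, 3  # tokens, hidden size, intent classes

token_reps = rng.normal(size=(T, H))          # encoder outputs (e.g. BERT)
intent_logits = rng.normal(size=(num_intents,))
intent_label_emb = rng.normal(size=(num_intents, H))  # learnable in practice

# "Soft" intent embedding: probability-weighted mix of label embeddings,
# rather than the embedding of one hard-predicted intent.
p_intent = softmax(intent_logits)
soft_intent = p_intent @ intent_label_emb     # shape (H,)

# Inject intent context into every token before slot classification.
slot_inputs = np.concatenate(
    [token_reps, np.tile(soft_intent, (T, 1))], axis=1
)
print(slot_inputs.shape)  # (6, 16)
```

The soft mixture keeps the pipeline differentiable, so intent and slot losses can be trained jointly end-to-end.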



Estimating California's Solar and Wind Energy Production using Computer Vision Deep Learning Techniques on Weather Images

Mar 15, 2021
Sebastian Bosma, Negar Nazari

In pursuit of a novel forecasting strategy for the energy market, we propose a ResNet-inspired model which estimates solar and wind energy production using weather images. The model is designed to capture high-frequency details while producing realistically smooth energy production profiles. To this end, we show the value of including multiple weather images from times preceding the estimation time, and demonstrate that the model outperforms traditional deep learning techniques and alternative state-of-the-art computer vision methods. Training and testing are performed on a novel dataset that focuses on the state of California and spans the year 2019. The dataset, which is sourced from NOAA and CAISO, is a secondary contribution of this work. Finally, several topics in line with this motivation are proposed for future work.
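One common way to feed "multiple weather images from times preceding the estimation time" into a convolutional model is to stack the frames along the channel axis. The abstract does not specify how the preceding images are combined, so the following is a hypothetical input-preparation sketch with made-up dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, C, T = 32, 32, 3, 4  # image height/width, channels, time steps

# Weather images for the estimation time and the T-1 preceding times.
frames = [rng.random((H, W, C)) for _ in range(T)]

# Stack the frames along the channel axis, giving a single tensor with
# T*C input channels that a ResNet-style CNN can consume directly.
x = np.concatenate(frames, axis=-1)
print(x.shape)  # (32, 32, 12)
```

Alternatives include treating time as a separate axis for a 3D convolution or a recurrent head; channel stacking is simply the cheapest option to sketch.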



Automatic Metaphor Interpretation Using Word Embeddings

Oct 06, 2020
Kfir Bar, Nachum Dershowitz, Lena Dankin

We suggest a model for metaphor interpretation using word embeddings trained over a relatively large corpus. Our system handles nominal metaphors, like "time is money". It generates a ranked list of potential interpretations of given metaphors. Candidate meanings are drawn from collocations of the topic ("time") and vehicle ("money") components, automatically extracted from a dependency-parsed corpus. We explore adding candidates derived from word association norms (common human responses to cues). Our ranking procedure considers similarity between candidate interpretations and metaphor components, measured in a semantic vector space. Lastly, a clustering algorithm removes semantically related duplicates, thereby allowing other candidate interpretations to attain higher rank. We evaluate using a set of annotated metaphors.

* Presented at 19th International Conference on Computational Linguistics and Intelligent Text Processing (CICLing), 2018 
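The ranking step described above — scoring candidate interpretations by their similarity to both the topic and the vehicle in a semantic vector space — can be sketched as follows. The tiny hand-written embedding table and candidate list are stand-ins for vectors trained over a large corpus and candidates extracted from collocations.

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Toy embedding table; real systems use corpus-trained word vectors.
emb = {
    "time": np.array([1.0, 0.2, 0.0]),
    "money": np.array([0.1, 1.0, 0.3]),
    "valuable": np.array([0.4, 0.9, 0.2]),
    "scarce": np.array([0.5, 0.7, 0.1]),
    "green": np.array([0.0, 0.1, 1.0]),
}

topic, vehicle = "time", "money"          # "time is money"
candidates = ["valuable", "scarce", "green"]  # from collocations, in practice

# Score each candidate by similarity to both metaphor components.
scores = {
    c: cosine(emb[c], emb[topic]) + cosine(emb[c], emb[vehicle])
    for c in candidates
}
ranked = sorted(candidates, key=scores.get, reverse=True)
print(ranked)
```

The paper's full pipeline additionally draws candidates from word association norms and de-duplicates semantically related interpretations by clustering; the sketch covers only the similarity-based ranking.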


An Empirical Study on Neural Keyphrase Generation

Sep 22, 2020
Rui Meng, Xingdi Yuan, Tong Wang, Sanqiang Zhao, Adam Trischler, Daqing He

Recent years have seen a flourishing of neural keyphrase generation works, including the release of several large-scale datasets and a host of new models to tackle them. Model performance on keyphrase generation tasks has increased significantly with evolving deep learning research. However, the field lacks a comprehensive comparison among models and an investigation of related factors (e.g., architectural choice, decoding strategy) that may affect a keyphrase generation system's performance. In this empirical study, we aim to fill this gap by providing extensive experimental results and analyzing the most crucial factors impacting the performance of keyphrase generation models. We hope this study can help clarify some of the uncertainties surrounding the keyphrase generation task and facilitate future research on this topic.



Experiments in Extractive Summarization: Integer Linear Programming, Term/Sentence Scoring, and Title-driven Models

Aug 01, 2020
Daniel Lee, Rakesh Verma, Avisha Das, Arjun Mukherjee

In this paper, we revisit the challenging problem of unsupervised single-document summarization and study the following aspects: Integer linear programming (ILP) based algorithms, parameterized normalization of term and sentence scores, and title-driven approaches for summarization. We describe a new framework, NewsSumm, that includes many existing and new approaches for summarization, including ILP and title-driven approaches. NewsSumm's flexibility allows different algorithms and sentence scoring schemes to be combined seamlessly. Our results combining sentence scoring with ILP and normalization are in contrast to previous work on this topic, showing the importance of a broader search for optimal parameters. We also show that the new title-driven reduction idea leads to improvement in performance for both the unsupervised and supervised approaches considered.
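ILP-based extractive summarization is typically posed as concept coverage: select sentences that maximize the total weight of covered concepts subject to a length budget. The sketch below states that objective and, because the instance is tiny, solves it exactly by enumerating subsets rather than calling an ILP solver; the sentences, concepts, weights, and budget are invented for illustration.

```python
from itertools import combinations

# Sentences with a word-length cost and the "concepts" they cover.
sentences = {
    "s1": {"len": 8, "concepts": {"ilp", "summary"}},
    "s2": {"len": 6, "concepts": {"title", "summary"}},
    "s3": {"len": 7, "concepts": {"ilp", "scoring"}},
    "s4": {"len": 5, "concepts": {"scoring"}},
}
weight = {"ilp": 3.0, "summary": 2.0, "title": 1.0, "scoring": 2.0}
budget = 14  # maximum summary length in words

# Maximize total weight of covered concepts under the length budget.
# A real system hands this to an ILP solver; here we enumerate.
best, best_val = (), -1.0
names = list(sentences)
for r in range(len(names) + 1):
    for subset in combinations(names, r):
        if sum(sentences[s]["len"] for s in subset) > budget:
            continue
        covered = set()
        for s in subset:
            covered |= sentences[s]["concepts"]
        val = sum(weight[c] for c in covered)
        if val > best_val:
            best, best_val = subset, val
print(best, best_val)
```

Selecting {s2, s3} covers all four concepts within budget, which is why coverage objectives favor complementary sentences over individually high-scoring but redundant ones.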



Staying True to Your Word: (How) Can Attention Become Explanation?

May 19, 2020
Martin Tutek, Jan Šnajder

The attention mechanism has quickly become ubiquitous in NLP. In addition to improving the performance of models, attention has been widely used as a glimpse into the inner workings of NLP models. The latter aspect has in recent years become a common topic of discussion, most notably in the work of Jain and Wallace, 2019, and Wiegreffe and Pinter, 2019. With the shortcomings of using attention weights as a tool of transparency revealed, the attention mechanism has been stuck in a limbo without concrete proof of when and whether it can be used as an explanation. In this paper, we provide an explanation as to why attention has seen rightful critique when used with recurrent networks in sequence classification tasks. We propose a remedy to these issues in the form of a word-level objective, and our findings lend credibility to attention as a source of faithful interpretations of recurrent models.
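The debated setup is additive/dot-product attention over a recurrent encoder's hidden states: a learned query scores each token, the scores are softmax-normalized, and the resulting weights are read as per-token "importance". A minimal NumPy sketch of that mechanism (random values stand in for trained parameters):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
T, H = 5, 8                       # sequence length, hidden size

hidden = rng.normal(size=(T, H))  # recurrent hidden states, one per token
query = rng.normal(size=H)        # learned attention query vector

alpha = softmax(hidden @ query)   # one weight per token, sums to 1
context = alpha @ hidden          # weighted summary fed to the classifier
print(alpha.round(3))
```

The critique is that reading `alpha` as token importance is unreliable precisely because each recurrent hidden state already mixes information from many input tokens — which is the issue the paper's word-level objective targets.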



From Standard Summarization to New Tasks and Beyond: Summarization with Manifold Information

May 10, 2020
Shen Gao, Xiuying Chen, Zhaochun Ren, Dongyan Zhao, Rui Yan

Text summarization is the research area aiming at creating a short and condensed version of the original document, which conveys the main idea of the document in a few words. This research topic has started to attract the attention of a large community of researchers, and it is now counted among the most promising research areas. In general, text summarization algorithms aim at using a plain text document as input and then outputting a summary. However, in real-world applications, most of the data is not in a plain text format. Instead, there is much manifold information to be summarized, such as the summary for a web page based on a query in a search engine, extremely long documents (e.g., academic papers), dialog history, and so on. In this paper, we focus on surveying these new summarization tasks and approaches in real-world applications.

* Accepted by IJCAI 2020 Survey Track 


NODIS: Neural Ordinary Differential Scene Understanding

Jan 14, 2020
Cong Yuren, Hanno Ackermann, Wentong Liao, Michael Ying Yang, Bodo Rosenhahn

Semantic image understanding is a challenging topic in computer vision. It requires not only detecting all objects in an image but also identifying all the relations between them. Detected objects, their labels and the discovered relations can be used to construct a scene graph which provides an abstract semantic interpretation of an image. In previous works, relations were identified by solving an assignment problem formulated as a Mixed-Integer Linear Program. In this work, we interpret that formulation as an Ordinary Differential Equation (ODE). The proposed architecture performs scene graph inference by solving a neural variant of an ODE by end-to-end learning. It achieves state-of-the-art results on all three benchmark tasks: scene graph generation (SGGen), classification (SGCls) and visual relationship detection (PredCls) on the Visual Genome benchmark.
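The core idea of a neural ODE is to evolve a feature vector under a learned vector field, dh/dt = f(h), and feed the final state to a classifier. The sketch below integrates a tiny random "network" with fixed-step Euler; in NODIS the field is learned end-to-end and solved with a proper ODE integrator, so everything here (weights, step count, feature size) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
H = 4  # feature size of one object-pair representation

# A tiny "neural" vector field f(h) = tanh(W h); learned in practice.
W = rng.normal(scale=0.5, size=(H, H))
f = lambda h: np.tanh(W @ h)

def odeint_euler(f, h0, t0=0.0, t1=1.0, steps=20):
    """Integrate dh/dt = f(h) from t0 to t1 with fixed-step Euler."""
    h, dt = h0.copy(), (t1 - t0) / steps
    for _ in range(steps):
        h = h + dt * f(h)
    return h

h0 = rng.normal(size=H)   # initial pair features (e.g. from a CNN)
h1 = odeint_euler(f, h0)  # evolved state, then fed to a relation classifier
print(h1.shape)
```

Because the dynamics are differentiable, gradients flow through the integration steps, which is what allows the relation-assignment behavior to be trained end-to-end.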


