"Topic": models, code, and papers

Automatic Extraction of Urban Outdoor Perception from Geolocated Free-Texts

Oct 13, 2020
Frances Santos, Thiago H Silva, Antonio A F Loureiro, Leandro Villas

The automatic extraction of urban perception shared by people on location-based social networks (LBSNs) is an important multidisciplinary research goal. One reason is that it facilitates the understanding of the intrinsic characteristics of urban areas in a scalable way, helping to leverage new services. However, content shared on LBSNs is diverse, encompassing several topics, such as politics, sports, culture, religion, and urban perceptions, making the task of extracting content regarding a particular topic very challenging. Considering free-text messages shared on LBSNs, we propose an automatic and generic approach to extract people's perceptions. To that end, our approach explores opinions that are spatially, temporally, and semantically similar. We exemplify our approach in the context of urban outdoor areas in Chicago, New York City, and London. Studying those areas, we found evidence that LBSN data brings valuable information about urban regions. To analyze and validate our outcomes, we conducted a temporal analysis to measure the results' robustness over time. We show that our approach can be helpful for better understanding urban areas from different perspectives. We also conducted a comparative analysis based on a public dataset containing volunteers' perceptions of urban areas expressed in a controlled experiment. We observe that both results yield a very similar level of agreement.
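
The spatio-temporal grouping step described above can be sketched as follows; the cell size, time window, and message format are illustrative assumptions, not the paper's actual parameters:

```python
from collections import defaultdict
from datetime import datetime

def bucket_key(lat, lon, ts, cell_deg=0.01, window_hours=6):
    """Map a geolocated, timestamped message to a spatio-temporal bucket."""
    t = datetime.fromisoformat(ts)
    hour_bucket = (t.timetuple().tm_yday * 24 + t.hour) // window_hours
    return (round(lat / cell_deg), round(lon / cell_deg), hour_bucket)

def group_messages(messages, **kw):
    """Group free-text messages whose coordinates and timestamps fall in the
    same spatio-temporal cell; semantic similarity would then be checked
    within each group in a later stage."""
    groups = defaultdict(list)
    for msg in messages:
        groups[bucket_key(msg["lat"], msg["lon"], msg["ts"], **kw)].append(msg["text"])
    return dict(groups)
```

Messages landing in the same bucket are candidates for expressing the same local perception; the semantic-similarity check would operate within each bucket.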

* Paper accepted - to be published 

DFEW: A Large-Scale Database for Recognizing Dynamic Facial Expressions in the Wild

Aug 13, 2020
Xingxun Jiang, Yuan Zong, Wenming Zheng, Chuangao Tang, Wanchuang Xia, Cheng Lu, Jiateng Liu

Recently, facial expression recognition (FER) in the wild has attracted considerable attention from researchers because it is key to moving FER techniques from the laboratory to real-world applications. In this paper, we focus on this challenging but interesting topic and make contributions from three aspects. First, we present a new large-scale 'in-the-wild' dynamic facial expression database, DFEW (Dynamic Facial Expression in the Wild), consisting of over 16,000 video clips from thousands of movies. These video clips contain various challenging interferences in practical scenarios, such as extreme illumination, occlusions, and capricious pose changes. Second, we propose a novel method, the Expression-Clustered Spatiotemporal Feature Learning (EC-STFL) framework, to deal with dynamic FER in the wild. Third, we conduct extensive benchmark experiments on DFEW using many spatiotemporal deep feature learning methods as well as our proposed EC-STFL. Experimental results show that DFEW is a well-designed and challenging database, and that EC-STFL can promisingly improve the performance of existing spatiotemporal deep neural networks in coping with dynamic FER in the wild. Our DFEW database is publicly available and can be freely downloaded from

Embeddings-Based Clustering for Target Specific Stances: The Case of a Polarized Turkey

May 19, 2020
Ammar Rashed, Mucahid Kutlu, Kareem Darwish, Tamer Elsayed, Cansın Bayrak

On June 24, 2018, Turkey conducted a highly consequential election in which the Turkish people elected their president and parliament in the first election under a new presidential system. During the election period, the Turkish people extensively shared their political opinions on Twitter. One aspect of polarization among the electorate was support for or opposition to the reelection of Recep Tayyip Erdoğan. In this paper, we present an unsupervised method for target-specific stance detection in a polarized setting, specifically Turkish politics, achieving 90% precision in identifying user stances, while maintaining more than 80% recall. The method involves representing users in an embedding space using Google's Convolutional Neural Network (CNN) based multilingual universal sentence encoder. The representations are then projected onto a lower dimensional space in a manner that reflects similarities and are consequently clustered. We show the effectiveness of our method in properly clustering users of divergent groups across multiple targets that include political figures, different groups, and parties. We perform our analysis on a large dataset of 108M Turkish election-related tweets along with the timeline tweets of 168k Turkish users, who authored 213M tweets. Given the resultant user stances, we are able to observe correlations between topics and compute topic polarization.
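
The final clustering stage of this pipeline can be sketched with a minimal k-means over toy 2-D "embeddings"; the real system uses Google's multilingual universal sentence encoder followed by dimensionality reduction, both omitted here:

```python
import random

def kmeans(points, k=2, iters=20, seed=0):
    """Minimal k-means for clustering user embedding vectors (tuples)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest centroid (squared Euclidean distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        # Recompute centroids; keep the old one if a cluster went empty.
        centroids = [
            tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters
```

With well-separated stance groups, the two clusters recover the divergent user populations.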

* arXiv admin note: text overlap with arXiv:1909.10213 

A Unified Multi-task Learning Framework for Multi-goal Conversational Recommender Systems

Apr 14, 2022
Yang Deng, Wenxuan Zhang, Weiwen Xu, Wenqiang Lei, Tat-Seng Chua, Wai Lam

Recent years have witnessed several advances in developing multi-goal conversational recommender systems (MG-CRS) that can proactively attract users' interests and naturally lead user-engaged dialogues with multiple conversational goals and diverse topics. Four tasks are often involved in MG-CRS: Goal Planning, Topic Prediction, Item Recommendation, and Response Generation. Most existing studies address only some of these tasks. To handle the whole problem of MG-CRS, modularized frameworks are adopted in which each task is tackled independently, without considering their interdependencies. In this work, we propose a novel Unified MultI-goal conversational recommeNDer system, namely UniMIND. Specifically, we unify these four tasks with different formulations into the same sequence-to-sequence (Seq2Seq) paradigm. Prompt-based learning strategies are investigated to endow the unified model with the capability of multi-task learning. Finally, the overall learning and inference procedure consists of three stages: multi-task learning, prompt-based tuning, and inference. Experimental results on two MG-CRS benchmarks (DuRecDial and TG-ReDial) show that UniMIND achieves state-of-the-art performance on all tasks with a unified model. Extensive analyses and discussions are provided to shed new light on MG-CRS.
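
The core idea of casting all four tasks into one text-to-text format can be sketched as below; the prompt templates and separator tokens are hypothetical, not the actual ones used by UniMIND:

```python
# Hypothetical prompt templates for the four MG-CRS tasks.
PROMPTS = {
    "goal":  "Predict the next goal: ",
    "topic": "Predict the next topic: ",
    "item":  "Recommend an item: ",
    "resp":  "Generate a response: ",
}

def to_seq2seq_input(task, context, knowledge=""):
    """Cast any of the four MG-CRS tasks as plain text-to-text input,
    so a single Seq2Seq model can handle all of them."""
    if task not in PROMPTS:
        raise ValueError(f"unknown task: {task}")
    parts = [PROMPTS[task], "[context] ", " | ".join(context)]
    if knowledge:
        parts += [" [knowledge] ", knowledge]
    return "".join(parts)
```

Since every task produces and consumes plain text, one set of Seq2Seq weights serves all four, and the prompt prefix tells the model which task to perform.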

Heterogeneous Ensemble for ESG Ratings Prediction

Sep 21, 2021
Tim Krappel, Alex Bogun, Damian Borth

Over the past years, topics ranging from climate change to human rights have gained increasing importance for investment decisions. Hence, investors (asset managers and asset owners) who want to incorporate these issues have started to assess companies based on how they handle such topics. For this assessment, investors rely on specialized rating agencies that issue ratings along the environmental, social, and governance (ESG) dimensions. Such ratings allow them to make investment decisions in favor of sustainability. However, rating agencies base their analysis on subjective assessments of sustainability reports, which not every company provides. Furthermore, due to the human labor involved, rating agencies currently face the challenge of scaling up coverage in a timely manner. To alleviate these challenges and contribute to the overall goal of supporting sustainability, we propose a heterogeneous ensemble model to predict ESG ratings using fundamental data. This model is based on feedforward neural network, CatBoost, and XGBoost ensemble members. Given the public availability of fundamental data, the proposed method would allow cost-efficient and scalable creation of initial ESG ratings (also for companies without sustainability reporting). Using our approach, we are able to explain 54% of the variation in ratings (R²) using fundamental data and outperform prior work in this area.
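
A minimal sketch of combining the heterogeneous members' predictions, assuming a simple weighted average (the paper's exact aggregation rule may differ):

```python
def ensemble_predict(member_preds, weights=None):
    """Combine regression predictions from heterogeneous ensemble members
    (e.g. a feedforward net, CatBoost, XGBoost) by weighted averaging.

    member_preds: list of per-member prediction lists, all the same length.
    weights: optional per-member weights summing to 1; defaults to uniform.
    """
    n = len(member_preds)
    weights = weights or [1.0 / n] * n
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return [sum(w * p[i] for w, p in zip(weights, member_preds))
            for i in range(len(member_preds[0]))]
```

Averaging lets each member compensate for the others' errors, which is the usual motivation for mixing gradient-boosted trees with a neural network on tabular fundamental data.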

* Accepted to KDD Workshop on Machine Learning in Finance 2021 

Cross-sentence Neural Language Models for Conversational Speech Recognition

Jul 08, 2021
Shih-Hsuan Chiu, Tien-Hong Lo, Berlin Chen

An important research direction in automatic speech recognition (ASR) has centered around the development of effective methods to rerank the output hypotheses of an ASR system with more sophisticated language models (LMs) for further gains. A current mainstream school of thought for ASR N-best hypothesis reranking is to employ a recurrent neural network (RNN)-based LM or its variants, which outperform conventional n-gram LMs across a range of ASR tasks. In real scenarios such as a long conversation, a sequence of consecutive sentences may jointly contain ample cues of conversation-level information, such as topical coherence, lexical entrainment, and adjacency pairs, which, however, remain underexplored. In view of this, we first formulate ASR N-best reranking as a prediction problem, putting forward an effective cross-sentence neural LM approach that reranks the ASR N-best hypotheses of an upcoming sentence by taking into consideration the word usage in its preceding sentences. Furthermore, we also explore extracting task-specific global topical information from the cross-sentence history in an unsupervised manner for better ASR performance. Extensive experiments conducted on the AMI conversational benchmark corpus indicate the effectiveness and feasibility of our methods in comparison to several state-of-the-art reranking methods.
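
The reranking formulation can be sketched as follows, where `lm_score` stands in for a cross-sentence LM conditioned on the preceding sentences; the function name, score scale, and interpolation weight are assumptions for illustration:

```python
def rerank_nbest(nbest, lm_score, history, lm_weight=0.5):
    """Rerank ASR N-best hypotheses for the current sentence.

    nbest: list of (hypothesis_text, asr_score) pairs, higher is better.
    lm_score: callable (history, hypothesis) -> float; a hypothetical
        cross-sentence LM that conditions on the preceding sentences.
    history: list of previously recognized sentences.
    """
    scored = [(asr + lm_weight * lm_score(history, hyp), hyp)
              for hyp, asr in nbest]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [hyp for _, hyp in scored]
```

A hypothesis that is consistent with the conversation so far (topical coherence, repeated lexical choices) gets a boost from the cross-sentence term and can overtake a hypothesis with a slightly higher acoustic score.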

* More extensions and experiments are under exploration 

Training and Inference for Integer-Based Semantic Segmentation Network

Nov 30, 2020
Jiayi Yang, Lei Deng, Yukuan Yang, Yuan Xie, Guoqi Li

Semantic segmentation has been a major topic in research and industry in recent years. However, due to the computational complexity of pixel-wise prediction and the backpropagation algorithm, semantic segmentation demands substantial computational resources, resulting in slow training and inference and large storage requirements for models. Existing schemes that speed up segmentation networks change the network structure and come with noticeable accuracy degradation. In contrast, neural network quantization can reduce the computation load while maintaining comparable accuracy and the original network structure. Semantic segmentation networks differ from traditional deep convolutional neural networks (DCNNs) in many ways, and this topic has not been thoroughly explored in existing works. In this paper, we propose a new quantization framework for training and inference of segmentation networks, in which parameters and operations are constrained to 8-bit integer-based values for the first time. Full quantization of the data flow and the removal of square and square-root operations in batch normalization give our framework the ability to perform inference on fixed-point devices. Our proposed framework is evaluated on mainstream semantic segmentation networks such as FCN-VGG16 and DeepLabv3-ResNet50, achieving accuracy comparable to the floating-point framework on the ADE20K and PASCAL VOC 2012 datasets.
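
A minimal sketch of the kind of 8-bit symmetric quantization involved; the paper's actual calibration and rounding scheme may differ:

```python
def quantize_int8(values):
    """Symmetric per-tensor quantization to 8-bit integers: scale by the
    maximum absolute value so the range maps onto [-127, 127]."""
    scale = max(abs(v) for v in values) / 127.0 or 1.0  # avoid div-by-zero
    q = [max(-128, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate real values from int8 codes and the scale."""
    return [x * scale for x in q]
```

After quantization, convolutions and activations operate on integer values, which is what enables inference on fixed-point devices at the cost of a small, bounded rounding error.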

* 17 pages, 12 figures, submitted to Neurocomputing 

VLUC: An Empirical Benchmark for Video-Like Urban Computing on Citywide Crowd and Traffic Prediction

Nov 16, 2019
Renhe Jiang, Zekun Cai, Zhaonan Wang, Chuang Yang, Zipei Fan, Xuan Song, Kota Tsubouchi, Ryosuke Shibasaki

Nowadays, massive urban human mobility data are being generated from mobile phones, car navigation systems, and traffic sensors. Predicting the density and flow of crowds or traffic at a citywide level becomes possible by using such big data and cutting-edge AI technologies. This has been a very significant research topic with high social impact, widely applicable to emergency management, traffic regulation, and urban planning. In particular, by meshing a large urban area into a number of fine-grained mesh-grids, citywide crowd and traffic information over a continuous time period can be represented like a video, where each timestamp corresponds to one video frame. Based on this idea, a series of methods have been proposed to address video-like prediction for citywide crowds and traffic. In this study, we publish a new aggregated human mobility dataset generated from a real-world smartphone application and build a standard benchmark for this kind of video-like urban computing using the new dataset and existing open datasets. We first comprehensively review the state-of-the-art literature and formulate the density and in-out flow prediction problems, then conduct a thorough performance assessment of those methods. With this benchmark, we hope researchers can easily follow up and quickly launch new solutions on this topic.
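
The meshing idea can be sketched as follows: bucket geolocated points into grid cells to form one density "frame" per timestamp (bounds, grid size, and point format are illustrative):

```python
def density_frame(points, bounds, grid=(4, 4)):
    """Mesh a bounding box into grid cells and count points per cell,
    producing one 'video frame' of citywide crowd density.

    points: iterable of (lat, lon) observations at one timestamp.
    bounds: (lat_min, lon_min, lat_max, lon_max) of the urban area.
    grid:   (rows, cols) of the mesh.
    """
    (lat0, lon0, lat1, lon1), (rows, cols) = bounds, grid
    frame = [[0] * cols for _ in range(rows)]
    for lat, lon in points:
        r = min(int((lat - lat0) / (lat1 - lat0) * rows), rows - 1)
        c = min(int((lon - lon0) / (lon1 - lon0) * cols), cols - 1)
        frame[r][c] += 1
    return frame
```

Stacking one such frame per time step yields the video-like tensor that crowd and traffic prediction models consume.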

Multi-task Learning For Detecting and Segmenting Manipulated Facial Images and Videos

Jun 17, 2019
Huy H. Nguyen, Fuming Fang, Junichi Yamagishi, Isao Echizen

Detecting manipulated images and videos is an important topic in digital media forensics. Most detection methods use binary classification to determine the probability that a query has been manipulated. Another important topic is locating manipulated regions (i.e., performing segmentation), which are mostly created by three commonly used attacks: removal, copy-move, and splicing. We have designed a convolutional neural network that uses a multi-task learning approach to simultaneously detect manipulated images and videos and locate the manipulated regions for each query. Information gained by performing one task is shared with the other, thereby enhancing the performance of both tasks. A semi-supervised learning approach is used to improve the network's generalizability. The network includes an encoder and a Y-shaped decoder. Activation of the encoded features is used for the binary classification. The output of one branch of the decoder is used for segmenting the manipulated regions, while that of the other branch is used for reconstructing the input, which helps improve overall performance. Experiments using the FaceForensics and FaceForensics++ databases demonstrated the network's effectiveness against facial reenactment and face swapping attacks, as well as its ability to deal with the mismatch condition for previously seen attacks. Moreover, fine-tuning using just a small amount of data enables the network to deal with unseen attacks.
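
The encoder/Y-shaped-decoder data flow can be sketched abstractly; all callables here are hypothetical stand-ins for the actual network modules:

```python
def forward(x, encoder, seg_branch, recon_branch, classifier):
    """Shared encoder feeding a Y-shaped decoder: one branch segments
    the manipulated regions, the other reconstructs the input, while the
    encoded features also drive the binary real/fake classifier."""
    features = encoder(x)            # shared representation
    return {
        "is_fake": classifier(features),        # detection task
        "mask": seg_branch(features),           # localization task
        "reconstruction": recon_branch(features),  # auxiliary task
    }
```

Because all three heads read the same `features`, gradients from each task shape the shared encoder, which is the mechanism by which one task's information benefits the others.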

* Accepted to be Published in Proceedings of the IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS) 2019, Florida, USA 
