
"Recommendation": models, code, and papers

Birds Eye View Social Distancing Analysis System

Dec 14, 2021
Zhengye Yang, Mingfei Sun, Hongzhe Ye, Zihao Xiong, Gil Zussman, Zoran Kostic

Social distancing can reduce infection rates in respiratory pandemics such as COVID-19. Traffic intersections are particularly suitable for monitoring and evaluating social distancing behavior in metropolises. We propose and evaluate a privacy-preserving social distancing analysis system (B-SDA), which uses bird's-eye view video recordings of pedestrians who cross traffic intersections. We devise algorithms for video pre-processing, object detection, and tracking that are rooted in known computer-vision and deep-learning techniques, but modified to address the problem of detecting very small objects/pedestrians captured by a highly elevated camera. We propose a method that incorporates pedestrian grouping into the detection of social distancing violations. B-SDA is used to compare pedestrian behavior based on pre-pandemic and pandemic videos in a major metropolitan area. The achieved pedestrian detection performance is $63.0\%$ $AP_{50}$ and the tracking performance is $47.6\%$ MOTA. The social distancing violation rate of $15.6\%$ during the pandemic is notably lower than the $31.4\%$ pre-pandemic baseline, indicating that pedestrians followed CDC-prescribed social distancing recommendations. The proposed system is suitable for deployment in real-world applications.
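
As a rough sketch of the violation check described in the abstract (not the authors' implementation), the snippet below flags pedestrian pairs closer than a distance threshold while skipping pairs assigned to the same detected group; the positions, group labels, and 2 m threshold are purely illustrative.

```python
import itertools

def count_violations(positions_m, groups, threshold_m=2.0):
    """Count pairwise social-distancing violations from bird's-eye-view
    pedestrian positions (in meters), ignoring pairs within the same group.

    positions_m: dict pedestrian_id -> (x, y) ground-plane position in meters
    groups:      dict pedestrian_id -> group_id (pedestrians walking together)
    """
    violations = 0
    for a, b in itertools.combinations(positions_m, 2):
        if groups.get(a) is not None and groups.get(a) == groups.get(b):
            continue  # members of the same social group do not count as violations
        (xa, ya), (xb, yb) = positions_m[a], positions_m[b]
        if ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5 < threshold_m:
            violations += 1
    return violations

# Hypothetical frame: three pedestrians, two of them walking together.
positions = {1: (0.0, 0.0), 2: (1.0, 0.5), 3: (10.0, 4.0)}
groups = {1: "g0", 2: "g0", 3: None}
print(count_violations(positions, groups))  # -> 0 (the close pair is one group)
```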


  Access Paper or Ask Questions

Learning Speaker Representation with Semi-supervised Learning approach for Speaker Profiling

Oct 24, 2021
Shangeth Rajaa, Pham Van Tung, Chng Eng Siong

Speaker profiling, which aims to estimate speaker characteristics such as age and height, has a wide range of applications in forensics, recommendation systems, etc. In this work, we propose a semi-supervised learning approach to mitigate the issue of low training data for speaker profiling. This is done by utilizing an external corpus with speaker information to train a better representation that helps to improve speaker profiling systems. Specifically, besides the standard supervised learning path, the proposed framework has two more paths: (1) an unsupervised speaker representation learning path that helps to capture the speaker information; (2) a consistency training path that helps to improve the robustness of the system by enforcing it to produce similar predictions for utterances of the same speaker. The proposed approach is evaluated on the TIMIT and NISP datasets for age, height, and gender estimation, while Librispeech is used as the unsupervised external corpus. Trained in both single-task and multi-task settings, our approach achieves state-of-the-art results on age estimation on the TIMIT test set, with a Root Mean Square Error (RMSE) of 6.8 and 7.4 years and a Mean Absolute Error (MAE) of 4.8 and 5.0 years for male and female speakers, respectively.
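
To make the consistency-training path concrete, here is a minimal sketch (assumed shapes and a made-up loss weight, not the paper's code): two utterances from the same unlabeled speaker are pushed to produce similar profile predictions, and that term is added to the supervised loss.

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, utt_a, utt_b):
    """Consistency-training term: two utterances of the same (unlabeled)
    speaker should yield similar age/height predictions."""
    pred_a = model(utt_a)   # e.g., predicted [age, height] per utterance
    pred_b = model(utt_b)
    return F.mse_loss(pred_a, pred_b)

# Hypothetical toy model and features, just to show the shapes involved.
model = torch.nn.Sequential(torch.nn.Linear(40, 64), torch.nn.ReLU(),
                            torch.nn.Linear(64, 2))
utt_a = torch.randn(8, 40)   # utterance-level features, speaker-matched rows
utt_b = torch.randn(8, 40)   # different utterances from the same speakers
supervised = torch.tensor(0.0)                    # placeholder supervised loss
loss = supervised + 0.1 * consistency_loss(model, utt_a, utt_b)
loss.backward()
```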

* 5 pages, 4 figures 

  Access Paper or Ask Questions

BERT based sentiment analysis: A software engineering perspective

Jun 28, 2021
Himanshu Batra, Narinder Singh Punn, Sanjay Kumar Sonbhadra, Sonali Agarwal

Sentiment analysis can provide a suitable lead for the tools used in software engineering, along with API recommendation systems and relevant libraries to be used. In this context, existing tools such as SentiCR, SentiStrength-SE, etc. exhibit low F1-scores that defeat the purpose of deploying such strategies, so there is considerable scope for performance improvement. Recent advancements show that transformer-based pre-trained models (e.g., BERT, RoBERTa, ALBERT, etc.) achieve better results in text classification tasks. Following this context, the present research explores different BERT-based models to analyze the sentences in GitHub comments, Jira comments, and Stack Overflow posts. The paper presents three different strategies to analyze BERT-based models for sentiment analysis: in the first strategy, the BERT-based pre-trained models are fine-tuned; in the second strategy, an ensemble model is developed from BERT variants; and in the third strategy, a compressed model (DistilBERT) is used. The experimental results show that the BERT-based ensemble approach and the compressed BERT model attain improvements of 6-12% over prevailing tools in F1 measure on all three datasets.
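
The ensemble strategy can be illustrated with a small sketch (illustrative only, with made-up logits): class probabilities from several fine-tuned BERT variants are averaged and the highest-probability sentiment class is returned per sentence.

```python
import numpy as np

def softmax(logits):
    z = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return z / z.sum(axis=-1, keepdims=True)

def ensemble_predict(per_model_logits):
    """Average class probabilities from several fine-tuned BERT variants
    and return the ensemble's predicted sentiment class per sentence."""
    probs = np.mean([softmax(l) for l in per_model_logits], axis=0)
    return probs.argmax(axis=-1)

# Hypothetical logits for 2 sentences x 3 sentiment classes from three
# fine-tuned variants (e.g., BERT, RoBERTa, ALBERT).
logits = [np.array([[2.0, 0.1, -1.0], [-0.5, 0.2, 1.5]]),
          np.array([[1.5, 0.3, -0.8], [-0.2, 0.1, 1.1]]),
          np.array([[1.8, -0.2, -0.5], [0.0, 0.4, 0.9]])]
print(ensemble_predict(logits))  # -> [0 2]
```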


  Access Paper or Ask Questions

Learning to Rehearse in Long Sequence Memorization

Jun 02, 2021
Zhu Zhang, Chang Zhou, Jianxin Ma, Zhijie Lin, Jingren Zhou, Hongxia Yang, Zhou Zhao

Existing reasoning tasks often assume that the input contents can always be accessed while reasoning, which requires unlimited storage resources and suffers from severe time delays on long sequences. To achieve efficient reasoning on long sequences with limited storage resources, memory-augmented neural networks introduce a human-like write-read memory to compress and memorize the long input sequence in one pass, trying to answer subsequent queries based only on the memory. But they have two serious drawbacks: 1) they continually update the memory with current information and inevitably forget the early contents; 2) they do not distinguish what information is important and treat all contents equally. In this paper, we propose the Rehearsal Memory (RM) to enhance long-sequence memorization by self-supervised rehearsal with a history sampler. To alleviate the gradual forgetting of early information, we design self-supervised rehearsal training with recollection and familiarity tasks. Further, we design a history sampler to select informative fragments for rehearsal training, making the memory focus on the crucial information. We evaluate the performance of our rehearsal memory on the synthetic bAbI task and several downstream tasks, including text/video question answering and recommendation on long sequences.
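
The following is a heavily simplified sketch of the rehearsal idea, not the paper's architecture: a history sampler keeps fragments of the already-seen input (here sampled uniformly rather than by a learned informativeness score), and a recollection-style loss asks the compressed memory to reconstruct those fragments so early content is not forgotten. All module shapes are assumptions.

```python
import random
import torch
import torch.nn.functional as F

class HistorySampler:
    """Keeps fragments of already-seen input and samples some for rehearsal;
    uniform sampling here stands in for the learned informativeness score."""
    def __init__(self):
        self.fragments = []
    def add(self, fragment):
        self.fragments.append(fragment)
    def sample(self, k=2):
        return random.sample(self.fragments, min(k, len(self.fragments)))

def recollection_loss(memory, decoder, fragments):
    """Recollection-style rehearsal: reconstruct sampled past fragments
    from the compressed memory so early content is retained."""
    targets = torch.stack(fragments)
    recon = decoder(memory).expand_as(targets)
    return F.mse_loss(recon, targets)

# Hypothetical shapes: a single compressed memory vector, and fragment
# embeddings stored as the long sequence streams by.
memory = torch.randn(1, 64, requires_grad=True)
decoder = torch.nn.Linear(64, 32)
sampler = HistorySampler()
for _ in range(5):
    sampler.add(torch.randn(32))
loss = recollection_loss(memory, decoder, sampler.sample())
loss.backward()
```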

* Accepted by ICML 2021 

  Access Paper or Ask Questions

GIPA: General Information Propagation Algorithm for Graph Learning

May 13, 2021
Qinkai Zheng, Houyi Li, Peng Zhang, Zhixiong Yang, Guowei Zhang, Xintan Zeng, Yongchao Liu

Graph neural networks (GNNs) have been widely used to analyze graph-structured data, showing promising results in various applications such as node classification, link prediction, and network recommendation. In this paper, we present a new graph attention neural network, namely GIPA, for attributed graph data learning. GIPA consists of three key components: attention, feature propagation, and aggregation. Specifically, the attention component introduces a new multi-layer-perceptron-based multi-head attention mechanism to generate better non-linear feature mappings and representations than conventional implementations such as dot-product attention. The propagation component considers not only node features but also edge features, which differs from existing GNNs that merely consider node features. The aggregation component uses a residual connection to generate the final embedding. We evaluate the performance of GIPA on the Open Graph Benchmark proteins dataset (ogbn-proteins for short). The experimental results show that GIPA beats state-of-the-art models in prediction accuracy; e.g., GIPA achieves an average ROC-AUC of $0.8700\pm 0.0010$ and outperforms all previous methods listed on the ogbn-proteins leaderboard.
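
A minimal single-head sketch of the idea of MLP-based, edge-aware attention follows (assumed dimensions and a sigmoid gate instead of a per-destination softmax; this is an illustration, not GIPA itself): the attention score is an MLP over source node, target node, and edge features, messages carry edge features, and the output uses a residual connection.

```python
import torch
import torch.nn as nn

class MLPEdgeAttention(nn.Module):
    """Sketch of an MLP-based attention score over (source, target, edge)
    features, replacing a dot-product score; single head for brevity."""
    def __init__(self, node_dim, edge_dim, hidden=32):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * node_dim + edge_dim, hidden),
                                 nn.ReLU(), nn.Linear(hidden, 1))
        self.msg = nn.Linear(node_dim + edge_dim, node_dim)

    def forward(self, h, edge_index, e):
        src, dst = edge_index                      # source / target node ids, (E,)
        score = self.mlp(torch.cat([h[src], h[dst], e], dim=-1))  # (E, 1)
        alpha = torch.sigmoid(score)               # per-edge gate (softmax also possible)
        msg = alpha * self.msg(torch.cat([h[src], e], dim=-1))    # edge-aware messages
        out = torch.zeros_like(h).index_add_(0, dst, msg)         # aggregate per target
        return h + out                             # residual connection

# Toy graph: 3 nodes with 8-dim features, 2 directed edges with 4-dim features.
h = torch.randn(3, 8)
edge_index = torch.tensor([[0, 1], [2, 2]])        # edges 0->2 and 1->2
e = torch.randn(2, 4)
print(MLPEdgeAttention(8, 4)(h, edge_index, e).shape)  # torch.Size([3, 8])
```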

* 4 pages, 1 figure, technical report 

  Access Paper or Ask Questions

Bipartite Graph Embedding via Mutual Information Maximization

Dec 10, 2020
Jiangxia Cao, Xixun Lin, Shu Guo, Luchen Liu, Tingwen Liu, Bin Wang

Bipartite graph embedding has recently attracted much attention because bipartite graphs are widely used in various application domains. Most previous methods, which adopt random-walk-based or reconstruction-based objectives, are typically effective at learning local graph structures. However, the global properties of the bipartite graph, including community structures of homogeneous nodes and long-range dependencies of heterogeneous nodes, are not well preserved. In this paper, we propose a bipartite graph embedding model called BiGI to capture such global properties by introducing a novel local-global infomax objective. Specifically, BiGI first generates a global representation that is composed of two prototype representations. BiGI then encodes sampled edges as local representations via the proposed subgraph-level attention mechanism. By maximizing the mutual information between local and global representations, BiGI enables nodes in the bipartite graph to be globally relevant. Our model is evaluated on various benchmark datasets for the tasks of top-K recommendation and link prediction. Extensive experiments demonstrate that BiGI achieves consistent and significant improvements over state-of-the-art baselines. Detailed analyses verify the effectiveness of modeling the global properties of the bipartite graph.
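
The local-global infomax objective can be sketched in a DGI-style form (a generic illustration under assumed tensor shapes, not BiGI's exact loss): edge-level local representations are scored against a global summary built from two prototype vectors, with real edges pushed up and corrupted ones pushed down.

```python
import torch
import torch.nn.functional as F

def infomax_loss(local_pos, local_neg, global_repr, bilinear):
    """Local-global infomax sketch: a bilinear discriminator scores local
    edge representations against the global summary; positives (real edges)
    get high scores, negatives (from a corrupted graph) get low scores."""
    pos = bilinear(local_pos, global_repr.expand_as(local_pos)).squeeze(-1)
    neg = bilinear(local_neg, global_repr.expand_as(local_neg)).squeeze(-1)
    return (F.binary_cross_entropy_with_logits(pos, torch.ones_like(pos)) +
            F.binary_cross_entropy_with_logits(neg, torch.zeros_like(neg)))

# Hypothetical tensors: the global summary concatenates a user prototype and
# an item prototype; locals are sampled-edge encodings of matching size.
dim = 16
user_proto, item_proto = torch.randn(dim // 2), torch.randn(dim // 2)
global_repr = torch.cat([user_proto, item_proto]).unsqueeze(0)   # (1, 16)
local_pos = torch.randn(32, dim)      # encodings of real sampled edges
local_neg = torch.randn(32, dim)      # encodings from a corrupted graph
bilinear = torch.nn.Bilinear(dim, dim, 1)
print(infomax_loss(local_pos, local_neg, global_repr, bilinear))
```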

* Accepted by WSDM 2021 

  Access Paper or Ask Questions

Learning Optimal Tree Models Under Beam Search

Jun 27, 2020
Jingwei Zhuo, Ziru Xu, Wei Dai, Han Zhu, Han Li, Jian Xu, Kun Gai

Retrieving relevant targets from an extremely large target set under computational limits is a common challenge for information retrieval and recommendation systems. Tree models, which formulate targets as leaves of a tree with trainable node-wise scorers, have attracted a lot of interest in tackling this challenge due to their logarithmic computational complexity in both training and testing. Tree-based deep models (TDMs) and probabilistic label trees (PLTs) are two representative examples. Despite many practical successes, existing tree models suffer from a training-testing discrepancy: the retrieval performance deterioration caused by beam search in testing is not considered in training. This leads to an intrinsic gap between the most relevant targets and those retrieved by beam search, even with optimally trained node-wise scorers. We take a first step towards understanding and analyzing this problem theoretically and develop the concepts of Bayes optimality under beam search and calibration under beam search as general tools for this analysis. Moreover, to eliminate the discrepancy, we propose a novel algorithm for learning optimal tree models under beam search. Experiments on both synthetic and real data verify the rationality of our theoretical analysis and demonstrate the superiority of our algorithm compared to state-of-the-art methods.
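
For readers unfamiliar with the retrieval step, the sketch below shows generic beam-search retrieval over such a tree (a toy tree and made-up node scores, not the paper's algorithm); it also illustrates the discrepancy the paper targets, since the greedy beam can miss the best leaf.

```python
import heapq

def beam_search_retrieve(children, scorer, root, beam_size, k):
    """Retrieve top-k leaves of a target tree with beam search: expand the
    beam level by level, keeping only the beam_size best-scoring nodes,
    so per-query cost stays logarithmic in the number of targets.

    children: dict node -> list of child nodes ([] for leaves)
    scorer:   callable node -> relevance score (the trained node-wise scorer)
    """
    beam = [root]
    while any(children[n] for n in beam):
        candidates = [c for n in beam for c in (children[n] or [n])]
        beam = heapq.nlargest(beam_size, candidates, key=scorer)
    return heapq.nlargest(k, beam, key=scorer)

# Hypothetical tiny tree over 4 targets (leaves l0..l3) with made-up scores.
children = {"root": ["a", "b"], "a": ["l0", "l1"], "b": ["l2", "l3"],
            "l0": [], "l1": [], "l2": [], "l3": []}
scores = {"root": 0, "a": 0.9, "b": 0.4, "l0": 0.8, "l1": 0.3, "l2": 0.95, "l3": 0.1}
print(beam_search_retrieve(children, scores.get, "root", beam_size=1, k=1))
# -> ['l0']: the greedy beam misses l2, illustrating the train-test discrepancy
```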

* To appear in the 37th International Conference on Machine Learning (ICML 2020) 

  Access Paper or Ask Questions

Spacematch: Using environmental preferences to match occupants to suitable activity-based workspaces

Jun 17, 2020
Tapeesh Sood, Patrick Janssen, Clayton Miller

The activity-based workspace (ABW) paradigm is becoming more popular in commercial office spaces. In this strategy, occupants are given a choice of spaces in which to do their work and personal activities on a day-to-day basis. This paper shows the implementation and testing of the Spacematch platform, which was designed to improve the allocation and management of ABW. An experiment was implemented to test the ability to characterize the preferences of occupants in order to match them with suitable, environmentally comfortable, and spatially efficient flexible workspaces. This approach connects occupants with a catalog of available work desks using a web-based mobile application and enables them to provide real-time environmental feedback. In this work, we tested whether this feedback data can be merged with indoor environmental values from Internet-of-Things (IoT) sensors to optimize space and energy use by grouping occupants with similar preferences. This paper outlines a case-study implementation of this platform in two office buildings. This deployment collected 1,182 responses from 25 field-based research participants over a 30-day study. From this initial data set, the results show that ABW occupants can be segmented into specific types of users based on their accumulated preference data, and that matching preferences can be derived to build a recommendation platform.
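
A toy version of the matching step might look like the following (illustrative preference keys, sensor values, and an unscaled distance, not the Spacematch implementation): desks are ranked by how close their current IoT readings sit to an occupant's learned comfort preferences.

```python
def rank_desks(preference, desk_readings):
    """Rank available desks for one occupant by closeness between the
    occupant's comfort preferences and current IoT sensor readings.

    preference:    dict, e.g. {"temp_c": 23.0, "lux": 400}
    desk_readings: dict desk_id -> dict with the same keys
    """
    def distance(readings):
        # Naive Euclidean distance; a real system would normalize features.
        return sum((readings[k] - preference[k]) ** 2 for k in preference) ** 0.5
    return sorted(desk_readings, key=lambda d: distance(desk_readings[d]))

# Hypothetical preferences and sensor snapshot (values are illustrative only).
pref = {"temp_c": 23.0, "lux": 400}
desks = {"D1": {"temp_c": 26.0, "lux": 700},
         "D2": {"temp_c": 23.5, "lux": 420},
         "D3": {"temp_c": 21.0, "lux": 300}}
print(rank_desks(pref, desks))  # -> ['D2', 'D3', 'D1']
```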


  Access Paper or Ask Questions

An Artificial Intelligence Solution for Electricity Procurement in Forward Markets

Jun 10, 2020
Thibaut Théate, Sébastien Mathieu, Damien Ernst

Retailers and major consumers of electricity generally purchase a critical percentage of their estimated electricity needs years ahead on the forward markets. This long-term electricity procurement task consists of determining when to buy electricity so that the resulting energy cost is minimised and the forecast consumption is covered. In this article, the focus is set on a yearly base-load product, named calendar (CAL), which is tradable up to three years ahead of the delivery period. This paper introduces a novel algorithm that provides recommendations to either buy electricity now or wait for a future opportunity, based on the history of CAL prices. The algorithm relies on deep-learning forecasting techniques and on an indicator quantifying the deviation from a perfectly uniform reference procurement strategy. A new purchase operation is advised when this indicator hits the trigger associated with the market direction predicted by the forecaster. On average, the proposed approach surpasses benchmark procurement strategies and achieves a 1.65% reduction in costs with respect to the perfectly uniform reference procurement strategy, which achieves the mean electricity price. Moreover, in addition to automating the electricity procurement task, the algorithm delivers more consistent results across years than the benchmark strategies.
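
The trigger logic can be paraphrased in a few lines (the volume-based deviation, trigger value, and forecaster flag here are assumptions for illustration, not the paper's exact indicator): compare actual purchases against the perfectly uniform schedule and advise a buy when the lag crosses a trigger while the forecaster predicts rising prices.

```python
def advise_purchase(bought_mwh, total_mwh, periods_elapsed, total_periods,
                    predicted_up, trigger_mwh=50.0):
    """Advise whether to buy now: measure the deviation from the perfectly
    uniform reference strategy and trade when it hits a trigger in the
    direction predicted by the price forecaster."""
    uniform_target = total_mwh * periods_elapsed / total_periods
    deviation = bought_mwh - uniform_target   # < 0 means behind the uniform plan
    if predicted_up and deviation <= -trigger_mwh:
        return True   # prices expected to rise while we lag the uniform schedule
    return False

# Hypothetical procurement state: a third of the horizon elapsed, little bought
# so far, and the (assumed) forecaster predicts an upward market move.
print(advise_purchase(bought_mwh=100, total_mwh=600, periods_elapsed=4,
                      total_periods=12, predicted_up=True))   # -> True
```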

* Submitted to the Elsevier Journal of Commodity Markets 

  Access Paper or Ask Questions
