As the popularity of Location-Based Social Networks (LBSNs) increases, designing accurate models for Point-of-Interest (POI) recommendation receives more attention. POI recommendation is often performed by incorporating contextual information into previously designed recommendation algorithms. The major contextual information considered in POI recommendation includes location attributes (i.e., the exact coordinates of a location, its category, and check-in time), user attributes (i.e., comments, reviews, tips, and check-ins made at locations), and other information, such as the distance of the POI from the user's main activity location and the social ties between users. The right selection of such factors can significantly impact the performance of POI recommendation. However, previous research does not consider the impact of combining these different factors. In this paper, we propose different contextual models and analyze the fusion of different major contextual information in POI recommendation. The major contributions of this paper are: (i) providing an extensive survey of context-aware location recommendation, (ii) quantifying and analyzing the impact of different contextual information (e.g., social, temporal, spatial, and categorical) on POI recommendation using available baselines and two new linear and non-linear models that can incorporate all the major contextual information into a single recommendation model, and (iii) evaluating the considered models on two well-known real-world datasets. Our results indicate that while modeling geographical and temporal influences can improve recommendation quality, fusing all other contextual information into a recommendation model is not always the best strategy.
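As a minimal illustration of the linear-fusion idea described above, the sketch below combines per-context relevance scores into a single ranking. All scores, weights, and candidate POIs are hypothetical, and in practice the weights would be learned rather than fixed.

```python
import numpy as np

# Hypothetical contextual scores for 4 candidate POIs (all values illustrative):
# rows = POIs, columns = (social, temporal, spatial, categorical) relevance in [0, 1].
scores = np.array([
    [0.9, 0.2, 0.5, 0.7],
    [0.1, 0.8, 0.9, 0.3],
    [0.4, 0.4, 0.4, 0.4],
    [0.7, 0.6, 0.2, 0.9],
])

# Linear fusion: the final score is a weighted sum of the contextual scores.
# These weights are fixed placeholders; a learned model would estimate them.
weights = np.array([0.2, 0.3, 0.4, 0.1])
fused = scores @ weights

# Recommend POIs in descending order of fused score.
ranking = np.argsort(-fused)
print(ranking)
```

A non-linear model would replace the weighted sum with, e.g., a small neural network over the same contextual features.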
Digital twin technology has huge potential for widespread application in industrial sectors such as infrastructure, aerospace, and automotive. However, practical adoption of this technology has been slow, mainly due to a lack of application-specific details. Here we focus on a digital twin framework for linear single-degree-of-freedom structural dynamic systems evolving on two different operational time scales in addition to their intrinsic dynamic time scale. Our approach strategically separates into two components: (a) a physics-based nominal model for data processing and response prediction, and (b) a data-driven machine learning model for the time evolution of the system parameters. The physics-based nominal model is system-specific and selected based on the problem under consideration. On the other hand, the data-driven machine learning model is generic. To track the multi-scale evolution of the system parameters, we propose to exploit a mixture of experts as the data-driven model, with a Gaussian Process (GP) as each expert model. The primary idea is to let each expert track the evolution of the system parameters at a single time scale. To learn the hyperparameters of the mixture of GP experts, an efficient framework that exploits expectation-maximization and a sequential Monte Carlo sampler is used. The performance of the digital twin is illustrated on a multi-timescale dynamical system with stiffness and/or mass variations. The digital twin is found to be robust and yields reasonably accurate results. One exciting feature of the proposed digital twin is its capability to provide reasonable predictions at future time steps. Aspects related to data quality and data quantity are also investigated.
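A single GP expert of the kind described above reduces to standard GP regression. The sketch below shows the posterior-mean prediction in plain numpy; the kernel, jitter value, and the synthetic stiffness drift are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

def rbf(x1, x2, ell=1.0, sf=1.0):
    """Squared-exponential kernel between 1-D input arrays x1, x2."""
    d = x1[:, None] - x2[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

def gp_posterior_mean(x_train, y_train, x_test, noise=1e-3):
    """Standard GP regression posterior mean (one 'expert' in the mixture)."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    k_star = rbf(x_test, x_train)
    return k_star @ np.linalg.solve(K, y_train)

# Illustrative data: a slowly drifting stiffness parameter observed over time.
t = np.linspace(0.0, 1.0, 20)
k_obs = 1.0 + 0.1 * t            # linear drift, standing in for real measurements
t_new = np.array([0.5])
print(gp_posterior_mean(t, k_obs, t_new))
```

In the mixture-of-experts setting, several such GPs would be combined, each responsible for one time scale, with gating and hyperparameters learned via the EM and sequential Monte Carlo framework mentioned in the text.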
The incidence of atrial fibrillation (AFib) is increasing at a daunting rate worldwide. For early detection of the risk of AFib, we have developed an automatic detection system based on deep neural networks. Good pre-processing of physiological signals is essential for accurate classification. With this in mind, we propose a two-fold study. First, an end-to-end model is proposed to denoise electrocardiogram (ECG) signals using denoising autoencoders (DAEs). For denoising, we used three networks: a convolutional neural network (CNN), a dense neural network (DNN), and a recurrent neural network (RNN). Comparing the three models, the CNN-based DAE performed better than the other two. Therefore, the signals denoised by the CNN-based DAE were used to train the deep neural networks for classification. The three networks' performance was evaluated using accuracy, specificity, sensitivity, and signal-to-noise ratio (SNR) as the evaluation criteria. The proposed end-to-end deep learning model for detecting atrial fibrillation achieved an accuracy of 99.20%, a specificity of 99.50%, a sensitivity of 99.50%, and a true positive rate of 99.00%. The average accuracy of the algorithms we compared is 96.26%, and our algorithm's accuracy is 3.2% higher than this average. The CNN classification network performed better than the other two. Additionally, the model is computationally efficient enough for real-time applications, taking approximately 1.3 seconds to process a 24-hour ECG signal. The proposed model was also tested on an unseen dataset with different proportions of arrhythmias to examine its robustness, yielding a recall of 99.10% and a precision of 98.50%.
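The SNR criterion used above to score denoising quality can be computed directly. The sketch below uses a synthetic sinusoidal trace with a small residual error standing in for a real ECG record and a DAE's output; all signal parameters are illustrative.

```python
import numpy as np

def snr_db(clean, denoised):
    """Signal-to-noise ratio (dB) of a denoised signal against the clean reference."""
    residual = clean - denoised
    return 10.0 * np.log10(np.sum(clean**2) / np.sum(residual**2))

# Synthetic ECG-like trace: the "denoised" output is the clean signal plus a
# small high-frequency residual, mimicking imperfect powerline-noise removal.
t = np.linspace(0, 1, 500)
clean = np.sin(2 * np.pi * 5 * t)
denoised = clean + 0.01 * np.cos(2 * np.pi * 50 * t)
print(snr_db(clean, denoised))
```

A higher SNR indicates the autoencoder's reconstruction is closer to the clean reference, which is how the three DAE variants would be ranked.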
The temporal and spatial resolution of rainfall data is crucial for climate change modeling studies in which its variability in space and time is considered a primary factor. Rainfall products from different remote sensing instruments (e.g., radar or satellite) provide different space-time resolutions because of differences in their sensing capabilities. We developed an approach that augments rainfall data with increased temporal resolution to complement relatively lower-resolution products. This study proposes a neural network architecture based on Convolutional Neural Networks (CNNs) to improve the temporal resolution of radar-based rainfall products and compares the proposed model with an optical flow-based interpolation method.
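The simplest reference point for such temporal downscaling is pixel-wise linear blending between consecutive frames; both the CNN and the optical-flow method above aim to beat it. The frame sizes, rain rates, and time gap below are illustrative.

```python
import numpy as np

def linear_time_interp(frame_a, frame_b, alpha):
    """Pixel-wise linear interpolation between two rainfall frames.

    alpha in [0, 1] positions the synthetic frame between frame_a (alpha=0)
    and frame_b (alpha=1). A learned CNN model would replace this function
    to capture advection and growth/decay that linear blending misses.
    """
    return (1.0 - alpha) * frame_a + alpha * frame_b

# Two illustrative 3x3 radar rainfall fields (mm/h), 10 minutes apart.
f0 = np.zeros((3, 3))
f1 = np.full((3, 3), 2.0)
mid = linear_time_interp(f0, f1, 0.5)   # synthetic 5-minute frame
print(mid[0, 0])
```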
This paper addresses the difficulty of forecasting multiple financial time series (TS) conjointly using deep neural networks (DNNs). We investigate whether DNN-based models can forecast these TS more efficiently by learning their representation directly. To this end, we make use of the dynamic factor graph (DFG), which we enhance with a novel variable-length attention-based mechanism to render it memory-augmented. Using this mechanism, we propose an unsupervised DNN architecture for multivariate TS forecasting that learns and takes advantage of the relationships between these TS. We test our model on two datasets covering 19 years of investment fund activities. Our experimental results show that our proposed approach significantly outperforms typical DNN-based and statistical models at forecasting the 21-day price trajectory.
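The variable-length attention read at the heart of such a memory-augmented mechanism can be sketched in a few lines of numpy. The dimensions and memory contents below are illustrative, and this generic scaled dot-product form is a stand-in rather than the paper's exact formulation.

```python
import numpy as np

def attention(query, memory):
    """Scaled dot-product attention over a variable-length memory.

    query: (d,) vector; memory: (n, d) matrix of n stored states.
    Returns a weighted read of the memory. n may differ between calls,
    which is the 'variable-length' aspect of such mechanisms.
    """
    scores = memory @ query / np.sqrt(len(query))
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    return weights @ memory

# Illustrative 2-d states: the query is closest to the second memory slot,
# so the read is pulled toward that slot.
mem = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
q = np.array([0.0, 1.0])
read = attention(q, mem)
print(read)
```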
Orthogonal time frequency space (OTFS) modulation has recently emerged as an effective waveform for tackling linear time-varying channels. The OTFS literature assumes approximately constant channel gains for every group of samples within each OTFS block, which limits the maximum Doppler frequency that OTFS can tolerate. Additionally, the presence of a cyclic prefix (CP) in the OTFS signal limits the flexibility to adjust its parameters to improve robustness against channel time variations. Therefore, in this paper, we study the possibility of removing the CP overhead from OTFS and breaking its Doppler limitations through multiple-antenna processing in the large-antenna regime. We asymptotically analyze the performance of time-reversal maximum ratio combining (TR-MRC) for OTFS without CP. We show that doubly dispersive channel effects average out in the large-antenna regime when the maximum Doppler shift is within OTFS limitations. However, for considerably large Doppler shifts exceeding OTFS limitations, a residual Doppler effect remains. Our asymptotic derivations reveal that this effect converges to a scaling of the received symbols in the delay dimension by samples of a Bessel function that depends on the maximum Doppler shift. Hence, we propose a novel residual Doppler correction (RDC) windowing technique that can break the Doppler limitations of OTFS and achieve performance close to that of linear time-invariant channels. Finally, we confirm the validity of our claims through simulations.
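The averaging-out effect of maximum ratio combining in the large-antenna regime can be illustrated with a deliberately simplified sketch: flat fading and no noise, rather than the doubly dispersive channel analyzed above. All sizes and symbols are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def tr_mrc(received, channels):
    """Maximum ratio combining across antennas (flat-fading simplification).

    received: (M, N) samples at M antennas; channels: (M, N) per-antenna gains.
    Combining with the conjugate channel and normalizing removes the fading;
    with many antennas the same averaging suppresses noise and interference.
    """
    return (np.sum(np.conj(channels) * received, axis=0)
            / np.sum(np.abs(channels) ** 2, axis=0))

M, N = 256, 8                                    # many antennas, few symbols
symbols = np.exp(1j * np.pi / 4) * np.ones(N)    # a fixed QPSK-like symbol
h = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
r = h * symbols                                  # noiseless reception for clarity
est = tr_mrc(r, h)
print(np.max(np.abs(est - symbols)))             # recovery error is negligible
```

The time-reversal and delay-Doppler processing of actual TR-MRC for OTFS, and the residual Bessel-function scaling, are beyond this toy example.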
Classifiers are often utilized in time-constrained settings where labels must be assigned to inputs quickly. To address these scenarios, budgeted multi-stage classifiers (MSC) process inputs through a sequence of partial feature acquisition and evaluation steps with early-exit options until a confident prediction can be made. This allows for fast evaluation that can prevent expensive, unnecessary feature acquisition in time-critical instances. However, the performance of MSCs is highly sensitive to several design aspects, making optimization of these systems an important but difficult problem. To approximate an initially intractable combinatorial problem, current approaches to MSC configuration rely on well-behaved surrogate loss functions accounting for two primary objectives (processing cost, error). These approaches have proven useful in many scenarios but are limited by analytic constraints (convexity, smoothness, etc.) and do not manage additional performance objectives. Notably, such methods do not explicitly account for an important aspect of real-time detection systems: the ratio of "accepted" predictions satisfying some confidence criterion imposed by a risk-averse monitor. This paper proposes a problem-specific genetic algorithm, EMSCO, that incorporates a terminal reject option for indecisive predictions and treats MSC design as an evolutionary optimization problem with distinct objectives (accuracy, cost, coverage). The algorithm's design emphasizes Pareto efficiency while respecting a notion of aggregated performance via a unique scalarization. Experiments demonstrate EMSCO's ability to find global optima in a variety of Theta(k^n) solution spaces, and multiple experiments show that EMSCO is competitive with alternative budgeted approaches.
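The Pareto-dominance and scalarization notions used by such evolutionary formulations can be sketched as follows. The weighted-sum scalarization and all candidate scores below are illustrative stand-ins, not EMSCO's actual definitions.

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives maximized)."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a >= b) and np.any(a > b))

def scalarize(obj, weights):
    """Aggregate (accuracy, negated cost, coverage) into one fitness value.

    A plain weighted sum, used here only as an illustrative stand-in for
    the paper's scalarization.
    """
    return float(np.dot(obj, weights))

# Candidate MSC configurations scored as (accuracy, negated cost, coverage).
cands = [(0.90, -0.2, 0.95), (0.85, -0.1, 0.90), (0.80, -0.3, 0.85)]
print(dominates(cands[0], cands[2]))            # better on every objective
front = [c for c in cands if not any(dominates(o, c) for o in cands if o != c)]
print(len(front))                               # non-dominated (Pareto) set
print(scalarize(cands[0], [0.5, 0.2, 0.3]))
```

A genetic algorithm such as EMSCO would evolve a population of such configurations, using dominance for selection pressure and the scalarization to respect aggregated performance.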
Stroke rehabilitation seeks to increase neuroplasticity through the repeated practice of functional motions, but may have minimal impact on recovery because of insufficient repetitions. The optimal training content and quantity are currently unknown because no practical tools exist to measure them. Here, we present PrimSeq, a pipeline to classify and count functional motions trained in stroke rehabilitation. Our approach integrates wearable sensors to capture upper-body motion, a deep learning model to predict motion sequences, and an algorithm to tally motions. The trained model accurately decomposes rehabilitation activities into component functional motions, outperforming competitive machine learning methods. PrimSeq furthermore quantifies these motions at a fraction of the time and labor costs of human experts. We demonstrate the capabilities of PrimSeq in previously unseen stroke patients with a range of upper extremity motor impairments. We expect that these advances will support the rigorous measurement required for quantitative dosing trials in stroke rehabilitation.
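The tallying step of such a pipeline, collapsing a frame-level prediction sequence into counted motion instances, can be sketched with run-length grouping. The label names below are illustrative placeholders, not the paper's motion taxonomy.

```python
from collections import Counter
from itertools import groupby

def count_motions(predicted_labels):
    """Count functional motions from a frame-level label sequence.

    Consecutive identical labels are collapsed into a single motion instance
    (run-length grouping), then instances are tallied per motion class.
    """
    runs = [label for label, _ in groupby(predicted_labels)]
    return Counter(runs)

# Illustrative per-frame predictions from a sequence model.
seq = ["reach", "reach", "grasp", "grasp", "grasp", "reach", "idle", "reach"]
counts = count_motions(seq)
print(counts["reach"], counts["grasp"])
```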
Capturing and simulating intelligent adaptive behaviours within spatially explicit individual-based models remains an ongoing challenge for researchers. While an ever-increasing abundance of real-world behavioural data are collected, few approaches exist that can quantify and formalise key individual behaviours and how they change over space and time. Consequently, commonly used agent decision-making frameworks, such as event-condition-action rules, are often required to focus only on a narrow range of behaviours. We argue that these behavioural frameworks often do not reflect real-world scenarios and fail to capture how behaviours can develop in response to stimuli. In recent years there has been increased interest in Machine Learning methods and their potential to simulate intelligent adaptive behaviours. One method that is beginning to gain traction in this area is Reinforcement Learning (RL). This paper explores how RL can be applied to create emergent agent behaviours using a simple predator-prey Agent-Based Model (ABM). Running a series of simulations, we demonstrate that agents trained using the Proximal Policy Optimisation (PPO) algorithm behave in ways that exhibit properties of real-world intelligent adaptive behaviours, such as hiding, evading and foraging.
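The emergence of a simple "hiding" behaviour can be reproduced at toy scale. The sketch below uses tabular Q-learning rather than PPO for brevity, and the whole environment (a 1-D strip with a refuge cell) is invented for illustration; it only shows that reward-driven training can produce a goal-directed policy.

```python
import numpy as np

# Toy stand-in for the ABM experiment: a prey agent on a 1-D strip learns to
# reach a refuge cell where it can hide. All numbers are illustrative.
rng = np.random.default_rng(1)
n_cells, refuge = 6, 5
Q = np.zeros((n_cells, 2))               # actions: 0 = left, 1 = right

for _ in range(500):                     # training episodes
    s = 0
    for _ in range(20):
        # epsilon-greedy action selection
        a = int(rng.integers(2)) if rng.random() < 0.1 else int(np.argmax(Q[s]))
        s2 = max(0, min(n_cells - 1, s + (1 if a == 1 else -1)))
        r = 1.0 if s2 == refuge else -0.01   # reward only at the refuge
        Q[s, a] += 0.1 * (r + 0.9 * np.max(Q[s2]) - Q[s, a])
        s = s2
        if s == refuge:
            break

# After training, the greedy policy moves toward the refuge from every cell.
print([int(np.argmax(Q[s])) for s in range(n_cells - 1)])
```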
Information field theory (IFT), the information theory for fields, is a mathematical framework for signal reconstruction and non-parametric inverse problems. Here, fields denote physical quantities that change continuously as a function of space (and time), and information theory refers to Bayesian probabilistic logic equipped with the associated entropic information measures. Reconstructing a signal with IFT is a computational problem similar to training a generative neural network (GNN). In this paper, the inference in IFT is reformulated in terms of GNN training, and the cross-fertilization of numerical variational inference methods used in IFT and machine learning is discussed. The discussion suggests that IFT inference can be regarded as a specific form of artificial intelligence. In contrast to classical neural networks, IFT-based GNNs can operate without pre-training thanks to incorporating expert knowledge into their architecture.
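The simplest IFT reconstruction, the free-theory Wiener filter, can be sketched in numpy: given data d = Rs + n with Gaussian prior covariance S and noise covariance N, the posterior mean is m = (S^-1 + R^T N^-1 R)^-1 R^T N^-1 d. The covariances and trivial response below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 64
S = np.diag(1.0 / (1.0 + np.arange(n)) ** 2)   # prior: falling power spectrum
N = 0.01 * np.eye(n)                           # white noise covariance
R = np.eye(n)                                  # trivial response for clarity

# Draw a signal from the prior and noisy data from the measurement model.
s = rng.multivariate_normal(np.zeros(n), S)
d = R @ s + rng.multivariate_normal(np.zeros(n), N)

# Wiener filter: m = (S^-1 + R^T N^-1 R)^-1 R^T N^-1 d
D = np.linalg.inv(np.linalg.inv(S) + R.T @ np.linalg.inv(N) @ R)
m = D @ R.T @ np.linalg.inv(N) @ d

print(np.mean((m - s) ** 2) < np.mean((d - s) ** 2))
```

Incorporating such prior structure into the reconstruction is the "expert knowledge" that lets IFT-based generative models operate without pre-training.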