"Time Series Analysis": models, code, and papers

Machine Learning Algorithms for Time Series Analysis and Forecasting

Nov 25, 2022
Rameshwar Garg, Shriya Barpanda, Girish Rao Salanke N S, Ramya S

Time series data appears everywhere, from sales records to patients' health metrics, and the ability to work with such data has become a necessity; time series analysis and forecasting are the tools for doing so. They are essential for any Machine Learning practitioner, as they deepen the understanding of the characteristics of data. Forecasting predicts the future value of a variable based on its past occurrences. This paper presents a detailed survey of the methods used for forecasting and thoroughly explains the complete forecasting process, from preprocessing to validation. Various statistical and deep learning models are considered, notably ARIMA, Prophet, and LSTMs. Hybrid Machine Learning models are also explored and elucidated. Our work can be used to develop a good understanding of the forecasting process and to identify the various state-of-the-art models in use today.

* 9 Pages, 4 Figures, 9 Formulae, 1 Table, 6th International Conference on Microelectronics, Computing & Communication Systems (MCCS-2021), Paper ID: MCCS21084, Presented at MCCS-2021, Accepted, In Press 
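The abstract stays high-level, so as a concrete illustration of the simplest statistical model such surveys cover, here is a minimal autoregressive AR(p) fit-and-forecast sketch in NumPy. This is not code from the paper; the function names and the plain least-squares fitting are illustrative choices.

```python
import numpy as np

def fit_ar(series, p):
    """Fit an AR(p) model by ordinary least squares.

    Returns (coefs, intercept) such that x_t ~ intercept + sum_i coefs[i] * x_{t-1-i}.
    """
    x = np.asarray(series, dtype=float)
    # Design matrix: each row holds the p values preceding its target.
    X = np.column_stack([x[p - 1 - i: len(x) - 1 - i] for i in range(p)])
    X = np.column_stack([np.ones(len(X)), X])
    y = x[p:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:], beta[0]

def forecast_ar(series, coefs, intercept, steps):
    """Roll the fitted AR model forward `steps` points, feeding
    each prediction back in as history (recursive forecasting)."""
    hist = list(series)
    out = []
    for _ in range(steps):
        nxt = intercept + sum(c * hist[-1 - i] for i, c in enumerate(coefs))
        hist.append(nxt)
        out.append(nxt)
    return out
```

On noiseless AR-generated data the least-squares fit recovers the generating coefficients exactly; real series additionally need the differencing ("I") and moving-average ("MA") parts of full ARIMA, which libraries such as statsmodels provide.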

TimesNet: Temporal 2D-Variation Modeling for General Time Series Analysis

Oct 05, 2022
Haixu Wu, Tengge Hu, Yong Liu, Hang Zhou, Jianmin Wang, Mingsheng Long

Time series analysis is of immense importance in extensive applications, such as weather forecasting, anomaly detection, and action recognition. This paper focuses on temporal variation modeling, which is the common key problem of extensive analysis tasks. Previous methods attempt to accomplish this directly from the 1D time series, which is extremely challenging due to the intricate temporal patterns. Based on the observation of multi-periodicity in time series, we disentangle the complex temporal variations into multiple intraperiod- and interperiod-variations. To tackle the limitations of 1D time series in representation capability, we extend the analysis of temporal variations into 2D space by transforming the 1D time series into a set of 2D tensors based on multiple periods. This transformation embeds the intraperiod- and interperiod-variations into the columns and rows of the 2D tensors respectively, making the 2D-variations easy to model with 2D kernels. Technically, we propose TimesNet, with TimesBlock, as a task-general backbone for time series analysis. TimesBlock can discover the multi-periodicity adaptively and extract the complex temporal variations from transformed 2D tensors by a parameter-efficient inception block. Our proposed TimesNet achieves consistent state-of-the-art performance on five mainstream time series analysis tasks, including short- and long-term forecasting, imputation, classification, and anomaly detection.
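The core 1D-to-2D transformation described above is easy to sketch: pick dominant periods from the FFT amplitude spectrum, then fold the series so that each row spans one period. This is a simplified NumPy illustration of the idea, not the TimesNet implementation (which applies learned inception blocks to the folded tensors).

```python
import numpy as np

def dominant_periods(x, k=2):
    """Pick the k strongest candidate periods from the FFT amplitude
    spectrum, the period-discovery heuristic the abstract describes."""
    amp = np.abs(np.fft.rfft(x))
    amp[0] = 0.0                       # drop the DC component
    freqs = np.argsort(amp)[::-1][:k]  # top-k frequency bins
    return [len(x) // f for f in freqs if f > 0]

def fold_2d(x, period):
    """Reshape a 1D series into a (rows, period) tensor: columns then
    hold intraperiod variation, rows hold interperiod variation."""
    rows = len(x) // period
    return np.asarray(x[:rows * period]).reshape(rows, period)
```

For a strictly periodic signal the folded rows are identical, so any 2D kernel sliding over the tensor sees the within-period pattern along one axis and its evolution across periods along the other.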

Machine Learning with Probabilistic Law Discovery: A Concise Introduction

Dec 22, 2022
Alexander Demin, Denis Ponomaryov

Probabilistic Law Discovery (PLD) is a logic-based Machine Learning method, which implements a variant of probabilistic rule learning. In several aspects, PLD is close to Decision Tree/Random Forest methods, but it differs significantly in how relevant rules are defined. The learning procedure of PLD solves the optimization problem related to the search for rules (called probabilistic laws), which have a minimal length and relatively high probability. At inference, ensembles of these rules are used for prediction. Probabilistic laws are human-readable, and PLD-based models are transparent and inherently interpretable. Applications of PLD include classification, clustering, and regression tasks, as well as time series analysis, anomaly detection, and adaptive (robotic) control. In this paper, we outline the main principles of PLD, highlight its benefits and limitations, and provide some application guidelines.
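To make the central notion concrete: a probabilistic law is a short premise-to-conclusion rule scored by its empirical conditional probability on the data. The sketch below is an illustrative rendering of that scoring step over dict-shaped records, not the authors' implementation (which also searches for minimal-length rules and builds ensembles).

```python
def rule_probability(records, premise, conclusion):
    """Estimate P(conclusion | premise) over a list of dict records:
    the 'probability' attached to a candidate rule in the PLD sense."""
    matches = [r for r in records
               if all(r.get(k) == v for k, v in premise.items())]
    if not matches:
        return 0.0
    hits = sum(all(r.get(k) == v for k, v in conclusion.items())
               for r in matches)
    return hits / len(matches)
```

A rule like {"sky": "overcast"} -> {"rain": 1} is human-readable by construction, which is where the claimed interpretability of PLD-based models comes from.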

Twitter's Agenda-Setting Role: A Study of Twitter Strategy for Political Diversion

Dec 16, 2022
Yuyang Chen, Xiaoyu Cui, Yunjie Song, Manli Wu

This study verified the effectiveness of Donald Trump's Twitter campaign in guiding agenda-setting and deflecting political risk, examined Trump's Twitter communication strategy, and explored the communication effects of his tweet content during the Covid-19 pandemic. We collected all tweets posted by Trump on the Twitter platform from January 1, 2020 to December 31, 2020. We used Ordinary Least Squares (OLS) regression analysis with a fixed effects model to test for the existence of the Twitter strategy. The correlation between the number of confirmed daily Covid-19 diagnoses and the number of particular thematic tweets was investigated using time series analysis. Empirical analysis revealed that the Twitter strategy was used to divert public attention from negative Covid-19 reports during the epidemic, and that it produced a powerful political communication effect on Twitter. However, findings suggest that Trump did not use false claims to divert political risk and shape public opinion.

* 14 pages, 6 tables 

A Functional approach for Two Way Dimension Reduction in Time Series

Jan 01, 2023
Aniruddha Rajendra Rao, Haiyan Wang, Chetan Gupta

The rise in data has led to the need for dimension reduction techniques, especially in the area of non-scalar variables, including time series, natural language processing, and computer vision. In this paper, we specifically investigate dimension reduction for time series through functional data analysis. Current methods for dimension reduction in functional data are functional principal component analysis and functional autoencoders, but these are limited to linear mappings or to scalar representations of the time series, which is inefficient; in real data applications, the nature of the data is much more complex. We propose a non-linear function-on-function approach, consisting of a functional encoder and a functional decoder, that uses continuous hidden layers of continuous neurons to learn the structure inherent in functional data, addressing the aforementioned concerns with the existing approaches. Our approach gives a low-dimensional latent representation by reducing both the number of functional features and the timepoints at which the functions are observed. The effectiveness of the proposed model is demonstrated through multiple simulations and real data examples.

* IEEE BigData 2022  
* 10 pages, 4 figures, 4 tables 
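"Two-way" reduction means shrinking both axes of the curve matrix: fewer latent features per curve and fewer timepoints per basis function. The sketch below shows only a linear FPCA-style baseline of that idea via SVD; it is a stand-in for intuition, not the paper's non-linear functional encoder/decoder, and the coarsening-by-subsampling of the basis is an illustrative simplification.

```python
import numpy as np

def two_way_reduce(X, k_feat, k_time):
    """Linear two-way reduction of curves X (n_curves x n_timepoints).

    SVD of the centered curves yields k_feat scores per curve (feature
    reduction); each retained basis function is then subsampled down to
    k_time timepoints (timepoint reduction).
    """
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    scores = U[:, :k_feat] * s[:k_feat]           # per-curve latent features
    idx = np.linspace(0, X.shape[1] - 1, k_time).astype(int)
    basis = Vt[:k_feat][:, idx]                   # coarsened basis functions
    return scores, basis
```

The paper's contribution is precisely to replace the linear map above with continuous non-linear hidden layers, which this baseline cannot express.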

Convolution-enhanced Evolving Attention Networks

Dec 16, 2022
Yujing Wang, Yaming Yang, Zhuo Li, Jiangang Bai, Mingliang Zhang, Xiangtai Li, Jing Yu, Ce Zhang, Gao Huang, Yunhai Tong

Attention-based neural networks, such as Transformers, have become ubiquitous in numerous applications, including computer vision, natural language processing, and time-series analysis. In all kinds of attention networks, the attention maps are crucial as they encode semantic dependencies between input tokens. However, most existing attention networks perform modeling or reasoning based on representations, wherein the attention maps of different layers are learned separately without explicit interactions. In this paper, we propose a novel and generic evolving attention mechanism, which directly models the evolution of inter-token relationships through a chain of residual convolutional modules. The major motivations are twofold. On the one hand, the attention maps in different layers share transferable knowledge, thus adding a residual connection can facilitate the information flow of inter-token relationships across layers. On the other hand, there is naturally an evolutionary trend among attention maps at different abstraction levels, so it is beneficial to exploit a dedicated convolution-based module to capture this process. Equipped with the proposed mechanism, the convolution-enhanced evolving attention networks achieve superior performance in various applications, including time-series representation, natural language understanding, machine translation, and image classification. Especially on time-series representation tasks, Evolving Attention-enhanced Dilated Convolutional (EA-DC-) Transformer outperforms state-of-the-art models significantly, achieving an average of 17% improvement compared to the best SOTA. To the best of our knowledge, this is the first work that explicitly models the layer-wise evolution of attention maps. Our implementation is available at https://github.com/pkuyym/EvolvingAttention

* Extension of the previous work (arXiv:2102.12895). arXiv admin note: text overlap with arXiv:2102.12895 
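The mechanism described above, attention maps that evolve across layers through a residual convolutional link, can be caricatured in a few lines of NumPy. This sketch uses a fixed 3x3 box filter where the paper uses learned convolutional modules, and a scalar mixing weight `alpha`; both are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def smooth3x3(A):
    """3x3 box filter with edge padding: a fixed stand-in for the
    paper's learned residual convolutional module over attention maps."""
    P = np.pad(A, 1, mode="edge")
    return sum(P[i:i + A.shape[0], j:j + A.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def evolving_attention(Q, K, prev_map=None, alpha=0.5):
    """Attention logits augmented with a convolved copy of the previous
    layer's attention map (the residual 'evolution' chain)."""
    logits = Q @ K.T / np.sqrt(Q.shape[1])
    if prev_map is not None:
        logits = logits + alpha * smooth3x3(prev_map)
    return softmax(logits)
```

Stacking layers then passes each layer's map into the next call, so inter-token relationships are refined directly rather than re-derived from token representations alone.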

A plug-in graph neural network to boost temporal sensitivity in fMRI analysis

Jan 01, 2023
Irmak Sivgin, Hasan A. Bedel, Şaban Öztürk, Tolga Çukur

Learning-based methods have recently enabled performance leaps in analysis of high-dimensional functional MRI (fMRI) time series. Deep learning models that receive as input functional connectivity (FC) features among brain regions have been commonly adopted in the literature. However, many models focus on temporally static FC features across a scan, reducing sensitivity to dynamic features of brain activity. Here, we describe a plug-in graph neural network that can be flexibly integrated into a main learning-based fMRI model to boost its temporal sensitivity. Receiving brain regions as nodes and blood-oxygen-level-dependent (BOLD) signals as node inputs, the proposed GraphCorr method leverages a node embedder module based on a transformer encoder to capture temporally-windowed latent representations of BOLD signals. GraphCorr also leverages a lag filter module to account for delayed interactions across nodes by computing cross-correlation of windowed BOLD signals across a range of time lags. Information captured by the two modules is fused via a message passing algorithm executed on the graph, and enhanced node features are then computed at the output. These enhanced features are used to drive a subsequent learning-based model to analyze fMRI time series with elevated sensitivity. Comprehensive demonstrations on two public datasets indicate improved classification performance and interpretability for several state-of-the-art graphical and convolutional methods that employ GraphCorr-derived feature representations of fMRI time series as their input.
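The lag-filter idea, scoring delayed interactions between node signals by cross-correlating them over a range of time lags, reduces to a short loop. The sketch below is a simplified single-pair version for intuition, not the GraphCorr module itself (which operates on windowed signals across all node pairs and feeds the result into message passing).

```python
import numpy as np

def max_lagged_corr(x, y, max_lag):
    """Pearson correlation between two node signals over a range of
    time lags, keeping the strongest: pairs x[t] with y[t + lag]."""
    best = -1.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = x[:len(x) - lag], y[lag:]
        else:
            a, b = x[-lag:], y[:len(y) + lag]
        if len(a) > 1 and np.std(a) > 0 and np.std(b) > 0:
            best = max(best, float(np.corrcoef(a, b)[0, 1]))
    return best
```

A plain zero-lag correlation would miss a BOLD interaction that arrives a few samples late; scanning lags recovers it, which is exactly the sensitivity the module is meant to add.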

Deep learning delay coordinate dynamics for chaotic attractors from partial observable data

Nov 20, 2022
Charles D. Young, Michael D. Graham

A common problem in time series analysis is to predict dynamics with only scalar or partial observations of the underlying dynamical system. For data on a smooth compact manifold, Takens' theorem proves that a time-delayed embedding of the partial state is diffeomorphic to the attractor, although for chaotic and highly nonlinear systems learning these delay coordinate mappings is challenging. We utilize deep artificial neural networks (ANNs) to learn discrete-time maps and continuous-time flows of the partial state. Given training data for the full state, we also learn a reconstruction map. Thus, predictions of a time series can be made from the current state and several previous observations with embedding parameters determined from time series analysis. The state space for time evolution is of comparable dimension to reduced-order manifold models. These are advantages over recurrent neural network models, which require a high-dimensional internal state or additional memory terms and hyperparameters. We demonstrate the capacity of deep ANNs to predict chaotic behavior from a scalar observation on a manifold of dimension three via the Lorenz system. We also consider multivariate observations of the Kuramoto-Sivashinsky equation, where the observation dimension required for accurately reproducing dynamics increases with the manifold dimension via the spatial extent of the system.
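The delay-coordinate construction underlying this approach is standard and compact: stack lagged copies of the scalar observation into vectors, which (by Takens' theorem) trace out a set diffeomorphic to the attractor. The sketch below builds only that embedding; the paper's contribution, learning the forward map and reconstruction map with ANNs, sits on top of it.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Time-delay embedding of a scalar series: row t is
    (x[t], x[t+tau], ..., x[t+(dim-1)*tau]).

    The embedding dimension `dim` and delay `tau` are chosen by
    standard time-series analysis (e.g. false nearest neighbors and
    mutual information), as the abstract notes.
    """
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])
```

A learned discrete-time map then advances each embedded row one step, playing the role the high-dimensional hidden state plays in a recurrent network.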

Unsupervised Anomaly Detection in Time-series: An Extensive Evaluation and Analysis of State-of-the-art Methods

Dec 06, 2022
Nesryne Mejri, Laura Lopez-Fuentes, Kankana Roy, Pavel Chernakov, Enjie Ghorbel, Djamila Aouada

Unsupervised anomaly detection in time-series has been extensively investigated in the literature. Notwithstanding the relevance of this topic in numerous application fields, a complete and extensive evaluation of recent state-of-the-art techniques is still missing. Few efforts have been made to compare existing unsupervised time-series anomaly detection methods rigorously, and only standard performance metrics, namely precision, recall, and F1-score, are usually considered, so essential aspects for assessing their practical relevance are neglected. This paper proposes an original and in-depth evaluation study of recent unsupervised anomaly detection techniques in time-series. Instead of relying solely on standard performance metrics, additional yet informative metrics and protocols are taken into account. In particular, (1) more elaborate performance metrics specifically tailored for time-series are used; (2) the model size and the model stability are studied; (3) an analysis of the tested approaches with respect to the anomaly type is provided; and (4) a clear and unique protocol is followed for all experiments. Overall, this extensive analysis aims to assess the maturity of state-of-the-art time-series anomaly detection, give insights regarding their applicability under real-world setups, and provide the community with a more complete evaluation protocol.
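One reason plain precision/recall/F1 mislead on time-series is the evaluation protocol applied before computing them. A widely used (and widely debated) example is point adjustment: if any point inside a true anomalous segment is flagged, the whole segment counts as detected. The sketch below implements that protocol for illustration; it is one of several protocols such studies examine, not necessarily this paper's exact choice.

```python
def point_adjust(labels, preds):
    """Point-adjusted predictions: a true anomalous segment with at
    least one flagged point is counted as fully detected."""
    preds = list(preds)
    i = 0
    while i < len(labels):
        if labels[i] == 1:
            j = i
            while j < len(labels) and labels[j] == 1:
                j += 1                     # [i, j) is one true segment
            if any(preds[i:j]):            # segment hit at least once
                preds[i:j] = [1] * (j - i)
            i = j
        else:
            i += 1
    return preds

def f1(labels, preds):
    """Standard F1 over pointwise binary labels/predictions."""
    tp = sum(l and p for l, p in zip(labels, preds))
    fp = sum((not l) and p for l, p in zip(labels, preds))
    fn = sum(l and (not p) for l, p in zip(labels, preds))
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0
```

A single lucky detection inside a long segment inflates the adjusted F1, which is precisely why range-aware metrics and unified protocols matter for assessing practical relevance.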

Non-Parametric and Regularized Dynamical Wasserstein Barycenters for Time-Series Analysis

Oct 07, 2022
Kevin C. Cheng, Shuchin Aeron, Michael C. Hughes, Eric L. Miller

We consider probabilistic time-series models for systems that gradually transition among a finite number of states. We are particularly motivated by applications such as human activity analysis, where the observed time-series contains segments representing distinct activities such as running or walking as well as segments characterized by continuous transition among these states. Accordingly, the dynamical Wasserstein barycenter (DWB) model introduced in Cheng et al. (2021) [1] associates with each state, which we call a pure state, its own probability distribution, and models the continuous transitions with the dynamics of the barycentric weights that combine the pure-state distributions via the Wasserstein barycenter. Here, focusing on the univariate case where Wasserstein distances and barycenters can be computed in closed form, we extend [1] by discussing two challenges associated with learning a DWB model and two improvements. First, we highlight the issue of uniqueness in identifying the model parameters. Second, we discuss the challenge of estimating a dynamically evolving distribution given a limited number of samples. The uncertainty associated with this estimation may cause a model's learned dynamics to not reflect the gradual transitions characteristic of the system. The first improvement introduces a regularization framework that addresses this uncertainty by imposing temporal smoothness on the dynamics of the barycentric weights while leveraging the understanding of the non-uniqueness of the problem. This is done without defining an entire stochastic model for the dynamics of the system as in [1]. Our second improvement lifts the Gaussian assumption on the pure-state distributions in [1] by proposing a quantile-based non-parametric representation. We pose model estimation in a variational framework and propose a finite approximation to the infinite-dimensional problem.
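The univariate closed form the abstract relies on is worth stating concretely: in 1D, the W2 Wasserstein barycenter's quantile function is the weighted average of the input quantile functions, which also matches the quantile-based non-parametric representation of pure states. The sketch below shows only that closed form on raw samples; it is an illustration, not the paper's variational estimation procedure.

```python
import numpy as np

def quantile_barycenter(samples_list, weights, n_q=100):
    """Univariate Wasserstein barycenter via quantile functions.

    Evaluates each sample set's empirical quantile function on a common
    grid and returns the weighted average: the barycenter's quantiles.
    """
    qs = np.linspace(0.0, 1.0, n_q)
    quantiles = np.stack([np.quantile(np.asarray(s, float), qs)
                          for s in samples_list])
    w = np.asarray(weights, float)
    return (w[:, None] * quantiles).sum(axis=0)
```

For two point masses at 0 and 10 with equal weights, the barycenter sits at 5: mass is transported, not mixed, which is why barycentric interpolation between pure states models a gradual physical transition better than a simple mixture would.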