"Time Series Analysis": models, code, and papers

TSAMT: Time-Series-Analysis-based Motion Transfer among Multiple Cameras

Sep 29, 2021
Yaping Zhao, Guanghan Li, Zhongrui Wang

Along with advances in optical sensors, building imaging systems with heterogeneous cameras has become common practice. While high-resolution (HR) video acquisition and analysis benefit from hybrid sensors, the intrinsic characteristics of multiple cameras lead to an interesting motion transfer problem. Unfortunately, most existing methods provide no theoretical analysis and require intensive training data. In this paper, we propose an algorithm using time series analysis for motion transfer among multiple cameras. Specifically, we first identify seasonality in motion data and then build an additive time series model to extract patterns that can be transferred across cameras. Our approach has a complete and clear mathematical formulation and is therefore efficient and interpretable. Through quantitative evaluations on real-world data, we demonstrate the effectiveness of our method. Furthermore, our motion transfer algorithm can be combined with and facilitate downstream tasks, e.g., enhancing pose estimation on low-resolution (LR) videos with inherent patterns extracted from HR ones. Code is available at https://github.com/IndigoPurple/TSAMT.
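For readers unfamiliar with additive decomposition, the following is a minimal sketch of extracting a seasonal pattern from a motion signal with statsmodels. The synthetic data, the period value, and the transfer step are assumptions for illustration, not the paper's implementation.

```python
# Illustrative sketch: additive decomposition of a motion time series and
# reuse of the extracted seasonal pattern. Synthetic data and the period
# value are assumptions; see the paper's repository for the actual method.
import numpy as np
from statsmodels.tsa.seasonal import seasonal_decompose

rng = np.random.default_rng(0)
period = 25                                   # assumed seasonality length
t = np.arange(500)
hr_motion = 0.01 * t + np.sin(2 * np.pi * t / period) + 0.1 * rng.standard_normal(t.size)

# Additive model: motion = trend + seasonal + residual
decomp = seasonal_decompose(hr_motion, model="additive", period=period)

# The seasonal component is the pattern that could be transferred to
# another camera's (e.g., low-resolution) motion estimate.
seasonal_pattern = decomp.seasonal
lr_motion_trend = 0.01 * t                    # stand-in for an LR camera's coarse estimate
transferred = lr_motion_trend + seasonal_pattern
```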

* 9 pages, 7 figures 
  

Data-Driven Copy-Paste Imputation for Energy Time Series

Jan 05, 2021
Moritz Weber, Marian Turowski, Hüseyin K. Çakmak, Ralf Mikut, Uwe Kühnapfel, Veit Hagenmeyer

Smart meters are a cornerstone of the worldwide transition to smart grids. They typically collect and provide energy time series that are vital for various applications, such as grid simulations, fault detection, load forecasting, load analysis, and load management. Unfortunately, these time series are often characterized by missing values that must be handled before the data can be used. A common approach to handling missing values in time series is imputation. However, existing imputation methods are designed for power time series and do not take into account the total energy of gaps, resulting in jumps or constant shifts when imputing energy time series. To overcome these issues, the present paper introduces the new Copy-Paste Imputation (CPI) method for energy time series. The CPI method copies data blocks with similar properties and pastes them into gaps of the time series while preserving the total energy of each gap. The new method is evaluated on a real-world dataset with six shares of artificially inserted missing values ranging from 1 to 30%. It outperforms the three benchmark imputation methods selected for comparison by far. The comparison furthermore shows that the CPI method uses matching patterns and preserves the total energy of each gap while requiring only a moderate run-time.
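To make the copy-paste idea concrete, here is a heavily simplified sketch: copy a similar-looking block into the gap and rescale it so the gap's total energy matches an estimate from the surrounding data. The candidate selection and energy estimate below are assumptions, not the authors' exact CPI procedure.

```python
# Minimal sketch of a copy-paste style imputation for an energy time series.
import numpy as np

def copy_paste_impute(series, gap_start, gap_len):
    """Fill series[gap_start:gap_start+gap_len] with a copied block,
    rescaled so that the filled gap keeps a plausible total energy."""
    series = series.copy()
    # 1. Estimate the energy the gap should contain from earlier windows
    #    of the same length (a simplification of the paper's estimate).
    windows = [series[i:i + gap_len]
               for i in range(0, gap_start - gap_len, gap_len)]
    target_energy = float(np.mean([w.sum() for w in windows]))

    # 2. Pick the candidate block whose shape is most similar to the
    #    window right before the gap (Euclidean distance as a stand-in).
    ref = series[gap_start - gap_len:gap_start]
    best = min(windows, key=lambda w: np.linalg.norm(w - ref))

    # 3. Paste the block, rescaled to preserve the estimated total energy.
    scale = target_energy / best.sum() if best.sum() != 0 else 1.0
    series[gap_start:gap_start + gap_len] = best * scale
    return series

# Usage on synthetic 15-minute energy data with a one-day gap.
rng = np.random.default_rng(1)
x = np.tile(np.sin(np.linspace(0, 2 * np.pi, 96)) + 2.0, 30) + 0.05 * rng.standard_normal(2880)
x[960:1056] = np.nan                          # artificial one-day gap
filled = copy_paste_impute(x, gap_start=960, gap_len=96)
```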

* 8 pages, 7 figures, submitted to IEEE Transactions on Smart Grid, the first two authors equally contributed to this work 
  

Robust and Explainable Autoencoders for Unsupervised Time Series Outlier Detection (Extended Version)

Apr 07, 2022
Tung Kieu, Bin Yang, Chenjuan Guo, Christian S. Jensen, Yan Zhao, Feiteng Huang, Kai Zheng

Time series data occurs widely, and outlier detection is a fundamental problem in data mining with numerous applications. Existing autoencoder-based approaches deliver state-of-the-art performance on challenging real-world data but are vulnerable to outliers and exhibit low explainability. To address these two limitations, we propose robust and explainable unsupervised autoencoder frameworks that decompose an input time series into a clean time series and an outlier time series using autoencoders. Improved explainability is achieved because clean time series are better explained with easy-to-understand patterns such as trends and periodicities. We provide insight into this by means of a post-hoc explainability analysis and empirical studies. In addition, since outliers are separated from clean time series iteratively, our approach offers improved robustness to outliers, which in turn improves accuracy. We evaluate our approach on five real-world datasets and report improvements over the state-of-the-art approaches in terms of robustness and explainability. This is an extended version of "Robust and Explainable Autoencoders for Unsupervised Time Series Outlier Detection", to appear in IEEE ICDE 2022.
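A minimal sketch of the clean/outlier decomposition idea follows, assuming a plain fully connected autoencoder and a simple alternating scheme; the authors' actual frameworks, architectures, and losses differ.

```python
# Sketch: iteratively split a series into a "clean" part reconstructed by an
# autoencoder and an "outlier" residual. Simplified assumptions throughout.
import torch
import torch.nn as nn

class AE(nn.Module):
    def __init__(self, dim, hidden=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.dec = nn.Linear(hidden, dim)

    def forward(self, x):
        return self.dec(self.enc(x))

def decompose(x, n_outer=5, n_inner=200, lr=1e-2):
    """x: (n_windows, window_len) tensor. Returns (clean, outlier) parts."""
    outlier = torch.zeros_like(x)
    model = AE(x.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(n_outer):
        clean_target = x - outlier            # current estimate of the clean series
        for _ in range(n_inner):              # fit the autoencoder to the clean part
            opt.zero_grad()
            loss = ((model(clean_target) - clean_target) ** 2).mean()
            loss.backward()
            opt.step()
        with torch.no_grad():
            recon = model(clean_target)
            outlier = x - recon               # residual is the outlier series
    return x - outlier, outlier

# Usage: windows of a noisy sine wave with a few injected spikes.
t = torch.linspace(0, 20, 512)
series = torch.sin(t)
series[::97] += 3.0                           # injected outliers
windows = series.unfold(0, 32, 32)            # (16, 32) non-overlapping windows
clean, outliers = decompose(windows)
```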

* This paper has been accepted by IEEE ICDE 2022 
  

Time Series Data Augmentation for Deep Learning: A Survey

Feb 27, 2020
Qingsong Wen, Liang Sun, Xiaomin Song, Jingkun Gao, Xue Wang, Huan Xu

Deep learning has recently performed remarkably well on many time series analysis tasks. The superior performance of deep neural networks relies heavily on a large amount of training data to avoid overfitting. However, labeled data may be limited in many real-world time series applications, such as classification of medical time series and anomaly detection in AIOps. As an effective way to enhance the size and quality of the training data, data augmentation is crucial to the successful application of deep learning models to time series data. In this paper, we systematically review different data augmentation methods for time series. We propose a taxonomy for the reviewed methods and then provide a structured review of these methods by highlighting their strengths and limitations. We also empirically compare different data augmentation methods for different tasks, including time series anomaly detection, classification, and forecasting. Finally, we discuss and highlight future research directions, including data augmentation in the time-frequency domain, augmentation combination, and data augmentation and weighting for imbalanced classes.
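For context, these are the kinds of basic transform-based augmentations such surveys cover, written as small numpy helpers; the parameter values are illustrative defaults, not recommendations from the survey.

```python
# Common time series augmentations: jittering, magnitude scaling, window slicing.
import numpy as np

def jitter(x, sigma=0.03, rng=None):
    """Add Gaussian noise to each time step."""
    rng = rng or np.random.default_rng()
    return x + rng.normal(0.0, sigma, size=x.shape)

def scale(x, sigma=0.1, rng=None):
    """Multiply the whole series by a random factor."""
    rng = rng or np.random.default_rng()
    return x * rng.normal(1.0, sigma)

def window_slice(x, ratio=0.9, rng=None):
    """Crop a random window and stretch it back to the original length."""
    rng = rng or np.random.default_rng()
    n = len(x)
    win = int(n * ratio)
    start = rng.integers(0, n - win + 1)
    sliced = x[start:start + win]
    return np.interp(np.linspace(0, win - 1, n), np.arange(win), sliced)

# Usage
x = np.sin(np.linspace(0, 6 * np.pi, 200))
augmented = [jitter(x), scale(x), window_slice(x)]
```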

* 7 pages, 2 figures, 3 tables, 42 referred papers 
  

DTWSSE: Data Augmentation with a Siamese Encoder for Time Series

Aug 23, 2021
Xinyu Yang, Xinlan Zhang, Zhenguo Zhang, Yahui Zhao, Rongyi Cui

Access to labeled time series data is often limited in the real world, which constrains the performance of deep learning models in time series analysis. Data augmentation is an effective way to address small sample sizes and class imbalance in time series datasets. The two key factors of data augmentation are the distance metric and the choice of interpolation method. SMOTE does not perform well on time series data because it uses a Euclidean distance metric and interpolates directly on the objects. Therefore, we propose DTWSSE, a DTW-based synthetic minority oversampling technique that uses a Siamese encoder for interpolation. To reasonably measure the distance between time series, DTW, which has been verified to be an effective method for time series, is employed as the distance metric. To adapt to the DTW metric, we use an autoencoder trained in an unsupervised self-training manner for interpolation. The encoder is a Siamese neural network that maps the time series data from the DTW hidden space to a Euclidean deep feature space, and the decoder maps the deep feature space back to the DTW hidden space. We validate the proposed method on a number of balanced and unbalanced time series datasets. Experimental results show that the proposed method can lead to better performance of downstream deep learning models.
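As background, here is the textbook dynamic-programming DTW distance that DTWSSE builds on; this is a generic implementation, not the paper's code.

```python
# Classic dynamic-programming DTW distance between two 1-D series.
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three allowed warping moves.
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# Usage: the shifted copy stays close under DTW despite the time offset.
t = np.linspace(0, 2 * np.pi, 100)
x, y = np.sin(t), np.sin(t + 0.3)
print(dtw_distance(x, y))
```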

* Accepted as full research paper in APWEB-WAIM 2021 
  

A Periodicity-based Parallel Time Series Prediction Algorithm in Cloud Computing Environments

Oct 17, 2018
Jianguo Chen, Kenli Li, Huigui Rong, Kashif Bilal, Keqin Li, Philip S. Yu

In the era of big data, practical applications in various domains continually generate large-scale time-series data. Among them, some data show significant or potential periodicity characteristics, such as meteorological and financial data. It is critical to efficiently identify the potential periodic patterns from massive time-series data and provide accurate predictions. In this paper, a Periodicity-based Parallel Time Series Prediction (PPTSP) algorithm for large-scale time-series data is proposed and implemented in the Apache Spark cloud computing environment. To effectively handle the massive historical datasets, a Time Series Data Compression and Abstraction (TSDCA) algorithm is presented, which can reduce the data scale while accurately extracting its characteristics. Based on this, we propose a Multi-layer Time Series Periodic Pattern Recognition (MTSPPR) algorithm using the Fourier Spectrum Analysis (FSA) method. In addition, a Periodicity-based Time Series Prediction (PTSP) algorithm is proposed. Data in the subsequent period are predicted based on all previous period models, in which a time attenuation factor is introduced to control the impact of different periods on the prediction results. Moreover, to improve the performance of the proposed algorithms, we propose a parallel solution on the Apache Spark platform, using the Streaming real-time computing module. To efficiently process the large-scale time-series datasets in distributed computing environments, Distributed Streams (DStreams) and Resilient Distributed Datasets (RDDs) are used to store and process these datasets. Extensive experimental results show that our PPTSP algorithm has significant advantages over other algorithms in terms of prediction accuracy and performance.
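As a toy illustration of the Fourier-spectrum-analysis idea behind periodic pattern recognition, the following finds the dominant period of a single series; the paper's parallel Spark pipeline and multi-layer recognition are not reproduced here.

```python
# Toy Fourier-spectrum periodicity check: dominant frequency -> implied period.
import numpy as np

def dominant_period(x, dt=1.0):
    x = x - x.mean()                        # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=dt)
    k = spectrum[1:].argmax() + 1           # skip the zero-frequency bin
    return 1.0 / freqs[k]

# Usage: daily data with a weekly cycle.
rng = np.random.default_rng(2)
t = np.arange(365)
series = np.sin(2 * np.pi * t / 7) + 0.2 * rng.standard_normal(t.size)
print(dominant_period(series))              # close to 7
```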

  

Joint Time-Vertex Fractional Fourier Transform

Mar 15, 2022
Bünyamin Kartal, Eray Özgünay, Aykut Koç

Graph signal processing successfully captures high-dimensional data on non-Euclidean domains by using graph signals defined on graph vertices. However, data sources on each vertex can also continually provide time-series signals such that graph signals on each vertex are now time-series signals. The joint time-vertex Fourier transform (JFT) and the associated framework of time-vertex signal processing enable us to study such signals defined on joint time-vertex domains by providing spectral analysis. Just as the fractional Fourier transform (FRT) generalizes the ordinary Fourier transform (FT), we propose the joint time-vertex fractional Fourier transform (JFRT) as a generalization of the JFT. JFRT provides an additional fractional analysis tool for joint time-vertex processing by extending both temporal and vertex domain Fourier analysis to fractional orders. We theoretically show that the proposed JFRT generalizes the JFT and satisfies the properties of index additivity, reversibility, reduction to identity, and unitarity (for certain graph topologies). We provide theoretical derivations for JFRT-based denoising as well as a computational cost analysis. Results of numerical experiments are also presented to demonstrate the benefits of JFRT.
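As background, here is a sketch of the ordinary JFT that JFRT generalizes: a graph Fourier transform across vertices combined with a DFT across time. The fractional orders are not implemented, and the small ring graph is an assumption for illustration only.

```python
# Sketch of the ordinary joint time-vertex Fourier transform (JFT).
import numpy as np

def jft(X, L):
    """X: (n_vertices, T) time-vertex signal, L: graph Laplacian."""
    eigvals, U = np.linalg.eigh(L)          # GFT basis from the Laplacian
    gft = U.T @ X                           # transform along the vertex axis
    return np.fft.fft(gft, axis=1)          # DFT along the time axis

# Usage: a 6-vertex ring graph observed for 32 time steps.
n, T = 6, 32
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
L = np.diag(A.sum(axis=1)) - A
X = np.random.default_rng(3).standard_normal((n, T))
X_hat = jft(X, L)
```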

* 12 pages, 6 figures 
  

Analysis of Hydrological and Suspended Sediment Events from Mad River Watershed using Multivariate Time Series Clustering

Nov 28, 2019
Ali Javed, Scott D. Hamshaw, Donna M. Rizzo, Byung Suk Lee

Hydrological storm events are a primary driver for transporting water quality constituents such as turbidity, suspended sediments and nutrients. Analyzing the concentration (C) of these water quality constituents in response to increased streamflow discharge (Q), particularly when monitored at high temporal resolution during a hydrological event, helps to characterize the dynamics and flux of such constituents. A conventional approach to storm event analysis is to reduce the C-Q time series to two-dimensional (2-D) hysteresis loops and analyze these 2-D patterns. While effective and informative to some extent, this hysteresis loop approach has limitations because projecting the C-Q time series onto a 2-D plane obscures detail (e.g., temporal variation) associated with the C-Q relationships. In this paper, we address this issue using a multivariate time series clustering approach. Clustering is applied to sequences of river discharge and suspended sediment data (acquired through turbidity-based monitoring) from six watersheds located in the Lake Champlain Basin in the northeastern United States. While clusters of the hydrological storm events using the multivariate time series approach were found to be correlated to 2-D hysteresis loop classifications and watershed locations, the clusters differed from the 2-D hysteresis classifications. Additionally, using available meteorological data associated with storm events, we examine the characteristics of computational clusters of storm events in the study watersheds and identify the features driving the clustering approach.
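The sketch below illustrates multivariate time series clustering on two-channel (Q, C) event windows using DTW k-means from tslearn; the synthetic events and cluster count are assumptions, not the study's data or configuration.

```python
# Illustrative clustering of (discharge Q, concentration C) storm-event windows.
import numpy as np
from tslearn.clustering import TimeSeriesKMeans

rng = np.random.default_rng(4)
T = 60
events = []
for _ in range(30):                            # events where C peaks before Q
    peak = rng.integers(15, 25)
    q = np.exp(-0.5 * ((np.arange(T) - peak) / 6.0) ** 2)
    c = np.roll(q, -3) + 0.05 * rng.standard_normal(T)
    events.append(np.stack([q, c], axis=1))
for _ in range(30):                            # events where C lags Q
    peak = rng.integers(15, 25)
    q = np.exp(-0.5 * ((np.arange(T) - peak) / 6.0) ** 2)
    c = np.roll(q, 3) + 0.05 * rng.standard_normal(T)
    events.append(np.stack([q, c], axis=1))

X = np.stack(events)                           # shape (n_events, T, 2)
labels = TimeSeriesKMeans(n_clusters=2, metric="dtw", random_state=0).fit_predict(X)
print(labels)
```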

  

Time Series Analysis and Forecasting of COVID-19 Cases Using LSTM and ARIMA Models

Jun 05, 2020
Arko Barman

Coronavirus disease 2019 (COVID-19) is a global public health crisis that has been declared a pandemic by the World Health Organization. Forecasting country-wise COVID-19 cases is necessary to help policymakers and healthcare providers prepare for the future. This study explores the performance of several Long Short-Term Memory (LSTM) models and the Auto-Regressive Integrated Moving Average (ARIMA) model in forecasting the number of confirmed COVID-19 cases. Time series of daily cumulative COVID-19 cases were used for generating 1-day, 3-day, and 5-day forecasts using several LSTM models and ARIMA. Two novel k-period performance metrics - k-day Mean Absolute Percentage Error (kMAPE) and k-day Median Symmetric Accuracy (kMdSA) - were developed for evaluating the performance of the models in forecasting time series values for multiple days. Errors in prediction using kMAPE and kMdSA for the LSTM models were both as low as 0.05%, while those for ARIMA were 0.07% and 0.06%, respectively. LSTM models slightly underestimated, while ARIMA slightly overestimated, the numbers in the forecasts. The performance of the LSTM models is comparable to that of ARIMA in forecasting COVID-19 cases. While ARIMA requires longer sequences, LSTMs can perform reasonably well with sequence sizes as small as 3; however, LSTMs require a large number of training samples. Further, the proposed k-period performance metrics are likely to be useful for evaluating time series models that predict multiple periods. Based on these metrics, both LSTMs and ARIMA are useful for time series analysis and forecasting of COVID-19.
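The following sketches multi-step ARIMA forecasting on a cumulative-case style series with statsmodels, scored with a plain MAPE over the k forecast days. The ARIMA order, the synthetic data, and this simple MAPE are illustrative choices, not the paper's configuration or its kMAPE/kMdSA definitions.

```python
# Sketch: 5-day-ahead ARIMA forecast on synthetic cumulative counts.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(5)
daily_new = np.maximum(0, 100 + np.cumsum(rng.normal(2, 10, size=120)))
cumulative = np.cumsum(daily_new)             # cumulative confirmed cases

k = 5
train, test = cumulative[:-k], cumulative[-k:]
fit = ARIMA(train, order=(2, 1, 2)).fit()
forecast = fit.forecast(steps=k)              # k-day-ahead forecast

mape = 100.0 * np.mean(np.abs((forecast - test) / test))
print(f"{k}-day MAPE: {mape:.2f}%")
```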

  

Dynamic Window-level Granger Causality of Multi-channel Time Series

Jun 14, 2020
Zhiheng Zhang, Wenbo Hu, Tian Tian, Jun Zhu

The Granger causality method analyzes time series causalities without building a complex causality graph. However, the traditional Granger causality method assumes that causalities lie between time series channels and remain constant, so it cannot model real-world time series data whose causalities vary dynamically along the channels. In this paper, we present the dynamic window-level Granger causality method (DWGC) for multi-channel time series data. We build the causality model at the window level by performing an F-test on the forecasting errors over sliding windows. We propose a causality-indexing trick in our DWGC method to reweight the original time series data. Essentially, the causality indexing decreases the auto-correlation and increases the cross-correlation causal effects, which improves the DWGC method. Theoretical analysis and experimental results on two synthetic datasets and one real-world dataset show that the improved DWGC method with causality indexing better detects window-level causalities.
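In the spirit of window-level analysis, the sketch below runs an F-test based Granger causality test on sliding windows of a two-channel series and records a p-value per window. The window size and lag are illustrative assumptions, and the paper's causality-indexing reweighting is not included.

```python
# Sliding-window Granger causality with statsmodels' F-test based routine.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(6)
T, lag, win, step = 600, 2, 100, 50
x = rng.standard_normal(T)
y = np.zeros(T)
for t in range(2, T):
    # y is driven by past x only in the second half of the series.
    coupling = 0.8 if t > T // 2 else 0.0
    y[t] = 0.3 * y[t - 1] + coupling * x[t - 2] + 0.1 * rng.standard_normal()

for start in range(0, T - win + 1, step):
    window = np.column_stack([y[start:start + win], x[start:start + win]])
    res = grangercausalitytests(window, maxlag=lag, verbose=False)
    p = res[lag][0]["ssr_ftest"][1]           # p-value of the F-test at this lag
    print(f"window [{start}, {start + win}): p = {p:.3f}")
```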

  