"Time Series Analysis": models, code, and papers

Fast Stability Scanning for Future Grid Scenario Analysis

Dec 14, 2016
Ruidong Liu, Gregor Verbic, Jin Ma

Future grid scenario analysis requires a major departure from conventional power system planning, where only a handful of the most critical operating conditions is typically analyzed. Capturing the inter-seasonal variations in renewable generation of a future grid scenario necessitates computationally intensive time-series analysis. In this paper, we propose a planning framework for fast stability scanning of future grid scenarios using a novel feature selection algorithm and a novel self-adaptive PSO-k-means clustering algorithm. To achieve the computational speed-up, the stability analysis is performed only on a small number of representative cluster centroids instead of on the full set of operating conditions. As a case study, we perform small-signal stability and steady-state voltage stability scanning of a simplified model of the Australian National Electricity Market with significant penetration of renewable generation. The simulation results show the effectiveness of the proposed approach: compared to exhaustive time-series scanning, the framework reduces the computational burden by up to a factor of ten, with an acceptable level of accuracy.

* 10 pages, 7 figures, 2 tables. Submitted for publication to IEEE Transactions on Power Systems 
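
The clustering idea can be illustrated with a short sketch: cluster the yearly operating conditions and run the expensive stability analysis only at the cluster centroids, letting every hour inherit its centroid's verdict. Plain scikit-learn k-means stands in for the paper's self-adaptive PSO-k-means, and assess_stability is a hypothetical placeholder for the actual small-signal or voltage stability scan.

```python
# Minimal sketch, assuming k-means in place of the paper's PSO-k-means and a
# placeholder stability check instead of the real power-system analysis.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# 8760 hourly operating conditions described by a few selected features
# (e.g. regional demand and renewable output after feature selection).
conditions = rng.random((8760, 6))

def assess_stability(operating_point):
    # Placeholder for the computationally expensive stability scan.
    return operating_point.sum() < 3.0

k = 200  # a few hundred centroids instead of 8760 full analyses
km = KMeans(n_clusters=k, n_init=5, random_state=0).fit(conditions)

centroid_result = np.array([assess_stability(c) for c in km.cluster_centers_])
# Each hour inherits the verdict of its representative centroid.
hourly_result = centroid_result[km.labels_]
print(f"{hourly_result.mean():.1%} of hours flagged stable")
```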
  

Chatter Detection in Turning Using Machine Learning and Similarity Measures of Time Series via Dynamic Time Warping

Aug 05, 2019
Melih C. Yesilli, Firas A. Khasawneh, Andreas Otto

Chatter detection from sensor signals has been an active field of research. While some success has been reported using several featurization tools and machine learning algorithms, existing methods have several drawbacks, such as manual preprocessing and the need for large data sets. In this paper, we present an alternative approach for chatter detection based on the K-Nearest Neighbor (kNN) algorithm for classification and Dynamic Time Warping (DTW) as a time series similarity measure. The time series used are acceleration signals acquired from the tool holder in a series of turning experiments. Our results show that this approach achieves detection accuracies that in most cases outperform existing methods. We compare our results to traditional methods based on the Wavelet Packet Transform (WPT) and the Ensemble Empirical Mode Decomposition (EEMD), as well as to the more recent Topological Data Analysis (TDA) based approach. We show that in three out of four cutting configurations our DTW-based approach attains the highest average classification rate, reaching as high as 99% accuracy in one case. Our approach does not require feature extraction, is capable of reusing a classifier across different cutting configurations, and uses reasonably sized training sets. Although the high accuracy of our approach is associated with high computational cost, this is specific to the DTW implementation that we used; we highlight available, very fast DTW implementations that can even run on small consumer electronics. Therefore, further code optimization and the significantly reduced computational effort during the implementation phase make our approach a viable option for in-process chatter detection.
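
A minimal sketch of the kNN-plus-DTW idea is shown below: a textbook dynamic-programming DTW distance combined with a k-nearest-neighbour vote. The toy signals and the fast DTW implementation used by the authors are not reproduced here; this only illustrates the classification scheme.

```python
# Minimal sketch: 1-NN / kNN classification with a DTW distance (illustrative
# toy signals, not the paper's experimental data or optimized DTW code).
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic-programming DTW on 1-D signals."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def knn_dtw_predict(train_signals, train_labels, query, k=1):
    """Label a query signal by majority vote among its k DTW-nearest neighbours."""
    dists = [dtw_distance(query, s) for s in train_signals]
    nearest = np.argsort(dists)[:k]
    votes = np.bincount([train_labels[i] for i in nearest])
    return int(np.argmax(votes))

# Toy usage: two 'stable' and two 'chatter'-like acceleration signals.
t = np.linspace(0, 1, 200)
train = [np.sin(2 * np.pi * 5 * t), np.sin(2 * np.pi * 5 * t + 0.3),
         np.sin(2 * np.pi * 40 * t), np.sin(2 * np.pi * 40 * t + 0.3)]
labels = [0, 0, 1, 1]
print(knn_dtw_predict(train, labels, np.sin(2 * np.pi * 38 * t)))  # -> 1
```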

  

A prediction perspective on the Wiener-Hopf equations for discrete time series

Jul 11, 2021
Suhasini Subba Rao, Junho Yang

The Wiener-Hopf equations are a Toeplitz system of linear equations that have several applications in time series. These include the update and prediction step of the stationary Kalman filter equations and the prediction of bivariate time series. The Wiener-Hopf technique is the classical tool for solving the equations, and is based on a comparison of coefficients in a Fourier series expansion. The purpose of this note is to revisit the (discrete) Wiener-Hopf equations and obtain an alternative expression for the solution that is more in the spirit of time series analysis. Specifically, we propose a solution to the Wiener-Hopf equations that combines linear prediction with deconvolution. The solution of the Wiener-Hopf equations requires one to obtain the spectral factorization of the underlying spectral density function. For general spectral density functions this is infeasible. Therefore, it is usually assumed that the spectral density is rational, which allows one to obtain a computationally tractable solution. This leads to an approximation error when the underlying spectral density is not a rational function. We use the proposed solution together with Baxter's inequality to derive an error bound for the rational spectral density approximation.
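
For the finite-order case, the Wiener-Hopf (Yule-Walker-type) system is simply a Toeplitz linear system in the autocovariances. The sketch below, using an AR(1) autocovariance purely as an example, solves that system for the one-step predictor coefficients; it illustrates the Toeplitz structure only, not the deconvolution-based solution proposed in the note.

```python
# Minimal sketch, assuming an AR(1) autocovariance as the worked example:
# the best linear one-step predictor solves T(gamma) phi = gamma_{1..p},
# where T(gamma) is the Toeplitz autocovariance matrix.
import numpy as np
from scipy.linalg import solve_toeplitz

p, a, sigma2 = 5, 0.7, 1.0
# Autocovariances gamma(0..p) of an AR(1) process X_t = a X_{t-1} + e_t.
gamma = sigma2 / (1 - a**2) * a ** np.arange(p + 1)

# Wiener-Hopf / Yule-Walker system: Toeplitz(gamma[0..p-1]) phi = gamma[1..p].
phi = solve_toeplitz(gamma[:p], gamma[1:])
print(phi)  # ~ [0.7, 0, 0, 0, 0]: the predictor recovers the AR(1) coefficient
```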

  

Enhancing Cancer Prediction in Challenging Screen-Detected Incident Lung Nodules Using Time-Series Deep Learning

Mar 30, 2022
Shahab Aslani, Pavan Alluri, Eyjolfur Gudmundsson, Edward Chandy, John McCabe, Anand Devaraj, Carolyn Horst, Sam M Janes, Rahul Chakkara, Arjun Nair, Daniel C Alexander, SUMMIT consortium, Joseph Jacob

Lung cancer is the leading cause of cancer-related mortality worldwide. Lung cancer screening (LCS) using annual low-dose computed tomography (CT) scanning has been proven to significantly reduce lung cancer mortality by detecting cancerous lung nodules at an earlier stage. Risk stratification of malignancy in lung nodules can be improved using machine/deep learning algorithms. However, most existing algorithms: a) have primarily assessed single time-point CT data alone, thereby failing to utilize the inherent advantages contained within longitudinal imaging datasets; b) have not integrated into computer models pertinent clinical data that might inform risk prediction; c) have not assessed algorithm performance on the spectrum of nodules that are most challenging for radiologists to interpret and where assistance from analytic tools would be most beneficial. Here we show the performance of our time-series deep learning model (DeepCAD-NLM-L), which integrates multi-modal information across three longitudinal data domains: nodule-specific, lung-specific, and clinical demographic data. We compared our time-series deep learning model to a) radiologist performance on CTs from the National Lung Screening Trial enriched with the most challenging nodules for diagnosis; b) a nodule management algorithm from a North London LCS study (SUMMIT). Our model demonstrated comparable and complementary performance to radiologists when interpreting challenging lung nodules and showed improved performance (AUC = 88%) against models utilizing single time-point data only. The results emphasise the importance of time-series, multi-modal analysis when interpreting malignancy risk in LCS.
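
A hypothetical sketch of the kind of multi-modal, longitudinal fusion described above: per-time-point imaging features are aggregated by a recurrent layer and concatenated with clinical variables before a malignancy head. The layer sizes and fusion scheme are illustrative assumptions and are not the DeepCAD-NLM-L architecture.

```python
# Minimal sketch, assuming precomputed per-scan imaging features and a simple
# GRU + concatenation fusion; not the authors' model.
import torch
import torch.nn as nn

class LongitudinalNoduleClassifier(nn.Module):
    def __init__(self, img_feat_dim=128, clin_dim=8, hidden=64):
        super().__init__()
        # Aggregate per-time-point imaging features (e.g. from a CNN backbone).
        self.temporal = nn.GRU(img_feat_dim, hidden, batch_first=True)
        self.clinical = nn.Sequential(nn.Linear(clin_dim, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, 1)  # malignancy logit

    def forward(self, img_seq, clinical):
        # img_seq: (batch, timepoints, img_feat_dim); clinical: (batch, clin_dim)
        _, h = self.temporal(img_seq)
        fused = torch.cat([h[-1], self.clinical(clinical)], dim=1)
        return self.head(fused)

model = LongitudinalNoduleClassifier()
logit = model(torch.randn(2, 3, 128), torch.randn(2, 8))  # 3 annual scans
print(logit.shape)  # torch.Size([2, 1])
```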

  

Financial series prediction using Attention LSTM

Feb 28, 2019
Sangyeon Kim, Myungjoo Kang

Financial time series prediction, especially with machine learning techniques, is an extensive field of study. In recent times, deep learning methods, especially for time series analysis, have performed outstandingly on various industrial problems, with better prediction than classical machine learning methods. Moreover, many researchers have used deep learning methods to predict financial time series with various models in recent years. In this paper, we compare various deep learning models, such as the multilayer perceptron (MLP), one-dimensional convolutional neural networks (1D CNN), stacked long short-term memory (stacked LSTM), attention networks, and weighted attention networks for financial time series prediction. In particular, attention LSTM is used not only for prediction but also for visualizing intermediate outputs to analyze the reasons behind a prediction; we therefore show an example of understanding the model's prediction intuitively through attention vectors. In addition, we focus on time and factors, which leads to an easy understanding of why certain trends are predicted for a given time series table. We also modify the loss functions of the attention models with weighted categorical cross entropy; our proposed model achieves a 0.76 hit ratio, which is superior to those of other methods for predicting the trends of the KOSPI 200.
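
The attention-over-time idea and the weighted loss can be sketched as follows. The attention form, dimensions, and class weights are illustrative assumptions rather than the authors' exact configuration; the attention weights returned by the model are what would be inspected to explain a prediction.

```python
# Minimal sketch, assuming a simple additive attention over LSTM outputs and
# illustrative class weights; not the paper's exact model.
import torch
import torch.nn as nn

class AttentionLSTM(nn.Module):
    def __init__(self, n_features, hidden=32, n_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)      # one attention score per time step
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, x):                      # x: (batch, time, n_features)
        h, _ = self.lstm(x)                    # (batch, time, hidden)
        attn = torch.softmax(self.score(h), dim=1)   # attention over time steps
        context = (attn * h).sum(dim=1)        # weighted sum of hidden states
        return self.out(context), attn.squeeze(-1)

model = AttentionLSTM(n_features=10)
# Weighted categorical cross entropy: illustrative per-class weights.
loss_fn = nn.CrossEntropyLoss(weight=torch.tensor([0.5, 1.0, 1.0]))
logits, attn = model(torch.randn(4, 20, 10))
loss = loss_fn(logits, torch.randint(0, 3, (4,)))
# `attn` can be plotted to see which time steps drove each prediction.
```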

  

A Comparative Study of Detecting Anomalies in Time Series Data Using LSTM and TCN Models

Dec 17, 2021
Saroj Gopali, Faranak Abri, Sima Siami-Namini, Akbar Siami Namin

Several data-driven approaches enable us to model time series data, including traditional regression-based modeling approaches (e.g., ARIMA). More recently, deep learning techniques have been introduced and explored in the context of time series analysis and prediction. A major research question is how well these many variations of deep learning techniques perform in predicting time series data. This paper compares two prominent deep learning modeling techniques: the Recurrent Neural Network (RNN)-based Long Short-Term Memory (LSTM) and the Convolutional Neural Network (CNN)-based Temporal Convolutional Network (TCN), and reports their performance and training time. According to our experimental results, both modeling techniques perform comparably, with TCN-based models slightly outperforming LSTM-based models. Moreover, the CNN-based TCN model builds a stable model faster than the RNN-based LSTM models.

* 15 pages, 3 figures, IEEE BigData 2021 
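
The two model families compared in the paper can be sketched side by side: a recurrent LSTM forecaster and a causal, dilated 1-D convolutional (TCN-style) forecaster. Layer sizes and depths below are illustrative, not the configurations benchmarked in the paper.

```python
# Minimal sketch of the two families compared: LSTM vs. causal dilated Conv1d.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConvBlock(nn.Module):
    """Dilated 1-D convolution with left-padding only, so outputs stay causal."""
    def __init__(self, in_ch, out_ch, kernel=3, dilation=1):
        super().__init__()
        self.pad = (kernel - 1) * dilation
        self.conv = nn.Conv1d(in_ch, out_ch, kernel, dilation=dilation)

    def forward(self, x):                       # x: (batch, channels, time)
        return torch.relu(self.conv(F.pad(x, (self.pad, 0))))

class TCNForecaster(nn.Module):
    def __init__(self, channels=32, levels=3):
        super().__init__()
        blocks = [CausalConvBlock(1 if i == 0 else channels, channels,
                                  dilation=2 ** i) for i in range(levels)]
        self.tcn = nn.Sequential(*blocks)
        self.out = nn.Linear(channels, 1)

    def forward(self, x):                       # x: (batch, time, 1)
        h = self.tcn(x.transpose(1, 2))         # -> (batch, channels, time)
        return self.out(h[:, :, -1])            # predict the next value

class LSTMForecaster(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(1, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):                       # x: (batch, time, 1)
        h, _ = self.lstm(x)
        return self.out(h[:, -1])

x = torch.randn(8, 50, 1)                       # 8 windows of 50 time steps
print(TCNForecaster()(x).shape, LSTMForecaster()(x).shape)  # both (8, 1)
```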
  

Multi-Decoder RNN Autoencoder Based on Variational Bayes Method

Apr 29, 2020
Daisuke Kaji, Kazuho Watanabe, Masahiro Kobayashi

Clustering algorithms have wide applications and play an important role in data analysis, including time series analysis. However, in time series analysis, most algorithms use signal shape features or the initial value of a hidden variable of a neural network; little has been discussed about methods based on a generative model of the time series. In this paper, we propose a new clustering algorithm focusing on the generative process of the signal, using a recurrent neural network and the variational Bayes method. Our experiments show that the proposed algorithm is not only robust to variations in phase shift, amplitude, and signal length, but also provides flexible clustering based on the properties of the variational Bayes method.

* 8 pages, 11 figures, accepted for publication in IJCNN 
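
A simplified, hypothetical sketch of the multi-decoder idea: a shared RNN encoder maps each series to a Gaussian latent, several RNN decoders act as cluster-specific generative models, and a series is assigned to the decoder that reconstructs it best. This omits the paper's full variational Bayes treatment and serves only to illustrate the architecture.

```python
# Minimal sketch, assuming a VAE-style Gaussian latent and hard assignment by
# reconstruction error; a simplification, not the paper's method.
import torch
import torch.nn as nn

class MultiDecoderRNNAE(nn.Module):
    def __init__(self, n_clusters=3, hidden=32, latent=8):
        super().__init__()
        self.encoder = nn.GRU(1, hidden, batch_first=True)
        self.mu = nn.Linear(hidden, latent)
        self.logvar = nn.Linear(hidden, latent)
        self.init = nn.ModuleList([nn.Linear(latent, hidden) for _ in range(n_clusters)])
        self.decoders = nn.ModuleList([nn.GRU(1, hidden, batch_first=True)
                                       for _ in range(n_clusters)])
        self.readout = nn.Linear(hidden, 1)

    def forward(self, x):                        # x: (batch, time, 1)
        _, h = self.encoder(x)
        mu, logvar = self.mu(h[-1]), self.logvar(h[-1])
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation
        recons = []
        for init, dec in zip(self.init, self.decoders):
            h0 = torch.tanh(init(z)).unsqueeze(0)                # latent sets decoder state
            out, _ = dec(torch.zeros_like(x), h0)
            recons.append(self.readout(out))
        return torch.stack(recons, dim=1), mu, logvar            # (batch, K, time, 1)

model = MultiDecoderRNNAE()
x = torch.randn(4, 60, 1)
recons, mu, logvar = model(x)
err = ((recons - x.unsqueeze(1)) ** 2).mean(dim=(2, 3))          # (batch, K)
print(err.argmin(dim=1))                                         # hard cluster assignment
```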
  

Generalised Structural CNNs (SCNNs) for time series data with arbitrary graph topology

May 30, 2018
Thomas Teh, Chaiyawan Auepanwiriyakul, John Alexander Harston, A. Aldo Faisal

Deep learning methods, specifically convolutional neural networks (CNNs), have seen a lot of success in the domain of image-based data, where the data offers a clearly structured topology in the regular lattice of pixels. This 4-neighbourhood topological simplicity makes the application of convolutional masks straightforward for time series data such as video, but many high-dimensional time series are not organised in regular lattices; instead, their values may have adjacency relationships with non-trivial topologies, such as small-world networks or trees. In our application case, human kinematics, it is currently unclear how to generalise convolutional kernels in a principled manner. Therefore, we define and implement here a framework for general graph-structured CNNs for time series analysis. Our algorithm automatically builds convolutional layers using the specified adjacency matrix of the data dimensions and convolutional masks that scale with the hop distance. In the limit of a lattice topology, our method produces the well-known image convolutional masks. We first test our method on synthetic data from arbitrarily connected graphs and on human hand motion-capture data, where the hand is represented by a tree capturing the mechanical dependencies of the joints. We are able to demonstrate, amongst other things, that including the graph structure of the data dimensions improves model prediction significantly compared against a benchmark CNN model with only time-convolution layers.
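
The core mechanism, building graph-local convolutional masks from a specified adjacency matrix and a hop distance, can be illustrated with a small sketch. The mask construction below is an illustrative simplification, not the authors' layer-building code.

```python
# Minimal sketch: derive which data dimensions lie within a given hop distance
# of each node, then use that mask to zero out dense weights so the layer acts
# as a graph-local "convolution".
import numpy as np

def hop_mask(adj, max_hops):
    """Boolean mask M[i, j] = True iff node j is within max_hops of node i."""
    n = adj.shape[0]
    reach = np.eye(n, dtype=int)
    power = np.eye(n, dtype=int)
    for _ in range(max_hops):
        power = power @ adj
        reach = reach + power
    return reach > 0

# A tiny tree: 0-1, 0-2, 1-3, 1-4 (e.g. joints in a kinematic chain).
adj = np.zeros((5, 5), dtype=int)
for i, j in [(0, 1), (0, 2), (1, 3), (1, 4)]:
    adj[i, j] = adj[j, i] = 1

mask = hop_mask(adj, max_hops=1)
weights = np.random.randn(5, 5)
local_weights = weights * mask       # connections beyond 1 hop are zeroed
print(mask.astype(int))
```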

  

Methods for Mapping Forest Disturbance and Degradation from Optical Earth Observation Data: a Review

Mar 22, 2017
Manuela Hirschmugl, Heinz Gallaun, Matthias Dees, Pawan Datta, Janik Deutscher, Nikos Koutsias, Mathias Schardt

Purpose of review: This paper presents a review of the current state of the art in remote sensing based monitoring of forest disturbances and forest degradation from optical Earth Observation data. Part one comprises an overview of currently available optical remote sensing sensors that can be used for forest disturbance and degradation mapping. Part two reviews the two main categories of existing approaches: classical image-to-image change detection and time series analysis. Recent findings: With the launch of the Sentinel-2a satellite and available Landsat imagery, time series analysis has become the most promising but also most demanding category of degradation mapping approaches. Four time series classification methods are distinguished; the methods are explained and their benefits and drawbacks are discussed. A separate chapter presents a number of recent forest degradation mapping studies for two different ecosystems: temperate forests with a geographical focus on Europe and tropical forests with a geographical focus on Africa. Summary: The review revealed that a wide variety of methods for the detection of forest degradation is already available. Today, the main challenge is to transfer these approaches to high-resolution time series data from multiple sensors. Future research should also focus on the classification of disturbance types and the development of robust, up-scalable methods to enable near-real-time disturbance mapping in support of operational reactive measures.

* Current Forestry Reports 2017 
* This is the Authors' accepted version only! The final version of this paper can be located at Springer.com as part of the Current Forestry Reports (2017) 3: 32. doi:10.1007/s40725-017-0047-2 
  

Construe: a software solution for the explanation-based interpretation of time series

Mar 17, 2020
Tomas Teijeiro, Paulo Felix

This paper presents a software implementation of a general framework for time series interpretation based on abductive reasoning. The software provides a data model and a set of algorithms to make inference to the best explanation of a time series, resulting in a description in multiple abstraction levels of the processes underlying the time series. As a proof of concept, a comprehensive knowledge base for the electrocardiogram (ECG) domain is provided, so it can be used directly as a tool for ECG analysis. This tool has been successfully validated in several noteworthy problems, such as heartbeat classification or atrial fibrillation detection.

* Original Software Publication. 10 pages, 4 figures 
  