"Time Series Analysis": models, code, and papers

Neural Time Series Analysis with Fourier Transform: A Survey

Feb 04, 2023
Kun Yi, Qi Zhang, Shoujin Wang, Hui He, Guodong Long, Zhendong Niu

Figures 1-4 for Neural Time Series Analysis with Fourier Transform: A Survey

Recently, the Fourier transform has been widely introduced into deep neural networks to advance the state of the art in both the accuracy and efficiency of time series analysis. Its advantages for time series analysis, such as computational efficiency and a global view of the sequence, have been rapidly explored and exploited, giving rise to a promising deep learning paradigm. However, although this emerging area has attracted increasing attention and research in it is flourishing, a systematic review of the existing studies is still lacking. To this end, this paper provides a comprehensive review of studies on neural time series analysis with the Fourier transform. We systematically investigate and summarize the latest research progress, and we propose a novel taxonomy that categorizes existing methods from four perspectives: characteristics, usage paradigms, network design, and applications. We also share new research directions in this vibrant area.
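The efficiency and global view mentioned above come directly from the transform itself: each frequency bin of an FFT summarizes the entire series in O(n log n) time. A minimal NumPy sketch of this frequency-domain view (the function name and parameters are illustrative, not from the survey):

```python
import numpy as np

def dominant_frequencies(x, k=3, fs=1.0):
    """Return the k strongest non-DC frequencies of a real-valued series.

    Each rFFT bin aggregates information from the *whole* series (a
    global view), computed in O(n log n) (efficiency) -- the two
    properties Fourier-based neural models exploit.
    """
    spectrum = np.fft.rfft(x - x.mean())          # remove DC, then transform
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)   # bin index -> frequency (Hz)
    top = np.argsort(np.abs(spectrum))[::-1][:k]  # strongest bins first
    return freqs[top]

# A series with 5 Hz and 12 Hz components, sampled at 100 Hz:
t = np.arange(0, 2, 0.01)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)
print(dominant_frequencies(x, k=2, fs=100.0))  # -> [ 5. 12.]
```

Neural approaches typically feed such spectral representations into learned layers rather than reading off peaks directly, but the underlying transform is the same.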

Mobile Mapping Mesh Change Detection and Update

Mar 13, 2023
Teng Wu, Bruno Vallet, Cédric Demonceaux

Figures 1-4 for Mobile Mapping Mesh Change Detection and Update

Mobile mapping, in particular Mobile Lidar Scanning (MLS), is increasingly widespread for monitoring and mapping urban scenes at city scale with unprecedented resolution and accuracy. The resulting point cloud sampling of the scene geometry can be meshed to create a continuous representation for applications such as visualization, simulation, and navigation. Because of the highly dynamic nature of urban scenes, long-term mapping should rely on frequent map updates. A trivial solution is to simply replace old data with newer data each time a new acquisition is made. However, this has two drawbacks: 1) the old data may be of higher quality (resolution, precision) than the new, and 2) scene coverage may differ between acquisitions, including varying occlusions. In this paper, we propose a fully automatic pipeline that addresses both issues by formulating them as the problem of merging meshes of different quality, coverage, and acquisition time. Our method combines distance- and visibility-based change detection, a time series analysis to assess the sustainability of changes, mesh mosaicking based on a global Boolean optimization, and finally stitching of the resulting mesh piece boundaries with triangle strips. Our method is demonstrated on the Robotcar and Stereopolis datasets.

* 6 pages without references 
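The distance half of a distance- and visibility-based change test can be sketched as a nearest-neighbour comparison between two point clouds (a pure-NumPy illustration; the paper's method also uses visibility, which requires the sensor trajectory, and the threshold here is arbitrary):

```python
import numpy as np

def distance_change_mask(old_pts, new_pts, threshold=0.1):
    """Flag old points whose nearest neighbour in the new acquisition
    is farther than `threshold` metres. Brute-force nearest neighbour
    is fine at toy scale; real pipelines use spatial indices."""
    diff = old_pts[:, None, :] - new_pts[None, :, :]
    nn_dist = np.sqrt((diff ** 2).sum(axis=-1)).min(axis=1)
    return nn_dist > threshold

# A 10 x 10 ground grid where one point has moved up by 1 m:
xx, yy = np.meshgrid(np.arange(10.0), np.arange(10.0))
old = np.column_stack([xx.ravel(), yy.ravel(), np.zeros(100)])
new = old.copy()
new[0, 2] += 1.0
changed = distance_change_mask(old, new, threshold=0.5)
print(changed.sum())  # -> 1 (only the moved point is flagged)
```

A per-point change flag like this, tracked over successive acquisitions, is what the paper's time series analysis step consumes to decide whether a change is sustained or transient.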

Your time series is worth a binary image: machine vision assisted deep framework for time series forecasting

Feb 28, 2023
Luoxiao Yang, Xinqi Fan, Zijun Zhang

Figures 1-4 for Your time series is worth a binary image: machine vision assisted deep framework for time series forecasting

Time series forecasting (TSF) has long been a challenging research area, and various models have been developed to address it. However, almost all of these models are trained on numerical time series data, which neural systems do not process as effectively as visual information. To address this challenge, this paper proposes a novel machine vision assisted deep time series analysis (MV-DTSA) framework. The framework analyzes time series data in a novel binary machine vision time series metric space, comprising a mapping and an inverse mapping between the numerical time series space and the binary machine vision space, together with a deep machine vision model designed to address the TSF task in the binary space. A comprehensive computational analysis demonstrates that the proposed MV-DTSA framework outperforms state-of-the-art deep TSF models without requiring sophisticated data decomposition or model customization. The code for our framework is available at https://github.com/IkeYang/machine-vision-assisted-deep-time-series-analysis-MV-DTSA-.
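One plausible form of such a mapping and inverse mapping is to quantize values onto pixel rows, one column per time step (an illustrative construction only; the paper's exact metric space may differ):

```python
import numpy as np

def series_to_binary_image(x, height=32):
    """One 'on' pixel per column, at the row given by the
    min-max-quantized value (row 0 is the top of the image)."""
    lo, hi = x.min(), x.max()
    rows = np.round((x - lo) / (hi - lo) * (height - 1)).astype(int)
    img = np.zeros((height, len(x)), dtype=np.uint8)
    img[height - 1 - rows, np.arange(len(x))] = 1
    return img, (lo, hi)

def binary_image_to_series(img, lo, hi):
    """Inverse mapping: recover the quantized series from pixel rows."""
    height = img.shape[0]
    rows = height - 1 - img.argmax(axis=0)
    return lo + rows / (height - 1) * (hi - lo)

x = np.sin(np.linspace(0, 4 * np.pi, 64))
img, (lo, hi) = series_to_binary_image(x, height=32)
x_hat = binary_image_to_series(img, lo, hi)
# The round trip loses at most half a quantization step:
print(np.abs(x - x_hat).max() <= (hi - lo) / (2 * 31))  # -> True
```

A vision model then operates on `img` (and its future extension), and the inverse mapping converts its output back into a numerical forecast.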

Unsupervised Deep Learning for IoT Time Series

Feb 21, 2023
Ya Liu, Yingjie Zhou, Kai Yang, Xin Wang

Figures 1-4 for Unsupervised Deep Learning for IoT Time Series

IoT time series analysis has found numerous applications in a wide variety of areas, ranging from health informatics to network security. Nevertheless, the complex spatio-temporal dynamics and high dimensionality of IoT time series make the analysis increasingly challenging. In recent years, the powerful feature extraction and representation learning capabilities of deep learning (DL) have provided an effective means for IoT time series analysis. However, few existing surveys on time series have systematically discussed unsupervised DL-based methods. To fill this void, we investigate unsupervised deep learning for IoT time series, i.e., unsupervised anomaly detection and clustering, under a unified framework. We also discuss application scenarios, public datasets, existing challenges, and future research directions in this area.

* IEEE Internet of Things Journal, 2023  
* 22 pages, 8 figures, has been published by IEEE Internet of Things Journal 
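The reconstruction-error principle behind many unsupervised detectors in this family can be illustrated with a linear (PCA) stand-in for a deep autoencoder: learn a low-dimensional representation of normal data, then score each sample by how poorly it is reconstructed (the function, toy data, and component count are all illustrative):

```python
import numpy as np

def pca_anomaly_scores(X, n_components=2):
    """Score each row by its reconstruction error after projecting onto
    the top principal components: normal points are reconstructed well,
    anomalies are not."""
    mu = X.mean(axis=0)
    Xc = X - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_components].T                 # top principal directions
    recon = Xc @ V @ V.T + mu               # project and map back
    return np.linalg.norm(X - recon, axis=1)

# 100 'normal' windows near a 2-D subspace of an 8-D space, 1 outlier:
rng = np.random.default_rng(0)
normal = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 8))
outlier = rng.normal(scale=5.0, size=(1, 8))
X = np.vstack([normal, outlier])
scores = pca_anomaly_scores(X)
print(scores.argmax())  # index of the injected outlier (row 100)
```

Deep autoencoders replace the linear projection with learned nonlinear encoders and decoders, but the anomaly score is still the reconstruction error.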

Robust Dominant Periodicity Detection for Time Series with Missing Data

Mar 06, 2023
Qingsong Wen, Linxiao Yang, Liang Sun

Figures 1-4 for Robust Dominant Periodicity Detection for Time Series with Missing Data

Periodicity detection is an important task in time series analysis, but it remains challenging due to the diverse characteristics of time series data, such as abrupt trend changes, outliers, noise, and especially blocks of missing data. In this paper, we propose a robust and effective periodicity detection algorithm for time series with block missing data. We first design a robust trend filter to remove the interference of complicated trend patterns under missing data. Then, we propose a robust autocorrelation function (ACF) that can handle missing values and outliers effectively. We rigorously prove that the proposed robust ACF still works well when the length of the missing block is less than $1/3$ of the period length. Last, by combining time-frequency information, our algorithm estimates the period length accurately. Experimental results demonstrate that our algorithm outperforms existing periodicity detection algorithms on real-world time series datasets.

* IEEE ICASSP 2023  
* Accepted by 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2023) 
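The missing-data half of this idea can be sketched by computing the ACF only over lag pairs where both samples are observed, then taking the highest non-zero-lag peak (a simplified illustration; the paper's robust ACF additionally handles outliers and is combined with time-frequency information):

```python
import numpy as np

def acf_with_missing(x, max_lag):
    """Sample ACF that ignores NaNs: each lag is averaged only over
    pairs where both samples are observed (pairwise deletion)."""
    xc = np.asarray(x, dtype=float) - np.nanmean(x)
    var = np.nanmean(xc * xc)
    n = len(xc)
    acf = np.empty(max_lag + 1)
    for lag in range(max_lag + 1):
        acf[lag] = np.nanmean(xc[:n - lag] * xc[lag:]) / var
    return acf

def detect_period(x, max_lag):
    """Dominant period = lag of the highest ACF value, excluding lag 0."""
    return 1 + int(np.argmax(acf_with_missing(x, max_lag)[1:]))

# A period-24 series with a missing block shorter than 1/3 of the period:
t = np.arange(480)
x = np.sin(2 * np.pi * t / 24)
x[100:107] = np.nan                  # 7 missing samples (< 24/3 = 8)
print(detect_period(x, max_lag=40))  # -> 24
```

The $1/3$-of-period bound in the paper says roughly when a pairwise scheme like this stops being distorted by the missing block; longer gaps require the full robust machinery.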

Discovering Predictable Latent Factors for Time Series Forecasting

Mar 18, 2023
Jingyi Hou, Zhen Dong, Jiayu Zhou, Zhijie Liu

Figures 1-4 for Discovering Predictable Latent Factors for Time Series Forecasting

Modern time series forecasting methods, such as the Transformer and its variants, have shown strong ability in sequential data modeling. To achieve high performance, they usually rely on redundant or unexplainable structures to model complex relations between variables and tune their parameters on large-scale data. Many real-world data mining tasks, however, lack sufficient variables for relation reasoning, so these methods may not handle such forecasting problems properly. With insufficient data, time series appear to be affected by many exogenous variables, and the modeling becomes unstable and unpredictable. To tackle this critical issue, we develop a novel algorithmic framework for inferring the intrinsic latent factors implied by the observable time series. The inferred factors are used to form multiple independent and predictable signal components that enable not only sparse relation reasoning for long-term efficiency but also reconstruction of future temporal data for accurate prediction. To achieve this, we introduce three characteristics, i.e., predictability, sufficiency, and identifiability, and model them via deep latent dynamics models to infer the predictable signal components. Empirical results on multiple real datasets show the effectiveness of our method for different kinds of time series forecasting, and statistical analysis validates the predictability of the learned latent factors.

A Comprehensive Capability Analysis of GPT-3 and GPT-3.5 Series Models

Mar 18, 2023
Junjie Ye, Xuanting Chen, Nuo Xu, Can Zu, Zekai Shao, Shichun Liu, Yuhan Cui, Zeyang Zhou, Chao Gong, Yang Shen, Jie Zhou, Siming Chen, Tao Gui, Qi Zhang, Xuanjing Huang

Figures 1-4 for A Comprehensive Capability Analysis of GPT-3 and GPT-3.5 Series Models

GPT series models, such as GPT-3, CodeX, InstructGPT, ChatGPT, and so on, have gained considerable attention due to their exceptional natural language processing capabilities. However, despite the abundance of research on the difference in capabilities between GPT series models and fine-tuned models, there has been limited attention given to the evolution of GPT series models' capabilities over time. To conduct a comprehensive analysis of the capabilities of GPT series models, we select six representative models, comprising two GPT-3 series models (i.e., davinci and text-davinci-001) and four GPT-3.5 series models (i.e., code-davinci-002, text-davinci-002, text-davinci-003, and gpt-3.5-turbo). We evaluate their performance on nine natural language understanding (NLU) tasks using 21 datasets. In particular, we compare the performance and robustness of different models for each task under zero-shot and few-shot scenarios. Our extensive experiments reveal that the overall ability of GPT series models on NLU tasks does not increase gradually as the models evolve, especially with the introduction of the RLHF training strategy. While this strategy enhances the models' ability to generate human-like responses, it also compromises their ability to solve some tasks. Furthermore, our findings indicate that there is still room for improvement in areas such as model robustness.

Optimal Sampling Designs for Multi-dimensional Streaming Time Series with Application to Power Grid Sensor Data

Mar 14, 2023
Rui Xie, Shuyang Bai, Ping Ma

Figures 1-4 for Optimal Sampling Designs for Multi-dimensional Streaming Time Series with Application to Power Grid Sensor Data

The Internet of Things (IoT) generates massive, high-speed, temporally correlated streaming data and is often connected with online inference tasks under computational or energy constraints. Online analysis of such streaming time series data often faces a trade-off between statistical efficiency and computational cost. One important approach to balancing this trade-off is sampling, where only a small portion of the data is selected for model fitting and updating. Motivated by the demands of dynamic relationship analysis in IoT systems, we study data-dependent sample selection and online inference for multi-dimensional streaming time series, aiming to provide low-cost real-time analysis of high-speed power grid electricity consumption data. Inspired by the D-optimality criterion in the design of experiments, we propose a class of online data reduction methods that achieve an optimal sampling criterion and improve the computational efficiency of online analysis. We show that the optimal solution amounts to a mixture of Bernoulli sampling and leverage score sampling. The leverage score sampling involves auxiliary estimations that have a computational advantage over recursive least squares updates; their theoretical properties are also discussed. When applied to European power grid consumption data, the proposed leverage-score-based sampling methods outperform the benchmark sampling method in online estimation and prediction. The general applicability of the sampling-assisted online estimation method is assessed via simulation studies.

* Accepted by The Annals of Applied Statistics 
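Such a mixture strategy can be sketched as per-row inclusion probabilities that blend uniform Bernoulli sampling with leverage-score-proportional sampling (the mixture weight, the capping rule, and the use of a batch QR rather than the paper's online auxiliary estimators are all illustrative choices):

```python
import numpy as np

def mixture_sampling_probs(X, budget, alpha=0.1):
    """Row inclusion probabilities blending uniform Bernoulli sampling
    (weight alpha) with leverage-score sampling (weight 1 - alpha),
    scaled to an expected sample size of `budget` and capped at 1."""
    n = X.shape[0]
    Q, _ = np.linalg.qr(X)           # thin QR of the design matrix
    lev = np.sum(Q * Q, axis=1)      # leverage scores; they sum to rank(X)
    probs = budget * (alpha / n + (1 - alpha) * lev / lev.sum())
    return np.minimum(probs, 1.0)

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))       # snapshot of a streaming design matrix
p = mixture_sampling_probs(X, budget=100)
keep = rng.random(1000) < p          # independent Bernoulli inclusion draws
print(round(p.sum()))                # -> 100 (the expected sample size)
```

The uniform part guards against missing low-leverage structure, while the leverage part concentrates the budget on statistically influential rows; an online method would update the leverage estimates incrementally instead of refactorizing.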

Singular spectrum analysis of time series data from low frequency radiometers, with an application to SITARA data

Feb 15, 2023
Jishnu N. Thekkeppattu, Cathryn M. Trott, Benjamin McKinley

Figures 1-4 for Singular spectrum analysis of time series data from low frequency radiometers, with an application to SITARA data

Understanding the temporal characteristics of data from low frequency radio telescopes is important for devising suitable calibration strategies, and applying time series analysis techniques to such data can reveal a wealth of information that aids calibration. In this paper, we investigate singular spectrum analysis (SSA) as an analysis tool for radio data and show its intimate connection with Fourier techniques. We develop the relevant mathematics starting with an idealised periodic dataset and proceeding to include various non-ideal behaviours. We propose a novel technique to obtain long-term gain changes in data, leveraging the periodicity arising from sky drift through the antenna beams. We also simulate several plausible scenarios and apply the techniques to a 30-day time series collected during June 2021 from SITARA, a short-spacing two-element interferometer for global 21-cm detection. Applying the techniques to real data, we find that the first reconstructed component, the trend, has a strong anti-correlation with the local temperature, suggesting temperature fluctuations as the most likely origin of the observed variations. We also study the limitations of the calibration in the presence of diurnal gain variations and find that such variations are the likely impediment to calibrating SITARA data with SSA.

* Accepted for publication in MNRAS
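Basic SSA, of which the trend extraction above is a refinement, embeds the series in a Hankel trajectory matrix, takes its SVD, and reconstructs selected components by anti-diagonal averaging (a textbook sketch, not the SITARA pipeline):

```python
import numpy as np

def ssa_reconstruct(x, window, components):
    """Basic singular spectrum analysis: embed the series in a Hankel
    trajectory matrix, keep the chosen SVD components, and reconstruct
    by averaging over anti-diagonals."""
    n, k = len(x), len(x) - window + 1
    H = np.column_stack([x[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hr = (U[:, components] * s[components]) @ Vt[components]
    recon, counts = np.zeros(n), np.zeros(n)
    for j in range(k):                      # anti-diagonal averaging
        recon[j:j + window] += Hr[:, j]
        counts[j:j + window] += 1
    return recon / counts

# Separate a slow trend from a fast oscillation:
t = np.linspace(0, 1, 200)
x = 2 * t + 0.3 * np.sin(2 * np.pi * 20 * t)
trend = ssa_reconstruct(x, window=40, components=[0])
full = ssa_reconstruct(x, window=40, components=list(range(4)))
print(np.allclose(full, x))  # -> True: the trajectory matrix has rank <= 4
```

The first reconstructed component is the slowly varying trend, which is exactly the quantity the paper correlates with local temperature; the connection to Fourier techniques arises because, for periodic data, the singular vectors of the trajectory matrix approach sinusoids.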