Large Scale Time-Series Representation Learning via Simultaneous Low and High Frequency Feature Bootstrapping

Apr 24, 2022
Vandan Gorade, Azad Singh, Deepak Mishra

Figures 1–4 for Large Scale Time-Series Representation Learning via Simultaneous Low and High Frequency Feature Bootstrapping

Learning representations from unlabeled time series data is a challenging problem. Most existing self-supervised and unsupervised approaches in the time-series domain do not capture low- and high-frequency features at the same time. Further, some of these methods employ large-scale models such as transformers or rely on computationally expensive techniques such as contrastive learning. To tackle these problems, we propose a non-contrastive self-supervised learning approach that efficiently captures low- and high-frequency time-varying features in a cost-effective manner. Our method takes raw time series data as input and creates two different augmented views for the two branches of the model by randomly sampling augmentations from the same family. Following the terminology of BYOL, the two branches are called the online and target networks, which allows bootstrapping of the latent representation. In contrast to BYOL, where a backbone encoder is followed by multilayer perceptron (MLP) heads, the proposed model contains additional temporal convolutional network (TCN) heads. As the augmented views are passed through large-kernel convolution blocks of the encoder, the subsequent combination of MLP and TCN enables an effective representation of low- as well as high-frequency time-varying features due to their varying receptive fields. The two modules (MLP and TCN) act in a complementary manner. We train the online network so that each module learns to predict the outcome of the respective module of the target network branch. To demonstrate the robustness of our model, we performed extensive experiments and ablation studies on five real-world time-series datasets. Our method achieved state-of-the-art performance on all five datasets.
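The bootstrapping mechanism the abstract borrows from BYOL has two ingredients: the target network's weights track an exponential moving average (EMA) of the online network's weights, and training minimizes the distance between the (normalized) online prediction and the target projection. The following is a minimal NumPy sketch of just these two pieces, not the paper's implementation; the function names, the momentum value `tau`, and the flat parameter lists are illustrative assumptions.

```python
import numpy as np

def ema_update(target_params, online_params, tau=0.99):
    """BYOL-style target update: each target weight becomes an exponential
    moving average of the corresponding online weight (no gradients flow
    into the target branch)."""
    return [tau * t + (1.0 - tau) * o
            for t, o in zip(target_params, online_params)]

def byol_loss(online_pred, target_proj):
    """Negative-cosine-style regression loss between the L2-normalized
    online prediction and the target projection, as used in BYOL.
    Inputs are (batch, dim) arrays; returns a scalar in [0, 4]."""
    p = online_pred / np.linalg.norm(online_pred, axis=-1, keepdims=True)
    z = target_proj / np.linalg.norm(target_proj, axis=-1, keepdims=True)
    return 2.0 - 2.0 * (p * z).sum(axis=-1).mean()
```

In the proposed model this loss would be applied twice per branch pair (once for the MLP heads, once for the TCN heads) and symmetrized over the two augmented views, with `ema_update` applied to the target encoder and heads after each optimizer step.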
