Sizhe Song

A Multi-Scale Decomposition MLP-Mixer for Time Series Analysis

Oct 18, 2023
Shuhan Zhong, Sizhe Song, Guanyao Li, Weipeng Zhuo, Yang Liu, S.-H. Gary Chan

Time series data, often characterized by unique composition and complex multi-scale temporal variations, requires special consideration of decomposition and multi-scale modeling in its analysis. Existing deep learning methods in this area are best suited to univariate time series only, and have not sufficiently accounted for sub-series-level modeling or decomposition completeness. To address this, we propose MSD-Mixer, a Multi-Scale Decomposition MLP-Mixer that learns to explicitly decompose the input time series into different components and represents the components in different layers. To handle multi-scale temporal patterns and inter-channel dependencies, we propose a novel temporal patching approach that models the time series as multi-scale sub-series, i.e., patches, and employ MLPs to mix intra- and inter-patch variations as well as channel-wise correlations. In addition, we propose a loss function that constrains both the magnitude and the autocorrelation of the decomposition residual to ensure decomposition completeness. Through extensive experiments on various real-world datasets covering five common time series analysis tasks (long- and short-term forecasting, imputation, anomaly detection, and classification), we demonstrate that MSD-Mixer consistently and significantly outperforms other state-of-the-art task-general and task-specific approaches.
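The abstract names two concrete mechanisms: multi-scale temporal patching and a residual loss that constrains both magnitude and autocorrelation. The PyTorch sketch below is one plausible reading of those two ideas, not the authors' implementation; the function names, the front-padding policy, and the `max_lag` and `lambda_acf` parameters are illustrative assumptions.

```python
# Illustrative sketch only; names and defaults are assumptions, not the paper's code.
import torch
import torch.nn.functional as F

def make_patches(x: torch.Tensor, patch_len: int) -> torch.Tensor:
    """Cut a series (batch, length, channels) into non-overlapping patches
    of size patch_len -> (batch, num_patches, patch_len, channels)."""
    _, t, _ = x.shape
    pad = (-t) % patch_len
    if pad:  # zero-pad the front so the length divides evenly (assumed policy)
        x = F.pad(x, (0, 0, pad, 0))
    return x.unfold(dimension=1, size=patch_len, step=patch_len).permute(0, 1, 3, 2)

def residual_loss(residual: torch.Tensor, max_lag: int = 10,
                  lambda_acf: float = 1.0) -> torch.Tensor:
    """Penalize the decomposition residual (batch, length, channels) so that it
    is both small in magnitude and close to white noise (near-zero
    autocorrelation at every lag up to max_lag)."""
    mag = residual.pow(2).mean()  # magnitude term
    r = residual - residual.mean(dim=1, keepdim=True)
    denom = r.pow(2).sum(dim=1) + 1e-8  # (batch, channels)
    acf = torch.stack([
        ((r[:, lag:, :] * r[:, :-lag, :]).sum(dim=1) / denom).abs().mean()
        for lag in range(1, max_lag + 1)
    ]).mean()  # mean absolute autocorrelation across lags
    return mag + lambda_acf * acf
```

In this reading, a mixer layer at each scale would apply MLPs along the patch, patch-index, and channel dimensions of the tensor returned by `make_patches`, with different patch lengths per layer capturing different temporal scales.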

Semi-Supervised Few-Shot Atomic Action Recognition

Nov 17, 2020
Xiaoyuan Ni, Sizhe Song, Yu-Wing Tai, Chi-Keung Tang

Although excellent progress has been made, performance on action recognition still relies heavily on specific datasets, which are difficult to extend to new action classes because labeling is labor-intensive. Moreover, the high diversity in spatio-temporal appearance requires robust and representative action feature aggregation and attention. To address these issues, we focus on atomic actions and propose a novel model for semi-supervised few-shot atomic action recognition. Our model features unsupervised and contrastive video embedding, loose action alignment, multi-head feature comparison, and attention-based aggregation, which together enable action recognition from only a few training examples by extracting more representative features and allowing flexibility in spatial and temporal alignment and in variations of the action. Experiments show that our model attains high accuracy on representative atomic action datasets, outperforming the respective state-of-the-art classification accuracies obtained under full supervision.
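Of the components listed, attention-based aggregation is the most self-contained to illustrate. The PyTorch sketch below shows one plausible reading: per-segment clip features are pooled with learned attention weights rather than uniform averaging. The class name, the scoring head, and the dimensions are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of attention-based feature aggregation; not the authors' code.
import torch
import torch.nn as nn

class AttentionAggregator(nn.Module):
    """Pool per-segment video features into one clip-level feature using
    learned attention weights instead of a uniform average."""

    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # scalar relevance score per segment

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        """feats: (batch, segments, dim) -> (batch, dim)."""
        weights = torch.softmax(self.score(feats), dim=1)  # (batch, segments, 1)
        return (weights * feats).sum(dim=1)
```

Weighting segments this way lets the model down-weight frames that are irrelevant to the atomic action, which is one natural interpretation of why aggregation plus attention helps under loose temporal alignment.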

* 7 pages, 3 figures, 2 tables 