Representation Learning of Multivariate Time Series using a Transformer Framework
Abstract
In this work, we propose for the first time a transformer-based framework for unsupervised representation learning of multivariate time series. Pre-trained models can potentially be used for downstream tasks such as regression, classification, forecasting, and missing value imputation. By evaluating our models on several benchmark datasets for multivariate time series regression and classification, we show that our modeling approach is the most successful unsupervised learning method for multivariate time series presented to date; it is also the first unsupervised approach shown to exceed the current state-of-the-art performance of supervised methods. It does so by a significant margin, even when the number of training samples is very limited, while offering computational efficiency. Finally, we demonstrate that unsupervised pre-training of our transformer models offers a substantial performance benefit over fully supervised learning, even without leveraging additional unlabeled data, i.e., by reusing the same data samples through the unsupervised objective.
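The abstract does not spell out the unsupervised objective. As one way to make the idea concrete, the sketch below assumes a masked-value (denoising) objective: a transformer encoder is trained to reconstruct randomly hidden entries of the input series, so the same samples used for supervised training are reused for pre-training. All names (`TSTransformerEncoder`, `masked_mse_pretraining_step`) and hyperparameters here are hypothetical illustrations, not taken from the paper.

```python
import torch
import torch.nn as nn


class TSTransformerEncoder(nn.Module):
    """Transformer encoder mapping (batch, seq_len, n_features) series to
    per-timestep representations, plus a linear head that reconstructs the
    input values. Hypothetical module; not the paper's exact architecture."""

    def __init__(self, n_features, d_model=64, n_heads=4, n_layers=3, max_len=512):
        super().__init__()
        self.input_proj = nn.Linear(n_features, d_model)
        # Learnable positional embeddings (one assumption; sinusoidal also works)
        self.pos_emb = nn.Parameter(torch.randn(1, max_len, d_model) * 0.02)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.output_proj = nn.Linear(d_model, n_features)

    def forward(self, x):
        z = self.input_proj(x) + self.pos_emb[:, : x.size(1)]
        z = self.encoder(z)          # representations reusable for downstream tasks
        return self.output_proj(z)   # reconstruction of the input values


def masked_mse_pretraining_step(model, x, mask_ratio=0.15):
    """One unsupervised step: hide a random subset of values and train the
    model to predict them (assumed masked-value objective, MSE on masked
    positions only)."""
    mask = torch.rand_like(x) < mask_ratio   # True where values are hidden
    x_corrupted = x.masked_fill(mask, 0.0)   # zero out the masked values
    pred = model(x_corrupted)
    return ((pred - x)[mask] ** 2).mean()


# Usage: pre-train on unlabeled (or label-stripped) series, then fine-tune
# the encoder on regression/classification.
model = TSTransformerEncoder(n_features=8)
x = torch.randn(32, 100, 8)                  # 32 series, 100 steps, 8 variables
loss = masked_mse_pretraining_step(model, x)
loss.backward()
```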