Ruoqi Liu, Pin-Yu Chen, et al.
Patterns
Learning to classify time series with limited data is a practical yet challenging problem.
Current methods are primarily based on hand-designed feature extraction rules or domain-specific data augmentation. Motivated by the advances in deep speech processing models and the fact that voice data are univariate temporal signals, in this paper we propose Voice2Series (V2S), a novel end-to-end approach that reprograms acoustic models for time series classification, through input transformation learning and output label mapping.
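The reprogramming pipeline described above admits a compact illustration. The sketch below is illustrative only and is not the authors' released code: the pretrained acoustic model, the input lengths, and the many-to-one label map are assumed placeholders, and only the additive input transformation is trained while the acoustic model stays frozen.

```python
import torch
import torch.nn as nn


class ReprogrammedTimeSeriesClassifier(nn.Module):
    """Minimal sketch of acoustic-model reprogramming for time series.

    Assumptions (illustrative, not the paper's code): `speech_model` is a
    frozen nn.Module mapping a waveform of length `source_len` to logits over
    the source (speech) classes, and `label_map` lists, for each target class,
    the source classes mapped to it (many-to-one output label mapping).
    """

    def __init__(self, speech_model, source_len, target_len, label_map):
        super().__init__()
        self.speech_model = speech_model
        for p in self.speech_model.parameters():
            p.requires_grad = False                 # keep the acoustic model frozen

        # Trainable universal input transformation: an additive perturbation
        # applied to the zero-padded region around the target series.
        self.delta = nn.Parameter(torch.zeros(source_len))
        mask = torch.ones(source_len)
        start = (source_len - target_len) // 2
        mask[start:start + target_len] = 0.0        # leave the original signal untouched
        self.register_buffer("mask", mask)
        self.start, self.target_len, self.source_len = start, target_len, source_len
        self.label_map = label_map                  # e.g. {0: [3, 7], 1: [1, 4], ...}

    def forward(self, x):
        # x: (batch, target_len) univariate time series
        padded = x.new_zeros(x.size(0), self.source_len)
        padded[:, self.start:self.start + self.target_len] = x
        reprogrammed = padded + self.mask * self.delta      # learned input transformation
        source_logits = self.speech_model(reprogrammed)     # (batch, num_source_classes)

        # Output label mapping: aggregate source-class probabilities per target class.
        probs = source_logits.softmax(dim=-1)
        target_probs = torch.stack(
            [probs[:, idxs].mean(dim=-1) for idxs in self.label_map.values()],
            dim=-1,
        )
        return target_probs.log()   # log-probabilities over the target classes
```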
Leveraging the representation learning power of a large-scale pre-trained speech processing model, we show that, across 31 different time series tasks, V2S outperforms or is on par with state-of-the-art methods on 22 tasks and improves their average accuracy by 1.72%.
We further provide theoretical justification of V2S by proving that its population risk is upper bounded by the source risk plus a Wasserstein distance that accounts for feature alignment via reprogramming. Our results offer new and effective means for time series classification.
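Read schematically (the notation below is not the paper's; constants and the technical assumptions, such as Lipschitz-type conditions, are omitted), the stated guarantee has the form

$$ \epsilon_{\text{target}}\big(h \circ \mathcal{R}_\theta\big) \;\le\; \epsilon_{\text{source}}(h) \;+\; \mathcal{W}_1\big(\mu_{\text{source}},\, \mu_{\text{reprogrammed target}}\big), $$

i.e., the target population risk of the reprogrammed model is controlled by the source risk plus a Wasserstein-1 distance measuring how well the reprogrammed target features align with the source feature distribution.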
Hazar Yueksel, Ramon Bertran, et al.
MLSys 2020
Saiteja Utpala, Alex Gu, et al.
NAACL 2024
Ingkarat Rak-amnouykit, Ana Milanova, et al.
ICLR 2021