Apoorve Mohan, Ming-Hung Chen, et al.
SC 2025
Foundation models (FMs) have transformed natural language processing (NLP), but their successes have not yet translated to the time series domain. Existing time series foundation models (TSFMs) struggle to generalize across varying context and target lengths, lack adaptability to different sampling rates, and are computationally inefficient. We introduce FlowState, a novel TSFM architecture that addresses these challenges through two key innovations: a state space model (SSM)-based encoder and a functional basis decoder. This design enables continuous-time modeling, adjustment to various sampling rates, and flexible forecasting horizons without retraining. We further propose a parallel training strategy that improves robustness and accelerates training. Despite being the smallest of the compared models, FlowState achieves state-of-the-art results on the GIFT-ZS and Chronos-ZS benchmarks, while demonstrating superior adaptability to unseen sampling rates.
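To make the two components in the abstract concrete, below is a minimal, hypothetical sketch of (1) an SSM-style encoder that summarizes a context series into a state and (2) a functional basis decoder that evaluates a coefficient-weighted set of continuous basis functions at arbitrary future time points, which is what allows the forecast horizon and sampling rate to vary without retraining. All names, dimensions, and the sinusoidal basis are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

d_state, n_basis = 16, 8
A = 0.9 * np.eye(d_state)                 # state transition (stable, illustrative)
B = rng.normal(size=(d_state, 1))         # input projection
W = rng.normal(size=(n_basis, d_state))   # maps final state to basis coefficients

def encode(context: np.ndarray) -> np.ndarray:
    """Run a simple linear SSM recurrence over the context series."""
    x = np.zeros(d_state)
    for y_t in context:
        x = A @ x + B @ np.atleast_1d(y_t)
    return x

def decode(state: np.ndarray, t_future: np.ndarray) -> np.ndarray:
    """Evaluate a coefficient-weighted sinusoidal basis at continuous time points."""
    coeffs = W @ state                           # (n_basis,)
    freqs = np.arange(1, n_basis + 1)
    basis = np.sin(np.outer(t_future, freqs))    # (len(t_future), n_basis)
    return basis @ coeffs

context = np.sin(np.linspace(0, 4 * np.pi, 128))   # toy input series
state = encode(context)
print(decode(state, np.linspace(0, 1, 24)))        # 24-step horizon
print(decode(state, np.linspace(0, 1, 96)))        # finer sampling, same state
```

Because the decoder is a function of continuous time rather than a fixed output grid, the same encoded state can be queried at any horizon length or sampling rate, which is the property the abstract highlights.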
Takumi Yanagawa, Chris Butler
KubeCon Japan 2025
Zaid Qureshi, Vikram Sharma Mailthody, et al.
ASPLOS 2023
Aditya Bhosale, Laxmikant Kale, et al.
FlexScience 2025