Robert Farrell, Rajarshi Das, et al.
AAAI-SS 2010
Recent advances in AI-based weather modeling have given rise to large-scale “weather foundation” models pretrained on vast amounts of weather data. In this study, we investigate whether pretraining the Prithvi weather foundation model on 40 years of MERRA2 data, at a resolution of 0.5° × 0.625°, facilitates adaptation to ERA5, which has a finer resolution of 0.25° × 0.25°. We fine‑tune the Prithvi model on three years of ERA5 data and examine whether MERRA2 pretraining offers any advantage.
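To make the fine‑tuning setup concrete, the sketch below shows one way such an adaptation could look in PyTorch: a backbone pretrained on the coarser MERRA2 grid is initialized from a checkpoint and fine‑tuned on ERA5 inputs regridded to the resolution the backbone expects. This is a minimal illustration under assumptions, not the paper's actual code; `WeatherBackbone`, the checkpoint path, and `era5_loader` are hypothetical placeholders.

```python
# Minimal sketch (assumptions, not the authors' pipeline): fine-tune a model
# pretrained on MERRA2 (0.5° x 0.625°, i.e. a 361 x 576 global grid) using
# finer-resolution ERA5 (0.25° x 0.25°) data regridded to the backbone's grid.
import torch
import torch.nn.functional as F

def regrid(x, target_hw):
    """Bilinearly regrid a (B, C, H, W) tensor to the target lat/lon grid.
    One simple way to bridge the ERA5 -> MERRA2 resolution gap."""
    return F.interpolate(x, size=target_hw, mode="bilinear", align_corners=False)

# Hypothetical backbone and checkpoint, standing in for the pretrained model.
model = WeatherBackbone()                            # assumed model class
state = torch.load("prithvi_merra2_pretrained.pt")   # assumed plain state_dict
model.load_state_dict(state, strict=False)           # tolerate head mismatches

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
for batch in era5_loader:                            # assumed ERA5 DataLoader
    inputs = regrid(batch["inputs"], target_hw=(361, 576))
    targets = regrid(batch["targets"], target_hw=(361, 576))
    loss = F.mse_loss(model(inputs), targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```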
We compare three architectures during fine‑tuning:
Our experiments reveal:
We quantify the ablation effects to isolate the benefit of MERRA2 pretraining versus architecture choice. Our results suggest that, while MERRA2 pretraining provides a useful initialization, substantial fine‑tuning (in terms of both data volume and compute) remains necessary when transferring to higher‑resolution datasets.
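A simple way to frame the ablation is to fine‑tune the same architecture twice, once from the pretrained MERRA2 checkpoint and once from random initialization, and compare validation error on ERA5. The fragment below is a hedged sketch of that comparison; `base_model`, the checkpoint path, and the data loaders are hypothetical, and the evaluation loop is a stand-in for whatever metric the study actually reports.

```python
# Sketch of the pretraining-vs-scratch ablation (assumed setup, not the
# paper's exact protocol): identical architecture, two initializations.
import copy
import torch
import torch.nn.functional as F

def fine_tune_and_evaluate(model, train_loader, val_loader, epochs=3):
    """Hypothetical helper: fine-tune on ERA5 and return validation RMSE
    (approximate, assuming equal batch sizes)."""
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    for _ in range(epochs):
        for batch in train_loader:
            loss = F.mse_loss(model(batch["inputs"]), batch["targets"])
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    mse_per_batch = []
    with torch.no_grad():
        for batch in val_loader:
            pred = model(batch["inputs"])
            mse_per_batch.append(torch.mean((pred - batch["targets"]) ** 2))
    return torch.sqrt(torch.stack(mse_per_batch).mean()).item()

pretrained = copy.deepcopy(base_model)                       # assumed model instance
pretrained.load_state_dict(
    torch.load("prithvi_merra2_pretrained.pt"), strict=False
)
scratch = copy.deepcopy(base_model)                          # random initialization

rmse_pretrained = fine_tune_and_evaluate(pretrained, era5_train, era5_val)
rmse_scratch = fine_tune_and_evaluate(scratch, era5_train, era5_val)
print(f"RMSE reduction from MERRA2 pretraining: {rmse_scratch - rmse_pretrained:.4f}")
```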