Foundation Models (FMs) are reshaping the landscape of scientific machine learning by enabling general-purpose AI models that can be fine-tuned for a variety of tasks with limited supervision. We developed Prithvi-EO-2.0, a multi-temporal geospatial foundation model for Earth observation. Built on a global archive of Harmonized Landsat and Sentinel-2 (HLS) satellite imagery spanning a decade, Prithvi-EO-2.0 is pre-trained on 4.2 million multi-temporal image sequences to capture the seasonal and long-term dynamics of the Earth's surface. Its architecture combines masked autoencoding, 3D spatiotemporal patch embeddings, and novel learnable encodings for time and geolocation, trained at scale on the JUWELS supercomputer with 300M- and 600M-parameter ViT backbones.
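To make two of the ingredients above concrete, the sketch below shows a 3D spatiotemporal patch embedding and MAE-style random masking in plain PyTorch. All names, tensor shapes, and the 75% mask ratio are illustrative assumptions, not the released Prithvi-EO-2.0 code.

```python
# Illustrative sketch only: a 3D spatiotemporal patch embedding plus
# MAE-style random masking, in plain PyTorch. Shapes, names, and the
# 75% mask ratio are assumptions, not the released Prithvi-EO-2.0 code.
import torch
import torch.nn as nn

class PatchEmbed3D(nn.Module):
    """Split a (bands, time, H, W) cube into non-overlapping 3D patches
    and project each patch to a ViT token."""
    def __init__(self, bands=6, t_patch=1, patch=16, dim=1024):
        super().__init__()
        # A Conv3d with kernel == stride patchifies and projects in one step.
        self.proj = nn.Conv3d(bands, dim,
                              kernel_size=(t_patch, patch, patch),
                              stride=(t_patch, patch, patch))

    def forward(self, x):                    # x: (batch, bands, time, H, W)
        x = self.proj(x)                     # (batch, dim, T', H', W')
        return x.flatten(2).transpose(1, 2)  # (batch, tokens, dim)

def random_masking(tokens, mask_ratio=0.75):
    """Keep a random subset of tokens; the decoder reconstructs the rest."""
    b, n, d = tokens.shape
    n_keep = int(n * (1 - mask_ratio))
    ids_shuffle = torch.rand(b, n, device=tokens.device).argsort(dim=1)
    ids_keep = ids_shuffle[:, :n_keep]
    visible = torch.gather(tokens, 1,
                           ids_keep.unsqueeze(-1).expand(-1, -1, d))
    mask = torch.ones(b, n, device=tokens.device)  # 1 = masked position
    mask.scatter_(1, ids_keep, 0.0)
    return visible, mask

# Example: a batch of 4-step, 6-band HLS sequences at 224x224 pixels.
tokens = PatchEmbed3D()(torch.randn(2, 6, 4, 224, 224))  # (2, 784, 1024)
visible, mask = random_masking(tokens)                   # (2, 196, 1024)
```

The learnable time and geolocation encodings would be added to these tokens before the encoder; they are omitted here for brevity.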
Designed for scientific applicability, the model has been extensively benchmarked on GEO-Bench and tested across diverse use cases, including flood and wildfire mapping, landslide detection, crop classification, and ecosystem productivity estimation. Its flexible design enables transfer to both medium- and high-resolution applications and supports modalities such as multi-spectral optical and SAR data. Through the TerraTorch framework, the model is fully open source and ready for community fine-tuning. In this talk, we will describe the design philosophy, training methodology, performance insights, and real-world scientific use cases of Prithvi-EO-2.0, and outline its role as a foundational building block for future digital twins and Earth system FMs.
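As a rough illustration of the fine-tuning workflow, here is a minimal frozen-backbone transfer sketch: the pretrained encoder is held fixed while a small task head is trained. The encoder stand-in and all names below are hypothetical placeholders; an actual workflow would load the backbone and task through TerraTorch's abstractions rather than this hand-rolled loop.

```python
# Hedged sketch of frozen-backbone fine-tuning: train a small task head
# on top of fixed pretrained features. The encoder below is a stand-in;
# a real workflow would load the backbone through TerraTorch instead.
import torch
import torch.nn as nn

def make_finetune_head(encoder, num_classes, dim=1024, lr=1e-3):
    for p in encoder.parameters():          # freeze pretrained weights
        p.requires_grad = False
    head = nn.Linear(dim, num_classes)      # per-token classifier head
    opt = torch.optim.AdamW(head.parameters(), lr=lr)
    return head, opt

encoder = nn.Identity()                     # placeholder for the FM backbone
head, opt = make_finetune_head(encoder, num_classes=2)

feats = encoder(torch.randn(2, 784, 1024))  # pretend token features
labels = torch.randint(0, 2, (2, 784))      # e.g. flood / no-flood per patch
loss = nn.functional.cross_entropy(head(feats).transpose(1, 2), labels)
loss.backward()
opt.step()
```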
Robert Farrell, Rajarshi Das, et al.
AAAI-SS 2010
Chen-chia Chang, Wan-hsuan Lin, et al.
ICML 2025
Gang Liu, Michael Sun, et al.
ICLR 2025
Daniel Karl I. Weidele, Hendrik Strobelt, et al.
SysML 2019