Invited talk

Prithvi-EO: An Open-Access Geospatial Foundation Model Advancing Earth Science through Global Collaboration

Abstract

Significant advancements in adaptable, reusable artificial intelligence (AI) models are transforming Earth science and remote sensing. Foundation models, pre-trained on extensive unlabeled datasets via self-supervision, are being developed and fine-tuned to advance diverse downstream tasks with minimal labeled data. Here, we present Prithvi-EO-2.0, a transformer-based geospatial foundation model developed collaboratively by 42 members from 12 institutions across the US, Europe, and Brazil, led by NASA and IBM. Prithvi-EO is pre-trained on over 7 years of multispectral satellite imagery from NASA's Harmonized Landsat and Sentinel-2 (HLS) global dataset, using ~4.2 million samples for training and 45,568 for validation. Our sampling strategy ensures representation across all land use and land cover classes and more than 800 ecoregions, yielding a dataset three times larger than that of previous versions and maximizing landscape diversity. Prithvi-EO uses a vision transformer architecture with 3D spatiotemporal embeddings, supporting tasks such as flood and wildfire scar mapping, crop classification, and carbon cycle analysis. Rigorous benchmarking against leading models demonstrates state-of-the-art performance while reducing computational requirements and labeled-data needs for downstream applications. The model, fine-tuning workflows, and training data are openly released via Hugging Face and GitHub, exemplifying a commitment to transparency, accessibility, and scientific rigor.
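To illustrate the idea of 3D spatiotemporal embeddings mentioned above, the sketch below shows one common way a vision transformer can tokenize a multi-temporal, multispectral image cube: a 3D convolution splits the (bands, time, height, width) volume into spatiotemporal patches and projects each patch to an embedding vector. This is a minimal, hypothetical sketch in PyTorch, not the actual Prithvi-EO implementation; the band count, patch sizes, and embedding dimension are illustrative assumptions.

```python
# Hypothetical sketch of 3D spatiotemporal patch embedding for a ViT
# operating on multi-temporal multispectral imagery. Parameter values
# (6 bands, 16x16 spatial patches, 768-dim tokens) are assumptions for
# illustration, not Prithvi-EO's actual configuration.
import torch
import torch.nn as nn


class PatchEmbed3D(nn.Module):
    """Split a (bands, time, H, W) cube into 3D patches and project each to a token."""

    def __init__(self, bands=6, t_patch=1, patch=16, dim=768):
        super().__init__()
        # A Conv3d with kernel == stride carves non-overlapping
        # (t_patch x patch x patch) blocks and linearly projects each to `dim`.
        self.proj = nn.Conv3d(
            bands, dim,
            kernel_size=(t_patch, patch, patch),
            stride=(t_patch, patch, patch),
        )

    def forward(self, x):
        # x: (batch, bands, T, H, W)
        x = self.proj(x)                      # (batch, dim, T', H', W')
        return x.flatten(2).transpose(1, 2)   # (batch, num_tokens, dim)


# Example: 3 time steps of a 6-band 224x224 scene -> 3 * 14 * 14 = 588 tokens
embed = PatchEmbed3D()
tokens = embed(torch.randn(1, 6, 3, 224, 224))
print(tokens.shape)  # torch.Size([1, 588, 768])
```

The resulting token sequence is what the transformer encoder attends over; because patches span time as well as space, the model can learn temporal dynamics (e.g., flood onset or burn-scar evolution) directly from the pre-training imagery.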

Awarded the AGU Open Science Recognition Prize, the Prithvi-EO team sets a new benchmark for open, collaborative research. By fostering interdisciplinary partnerships, co-designing with domain scientists, and prioritizing education and outreach, Prithvi-EO accelerates scientific discovery and democratizes access to advanced geospatial AI for the global Earth science community.