AI-powered satellites will upend how we observe our changing planet
IBM Research, along with NASA, ESA, and others, has open-sourced drastically smaller versions of its Prithvi and TerraMind Earth observation models that can run on just about any device, from satellites orbiting the planet to the smartphone in your pocket, while largely maintaining the performance of the originals.
Our world is changing faster than at any other period in recorded human history. Natural disasters, animal extinction, and deforestation are just some of the challenges that organizations around the world are working to mitigate every single day. AI offers new ways to tackle these pressing issues, but those in the field closest to the problems aren’t often able to make use of these tools. IBM Research is looking to change that, using devices just like the one you’re reading this on.
Today, IBM is open-sourcing new lightweight versions of its geospatial and Earth observation models, specially designed to run on edge devices like laptops and smartphones. These new “tiny” and “small” versions of the Prithvi and TerraMind models can run on consumer devices with very little drop-off in performance compared with their industry-leading predecessors. These models could reshape how we think about doing science in regions far from the lab, whether that’s in the vacuum of space or the savanna.
Observing the planet — wherever you are
Earlier this year, IBM launched TerraMind, a multimodal generative AI model for Earth observation (EO). The model was developed within FAST-EO, an initiative led by a consortium comprising the German Aerospace Center (DLR), Forschungszentrum Jülich, IBM Research, and KP Labs, and supported and funded by ESA's Φ-lab. TerraMind currently leads open community benchmarks like PANGAEA, where it is the only model to surpass traditional EO methods.
And late last year, IBM unveiled the latest version of Prithvi, Prithvi-EO 2.0, built with NASA and JSC. The new geospatial model greatly improved on its predecessor’s capabilities and performance, allowing users to analyze data across seasons and better understand the planet’s dynamic changes. The open-source model was recently awarded the American Geophysical Union’s 2025 Open Science Recognition Prize.
While powerful, TerraMind and Prithvi are not particularly portable. Previous versions required powerful computers to run, and so couldn’t be used to tackle problems in real time. Someone looking to use one of the models out in the world would need to send their data to more powerful hardware, which would take time and slow down their experiments. That led the team to work on versions that could be adapted to more devices.
The new tiny and small versions use a frozen encoder to achieve very similar results on the PANGAEA benchmark. There was less than a 10% difference between Prithvi 600M and Prithvi.tiny, even though the latter is 120 times smaller.
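To make that idea concrete, here is a minimal PyTorch-style sketch of what a frozen encoder means in practice. The encoder and head below are hypothetical stand-ins, not the actual Prithvi or TerraMind architectures: the pre-trained backbone stays fixed, and only the small task head receives gradient updates.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins: `encoder` plays the role of a pre-trained
# Prithvi/TerraMind backbone, `head` is a small task-specific classifier.
encoder = nn.Sequential(
    nn.Conv2d(6, 64, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
head = nn.Linear(64, 10)  # e.g. 10 land-cover classes, for illustration

# Freeze the encoder: its weights stay constant during fine-tuning.
for p in encoder.parameters():
    p.requires_grad = False

# Only the head's parameters are handed to the optimizer.
optimizer = torch.optim.AdamW(head.parameters(), lr=1e-3)

x = torch.randn(8, 6, 224, 224)   # a batch of 6-band image chips
y = torch.randint(0, 10, (8,))    # dummy labels

with torch.no_grad():             # no gradients flow through the frozen encoder
    features = encoder(x)
loss = nn.functional.cross_entropy(head(features), y)
loss.backward()
optimizer.step()                  # updates the head only
```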
A new space for AI
The size and efficacy of these models opens all sorts of new potential applications, including some beyond the bounds of our atmosphere.
Modern satellites are essentially solar-powered computers packed with sensors, orbiting our planet. Today, they’re often loaded with simple machine-learning models that can carry out specific tasks as they orbit. But that software is usually installed before heading into space, as satellite uplinks are costly.
With IBM’s new models, the frozen encoder can be uploaded before a satellite is launched. Then, a lightweight, task-specific decoder head can be beamed up once the satellite is in orbit. These files are tiny, usually only around 1 to 2 MB for classification tasks, so they can be uplinked with relative ease. This could extend the abilities of satellites, and potentially revolutionize the way we use data in space. It’s a step towards the idea of software-defined satellites.
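To see why the uplink payload stays so small, the sketch below serializes only a hypothetical decoder head and checks its size. The 768-dimensional embedding and 10 output classes are illustrative assumptions, not values from the article.

```python
import io
import torch
import torch.nn as nn

# A hypothetical lightweight classification head; the frozen encoder is
# already on board the satellite, so only these weights would be uplinked.
head = nn.Sequential(
    nn.Linear(768, 256),   # 768: assumed encoder embedding size
    nn.ReLU(),
    nn.Linear(256, 10),    # 10: assumed number of classes
)

# Serialize just the head's weights, as an uplink payload would be.
buffer = io.BytesIO()
torch.save(head.state_dict(), buffer)
print(f"Decoder head payload: {buffer.getbuffer().nbytes / 1e6:.2f} MB")
# Roughly 0.8 MB with these assumptions, in line with the 1 to 2 MB figure above.
```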
With these new versions of Prithvi and TerraMind, satellites can analyze and interpret information in real time, using on-device processing. Instead of merely collecting data, satellites could now run inference as well, sifting through the petabytes of data they gather and sending only what’s pertinent back down to Earth. The real-time aspect is particularly important for climate disaster management, where every minute counts to save lives, as ESA’s Φ-lab demonstrated in the D-Orbit mission.
Together with the space technology company Unibap Space Solutions, the team behind the models tested them on devices in conditions similar to those on a satellite orbiting Earth. TerraMind.tiny and Prithvi.tiny were uploaded to Unibap’s iX5 and iX10 computing platforms. The iX5 platform has been used on satellites operated by NASA.
On the iX10, TerraMind.tiny ran inference at 325 frames per second (fps) on 224×224-pixel images with 12 spectral bands and a batch size of 32, while Prithvi.tiny achieved 329 fps using six bands. This corresponds to more than 2 Gbit/s of processed data, compared with the roughly 1 Gbit/s raw data rate of a Sentinel-2 satellite. The smaller iX5 system’s inference speeds were roughly five times slower, but that is still more than enough for running smaller real-time analysis tasks onboard.
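As a rough sanity check on that figure, the throughput follows directly from the frame rate and image dimensions. The 16-bit sample depth below is an assumption for illustration, not a detail given in the article.

```python
# Back-of-the-envelope check of the on-board processing throughput.
frames_per_second = 325     # TerraMind.tiny on the iX10
height = width = 224        # pixels per image chip
bands = 12                  # spectral bands
bits_per_sample = 16        # assumed storage bit depth per band value

bits_per_frame = height * width * bands * bits_per_sample
throughput_gbit_s = frames_per_second * bits_per_frame / 1e9
print(f"{throughput_gbit_s:.1f} Gbit/s")
# ~3.1 Gbit/s with these assumptions; the article reports more than 2 Gbit/s.
```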
| | TerraMind.tiny | TerraMind.small | Prithvi EO 2.0 tiny TL | Prithvi EO 2.0 100M TL |
|---|---|---|---|---|
| Parameters | 5M | 20M | 5M | 87M |
| Encoder memory | 23 MB | 86 MB | 22 MB | 328 MB |
| Multimodal | ✅ | ✅ | ⚙️ | ⚙️ |
| Multitemporal | ⚙️ | ⚙️ | ✅ | ✅ |
| Generative | ✅ | ✅ | ❌ | ❌ |
| Performance drop*** | 15%* | 10%* | 4%** | 1%** |
| GFLOPs*** | 9% | 15% | 9% | 36% |
| Frames/second on iX10 | 325 | — | 329 | — |
| Ideal for | Hardware with limited resources (e.g. satellites) | Development & edge devices (e.g. phones) | Hardware with limited resources (e.g. satellites) | Development with close to maximum performance |
A model family comparison table.1 ⚙️ = Extensions available via TerraTorch.
These new models can run on far more modest hardware, too. In building TerraMind.tiny, the team used IBM TerraTorch to fine-tune the model to detect elephants from drone imagery. This was based on a project IBM undertook with WWF in 2024 to protect elephant species in Africa. The team was able to run the entire model on an iPhone 16 Pro and infer on images from a drone’s live video stream, without the need for cloud computing or network connectivity.
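The article doesn’t describe how the model was packaged for the phone, but one common route to on-device inference is exporting a PyTorch model to a portable format such as ONNX, which mobile runtimes can consume. A minimal sketch with a hypothetical stand-in model, not the actual TerraMind.tiny checkpoint:

```python
import torch
import torch.nn as nn

# Stand-in for a fine-tuned edge model; not the real TerraMind.tiny.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 2),        # e.g. "elephant" vs "no elephant"
)
model.eval()

# Export to ONNX so an on-device runtime can run inference locally.
example_input = torch.randn(1, 3, 224, 224)   # one RGB drone frame
torch.onnx.export(
    model, example_input, "elephant_detector.onnx",
    input_names=["frame"], output_names=["logits"],
)
```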
To show just how small yet mighty these models are, the team built two examples that can run right in the browser of any device you’re on. One is a demo of TerraMind.tiny using inference to find elephants in images, much like the WWF application. The other was built with Prithvi-EO-2.0 Tiny to classify land use from images.
"With the .tiny and .small models, we’re adding to the TerraMind and Prithvi families, and bringing our AI Earth observation capabilities to edge devices,” said Juan Bernabé-Moreno, director of IBM Research’s labs in the UK and Ireland, and leader of the division’s climate and sustainability projects. “We also want to lower the adoption barrier for the development community, by drastically reducing the hardware requirements — anyone with a conventional laptop could fine-tune these models and create new applications to better monitor our planet.”
All four new models are now available under an Apache 2.0 license on the IBM-NASA and IBM-ESA Hugging Face pages. With a new idea, a bit of fine-tuning, and just about any device you have on hand, you have the potential to unlock myriad new ways to observe, understand, and care for our planet.
“In the end, what really defines a model is how much value it helps unlock,” Bernabé-Moreno added.
Notes
- Note 1: This is a pre-trained neural network layer whose weights are kept constant while other parts of the model are trained. This can improve efficiency and help prevent the model from forgetting what it has already learned.
- *Results obtained with a frozen backbone as per the PANGAEA benchmarking protocol. **Results obtained with full finetuning as per the GEO-Bench benchmarking protocol. ***Compared to the large version (~300M).