Google DeepMind’s AlphaEarth Foundations is a ‘Virtual Satellite’ that sees through clouds. We break down the STP architecture, the 64-dim embeddings, and how to use it in Google Earth Engine today.

Satellite imagery has a “dirty little secret” that costs the global economy billions every year: Cloud cover.
At any given moment, 67% of the Earth is covered by clouds. For a hedge fund tracking soybean yields in Brazil or an insurance adjuster verifying flood damage in Florida, a cloudy image is worse than useless—it’s a data gap.
Google DeepMind’s new solution isn’t to launch more satellites. It’s to launch a “Virtual Satellite.”
Enter AlphaEarth Foundations, a geospatial AI model that doesn’t just photograph the Earth—it understands it. By fusing optical, radar, and climate data into a single “embedding field,” it allows developers to query the planet’s surface state regardless of weather, lighting, or sensor availability.
Here is the deep technical dive on how it works, how to access it in Google Earth Engine today, and how it compares to the competition.
The Architecture: Inside the “Space-Time-Precision” Encoder
AlphaEarth is not a standard Convolutional Neural Network (CNN) trained on JPEGs. It is a massive Foundation Model built on a proprietary architecture called the STP (Space-Time-Precision) Encoder.
Unlike traditional models that look at a static image, the STP Encoder processes the Earth as a continuous 4D volume. It uses three distinct operators simultaneously:
- Space Operator (ViT-based): Uses Vision Transformer self-attention (at 1/16L resolution) to understand spatial context (e.g., “This is a forest”).
- Time Operator: A time-axial self-attention layer (1/8L resolution) that understands temporal evolution (e.g., “This forest sheds leaves in November”).
- Precision Operator: A 3×3 convolutional layer (1/2L resolution) that retains fine-grained local details.
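The exact STP internals are proprietary, but the multi-resolution idea can be sketched with a toy example. The snippet below (plain JavaScript, not Earth Engine code) just average-pools one tile at three factors loosely mirroring the 1/16, 1/8, and 1/2 resolutions quoted above; the real operators use self-attention and convolutions, not pooling.

```javascript
// Toy illustration only: shows one input analyzed at several spatial
// resolutions in parallel, the core intuition behind the STP design.

// Average-pool a square grid by an integer factor.
function avgPool(grid, factor) {
  var n = grid.length / factor;
  var out = [];
  for (var i = 0; i < n; i++) {
    var row = [];
    for (var j = 0; j < n; j++) {
      var sum = 0;
      for (var di = 0; di < factor; di++) {
        for (var dj = 0; dj < factor; dj++) {
          sum += grid[i * factor + di][j * factor + dj];
        }
      }
      row.push(sum / (factor * factor));
    }
    out.push(row);
  }
  return out;
}

// A 16x16 "tile" of fake reflectance values.
var tile = [];
for (var i = 0; i < 16; i++) {
  var row = [];
  for (var j = 0; j < 16; j++) row.push((i + j) % 7);
  tile.push(row);
}

// Three parallel views at the three quoted resolution fractions.
var spaceView = avgPool(tile, 16);    // 1x1 - broad spatial context
var timeView = avgPool(tile, 8);      // 2x2 - coarse structure
var precisionView = avgPool(tile, 2); // 8x8 - fine local detail

console.log(spaceView.length, timeView.length, precisionView.length); // 1 2 8
```

The point is that coarse views are cheap enough for global context while the fine view preserves detail; the encoder fuses all three into one output.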
The “Embedding” Breakthrough
The output of this model is not an RGB image. It is a 64-dimensional embedding vector for every 10×10 meter square of the planet.
- Old Way: You download terabytes of raw Sentinel-2 TIF files, masking out clouds and correcting for atmospheric haze.
- AlphaEarth Way: You query a single 64-float vector. This vector represents the “mathematical truth” of that location, fused from petabytes of noisy inputs.
Technical Spec: Google claims these embeddings require 16x less storage than equivalent raw data while delivering a 24% lower error rate on downstream classification tasks.
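To make the per-pixel footprint concrete, here is a back-of-envelope calculation in plain JavaScript. The assumption of one float32 per dimension is illustrative, not a published storage format; the "16x less" figure above is Google's own claim.

```javascript
// One embedding: 64 float32 values for a single 10 m x 10 m pixel.
var embedding = new Float32Array(64);
console.log(embedding.byteLength); // 256 bytes per pixel per year

// Scale that up to one square kilometer of 10 m pixels.
var pixelsPerSqKm = (1000 / 10) * (1000 / 10); // 10,000 pixels
var bytesPerSqKm = embedding.byteLength * pixelsPerSqKm;
console.log(bytesPerSqKm / 1e6); // 2.56 MB per km^2 per year
```

Compare that to downloading and storing every cloud-free Sentinel-2 acquisition over the same square kilometer for a year, and the appeal of pre-computed vectors becomes obvious.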
Data Fusion: What’s Under the Hood?
AlphaEarth Foundations is a “Sensor Fusion” engine. It doesn’t just look at colors; it looks at structure and temperature. The training data includes:
- Optical: Sentinel-2, Landsat (Surface reflectance).
- Radar: Sentinel-1 (SAR) – Crucial for “seeing through” clouds.
- LiDAR: GEDI (Global Ecosystem Dynamics Investigation) – Provides 3D canopy height structure.
- Climate: ERA5 Reanalysis data – Adds temperature and precipitation context.
By training on this mix, the model learns that Pixel A (Cloudy) is the same forest as Pixel B (Sunny) because their Radar and LiDAR signatures match.
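That matching intuition can be sketched with cosine similarity in plain JavaScript. The vectors below are made-up 4-dimensional stand-ins for the real 64-dimensional embeddings, chosen only to illustrate the idea:

```javascript
// Cosine similarity between two embedding vectors. For unit-length
// vectors this reduces to a plain dot product.
function cosineSim(a, b) {
  var dot = 0, na = 0, nb = 0;
  for (var i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Hypothetical embeddings: the "cloudy" and "sunny" views of the same
// forest land near-identical vectors because the radar and LiDAR
// components of the fusion are unaffected by cloud cover.
var forestCloudy = [0.61, -0.12, 0.33, 0.70];
var forestSunny  = [0.60, -0.10, 0.35, 0.71];
var bareSoil     = [-0.40, 0.80, -0.05, 0.44];

console.log(cosineSim(forestCloudy, forestSunny)); // close to 1.0
console.log(cosineSim(forestCloudy, bareSoil));    // far lower
```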

Developer Access: How to Use It (Right Now)
Quick Start Code Snippet (JavaScript for GEE)
This script visualizes the “Virtual Earth” by mapping three of the 64 abstract dimensions to RGB colors. Similar colors imply similar physical properties (e.g., crops vs. forests), even across different continents.
// Load the AlphaEarth annual embeddings
var dataset = ee.ImageCollection('GOOGLE/SATELLITE_EMBEDDING/V1/ANNUAL')
    .filterDate('2023-01-01', '2023-12-31');

// Select 3 arbitrary dimensions (out of 64) to visualize as RGB
var visualization = {
  bands: ['A00', 'A32', 'A63'], // Dimensions 0, 32, and 63
  min: -0.15,
  max: 0.15,
};

// Add the "Neural Map" to your viewport
Map.setCenter(-55.0, -10.0, 6); // Centered on Brazil
Map.addLayer(dataset, visualization, 'AlphaEarth Embeddings (False Color)');

Comparison: AlphaEarth vs. IBM Prithvi
The main competitor in the “Geospatial Foundation Model” space is Prithvi, built by IBM and NASA. Which one should you choose?
| Feature | Google AlphaEarth | IBM/NASA Prithvi |
| --- | --- | --- |
| Model Access | Closed source (embeddings only) | Open source (Hugging Face weights available) |
| Output Type | Pre-computed 64-dim vectors | Fine-tunable ViT model |
| Integration | Native in Google Earth Engine | Requires PyTorch/TerraTorch setup |
| Best For… | Rapid analysis, app development, “querying” the planet | Custom research, novel architectures, running offline |
| Secret Weapon | Radar/LiDAR fusion (sees through weather) | Temporal flexibility (better for daily change detection) |
Verdict: Use AlphaEarth if you want speed and are already in the Google Cloud ecosystem. Use Prithvi if you need to own the model or run it on your own GPU cluster.
Commercial Use Cases (High-Revenue Ideas)
- Parametric Insurance (Flood/Fire):
  - Concept: Use the embeddings to train a “Similarity Search.” Find all properties in Arizona that have the exact same vegetation fuel load vector as a neighborhood that burned down in California.
  - Why AlphaEarth: The 64-dim vector captures dryness and biomass density better than simple RGB indexes like NDVI.
- Commodities Trading (Yield Prediction):
  - Concept: Monitor soybean crop health in the Amazon during the rainy season (when optical satellites are blind).
  - Why AlphaEarth: The Sentinel-1 radar fusion allows the embeddings to track crop growth stage even under 100% cloud cover.
- Utility Infrastructure Monitoring:
  - Concept: Detect vegetation encroachment on power lines at a continental scale.
  - Why AlphaEarth: The 10m resolution + LiDAR training context makes it highly sensitive to “canopy height” anomalies.
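The “similarity search” pattern underlying these ideas is simple nearest-neighbor lookup over embedding vectors. Here is a minimal plain-JavaScript sketch using Euclidean distance and made-up 3-dimensional stand-ins for real 64-dimensional embeddings; labels and values are hypothetical:

```javascript
// Euclidean distance between two embedding vectors.
function distance(a, b) {
  var sum = 0;
  for (var i = 0; i < a.length; i++) {
    var d = a[i] - b[i];
    sum += d * d;
  }
  return Math.sqrt(sum);
}

// Return the label of the reference embedding closest to the query.
function nearestLabel(query, references) {
  var best = null, bestDist = Infinity;
  for (var k = 0; k < references.length; k++) {
    var d = distance(query, references[k].vector);
    if (d < bestDist) { bestDist = d; best = references[k].label; }
  }
  return best;
}

// References sampled from areas with known outcomes (hypothetical values).
var references = [
  { label: 'high-fuel-load', vector: [0.8, 0.1, -0.3] },
  { label: 'irrigated-crop', vector: [-0.2, 0.7, 0.4] },
  { label: 'bare-ground',    vector: [0.0, -0.6, 0.5] },
];

// A property whose embedding resembles the burned neighborhood.
var query = [0.75, 0.15, -0.25];
console.log(nearestLabel(query, references)); // prints "high-fuel-load"
```

In production you would run the same comparison at scale inside Earth Engine rather than in client-side code, but the geometry is identical.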
Conclusion: The “Clear Sky” Era
AlphaEarth Foundations isn’t just a better map; it’s a shift from observing pixels to querying physical reality.
For developers, the friction of remote sensing—atmospheric correction, cloud masking, sensor alignment—has essentially been “AI-abstracted” away into a clean 64-float vector. The question is no longer “Can we see the ground?” It’s “What specific question do you want to ask it?”