The accuracy conversation usually stops at 2D
When someone scopes a geospatial annotation project, the accuracy requirement is almost always stated in horizontal terms: "1 meter horizontal" or "sub-meter accuracy" or "matches the road centerline within X feet." That's the 2D position — where is the feature on the map.
Vertical accuracy — where is the feature in the third dimension, the elevation — is usually unspecified. Sometimes it's irrelevant: the consumer is a 2D map, the analysis is a planar overlay, vertical doesn't change the answer. Often, though, vertical matters and nobody asked.
When vertical matters and isn't specified, the annotations get delivered with whatever elevation the source imagery happened to provide. Sometimes that's accurate. Sometimes it's wildly wrong — orthophoto-derived "elevations" that are just ground level interpolated to wherever the pixel falls, or LiDAR point heights that were assigned to the wrong return.
This article is about when vertical accuracy matters, how we measure it, and what to ask for if you want it.
When vertical accuracy actually matters
Four common cases.
Asset management with vertical clearance constraints
Bridge clearances, overhead utility lines, traffic signal mounting heights, sign mounting heights. These all carry regulatory minimums (a bridge needs X feet of clearance for trucking, a stop sign needs to be Y feet above pavement). An annotation that says "stop sign at lat/lon" with no vertical doesn't tell you whether it meets the height standard.
Drainage and hydrology modeling
Culverts, catch basins, road grade. Whether water flows from point A to point B depends on the elevation difference between them, not their horizontal positions. A drainage annotation with wrong elevations feeds a hydrology model that confidently predicts the wrong flow path.
Autonomous vehicle ground truth
Lane lines aren't just 2D — they're 2.5D on a sloped road. An AV that thinks a banked curve is flat will misjudge cornering speed. Annotated training data with consistent vertical accuracy across thousands of frames is harder to generate than horizontal-only data, and AV teams pay for it explicitly.
Permitting and zoning
Building heights, setback violations, view-corridor obstructions. Anything where regulatory compliance is height-based needs vertical accuracy. A solar permit for a rooftop array needs to know the roof's elevation profile, not just its outline.
How vertical accuracy is measured
The standard metric is RMSEz: root mean square error in the vertical direction, measured against surveyed ground-truth control points. An RMSEz of 15 cm means the square root of the mean squared vertical error at the checkpoints is 15 cm. That is not the same as "every feature within 15 cm": squaring before averaging weights outliers heavily, which is why ASPRS also states vertical accuracy at the 95% confidence level (1.96 × RMSEz, assuming normally distributed errors).
Federal mapping work follows the ASPRS 2014 Positional Accuracy Standards for Digital Geospatial Data, which specifies vertical accuracy classes (1, 2.5, 5, 10, 15, 20, 33.3, 66.7 cm RMSEz). When we deliver annotations with a vertical accuracy claim, we report which class we hit and the test data we used to verify it.
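As a concrete illustration, here's a minimal sketch of the computation, assuming a simple list of checkpoint comparisons. The helper names and the sample numbers are hypothetical; the class limits are the RMSEz values listed above, expressed in meters.

```python
import math

# ASPRS 2014 vertical accuracy classes as maximum RMSEz, in meters
# (the 1, 2.5, 5, 10, 15, 20, 33.3, 66.7 cm classes listed above).
ASPRS_CLASSES_M = [0.01, 0.025, 0.05, 0.10, 0.15, 0.20, 0.333, 0.667]

def rmse_z(annotated_z, surveyed_z):
    """Root mean square vertical error against surveyed checkpoints."""
    errors = [a - s for a, s in zip(annotated_z, surveyed_z)]
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def asprs_class(rmsez_m):
    """Tightest ASPRS vertical accuracy class the measured RMSEz satisfies."""
    for limit in ASPRS_CLASSES_M:
        if rmsez_m <= limit:
            return limit
    return None  # worse than the coarsest listed class

# Hypothetical checkpoints: annotated elevations vs. surveyed truth, in meters.
annotated = [102.31, 98.77, 105.02, 99.40]
surveyed = [102.25, 98.90, 104.95, 99.52]
r = rmse_z(annotated, surveyed)
print(f"RMSEz = {r:.3f} m, ASPRS class = {asprs_class(r)} m")
```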
For projects without surveyed control points, we anchor against the best available authoritative source — USGS 3DEP LiDAR for natural terrain in much of the US, state-level LiDAR programs where they exist, or client-provided survey control where available. We document the source in the deliverable so downstream consumers can verify the chain.
What we do during the project
Four practical steps.
Source vetting
Before annotation starts, we check what vertical reference the imagery carries. Orthophoto-derived elevations are usually unreliable for individual features. Mobile LiDAR captures with surveyed GPS bases are usually reliable. Drone photogrammetry depends entirely on whether ground control points were established at capture.
If the source doesn't carry usable vertical, we say so explicitly. We don't make up elevations from interpolation — that produces confident-looking data that's wrong, which is worse than admitting we don't have the value.
Schema fields for vertical metadata
Every annotation in a vertical-accuracy project carries fields beyond the standard ones: elevation_value, elevation_source (LiDAR return, photogrammetry, surveyed control, derived from DEM), elevation_datum (NAVD88, ITRF, local), vertical_accuracy_estimate (per-feature RMSEz where we can compute it).
These fields are filterable. Downstream consumers who need high vertical accuracy can use only the features that meet their requirement. Consumers who only care about 2D can ignore the fields.
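A minimal sketch of what those per-feature fields could look like in code, and how a downstream consumer might filter on them. The dataclass shape, field types, and threshold are illustrative, not our actual schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AnnotationVertical:
    feature_id: str
    elevation_value: Optional[float]  # meters, in the stated datum; None if unknown
    elevation_source: str             # "lidar_return", "photogrammetry", "surveyed_control", "dem_derived"
    elevation_datum: str              # "NAVD88", "ITRF", "local"
    vertical_accuracy_estimate: Optional[float]  # per-feature RMSEz (m), where computable

def filter_by_vertical(features, max_rmsez_m=0.15):
    """Keep only features whose accuracy estimate meets the consumer's requirement."""
    return [
        f for f in features
        if f.vertical_accuracy_estimate is not None
        and f.vertical_accuracy_estimate <= max_rmsez_m
    ]
```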
Cross-check against authoritative DEMs
Spatial QA includes a vertical check: each annotation's elevation is compared to the corresponding USGS 3DEP DEM value (or whatever authoritative DEM applies to the project area). When the annotation is more than 2 meters off the DEM without a known reason, it gets flagged for senior review.
Some reasons for being off the DEM are legitimate (the feature is above ground — a sign on a pole is 7 feet above the DEM and that's correct). The QA rule is: flagged differences require an explanation, not automatic rejection.
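A sketch of that check, assuming the authoritative DEM is available as a local GeoTIFF and the annotation coordinates are already in the DEM's CRS. The rasterio sampling and the dict shape of the annotations are assumptions; any raster sampler would do.

```python
import rasterio

FLAG_THRESHOLD_M = 2.0  # the "more than 2 meters off the DEM" rule above

def flag_vertical_outliers(dem_path, annotations):
    """Return annotations whose elevation differs from the DEM by more than
    the threshold, queued for senior review rather than auto-rejected."""
    flagged = []
    with rasterio.open(dem_path) as dem:
        coords = [(a["x"], a["y"]) for a in annotations]
        for ann, dem_value in zip(annotations, dem.sample(coords)):
            diff = ann["elevation_value"] - float(dem_value[0])
            if abs(diff) > FLAG_THRESHOLD_M:
                # Legitimate reasons exist: a sign head on a pole sits well
                # above the bare-earth DEM. Flag with the difference attached.
                flagged.append({**ann, "dem_diff_m": diff})
    return flagged
```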
Report vertical accuracy with the deliverable
Every batch ships with a QA report. For vertical-accuracy projects, the report includes RMSEz against test points, the percentage of features within each ASPRS accuracy class, and the count of features with unverifiable vertical (with reasons).
We don't average horizontal and vertical accuracy into a single number. They're different and they fail differently. Reporting them separately keeps the deliverable honest.
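A sketch of the vertical half of such a report, under the same assumptions as the RMSEz example above. Here, "percentage within each ASPRS class" is interpreted as the share of tested features whose vertical error falls within each class limit; the function and argument names are hypothetical.

```python
import math

# The ASPRS class limits from the earlier sketch, in meters.
ASPRS_CLASSES_M = [0.01, 0.025, 0.05, 0.10, 0.15, 0.20, 0.333, 0.667]

def vertical_qa_report(annotated_z, surveyed_z, unverifiable):
    """Summarize vertical accuracy for the deliverable, kept separate from
    horizontal. `unverifiable` maps feature_id -> reason the vertical value
    couldn't be verified (hypothetical structure)."""
    errors = [a - s for a, s in zip(annotated_z, surveyed_z)]
    rmsez = math.sqrt(sum(e * e for e in errors) / len(errors))
    return {
        "rmsez_m": round(rmsez, 3),
        # Share of tested features whose vertical error is within each class.
        "pct_within_class": {
            f"{limit} m": 100.0 * sum(abs(e) <= limit for e in errors) / len(errors)
            for limit in ASPRS_CLASSES_M
        },
        "unverifiable_count": len(unverifiable),
        "unverifiable_reasons": unverifiable,
    }
```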
Where teams underspecify
Three common gaps.
"Sub-meter accuracy" with no axis. Sub-meter horizontally is straightforward. Sub-meter vertically is much harder and much more expensive. The phrase by itself doesn't specify which.
No datum specified. NAVD88 and NGVD29 differ by up to a meter in some parts of the US. NAD83 and ITRF realizations differ at roughly the meter level, and the offset varies by location. If the project doesn't state which vertical datum applies, the annotations can be precisely captured against the wrong reference.
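A cheap guard against this failure mode is making the datum an explicit, validated field instead of an implicit assumption. A sketch, assuming features carry the elevation_datum field from the schema section above:

```python
PROJECT_VERTICAL_DATUM = "NAVD88"  # stated once, up front, in the project spec

def check_datum(features):
    """Fail fast on features captured against a different vertical reference.
    A value can be precise and still wrong if the datum doesn't match."""
    mismatched = [f for f in features if f.elevation_datum != PROJECT_VERTICAL_DATUM]
    if mismatched:
        raise ValueError(
            f"{len(mismatched)} features not referenced to {PROJECT_VERTICAL_DATUM}; "
            "transform or re-capture before delivery"
        )
```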
No QA budget for vertical. Spatial QA against DEMs takes time and engineering. Projects that don't budget for it end up with annotations whose horizontal accuracy is measured and verified, and whose vertical accuracy is whatever the source happened to provide.
Practical recommendations
If vertical accuracy matters to your downstream consumer:
- Specify it explicitly. RMSEz, threshold, and datum (see the sketch after this list).
- Make sure your source imagery actually carries usable vertical data. If you're not sure, ask before you commit.
- Budget for vertical QA — usually 10-15% of the annotation cost for projects where it matters.
- Insist the deliverable report vertical accuracy separately from horizontal.
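For reference, here is one way an explicit vertical requirement might be written down. The keys are illustrative, not a standard format.

```python
# Hypothetical project spec fragment: the vertical requirement stated explicitly.
vertical_spec = {
    "rmsez_max_m": 0.15,         # required accuracy (the ASPRS 15 cm class)
    "vertical_datum": "NAVD88",  # the reference every elevation is reported against
    "qa_budget_pct": (10, 15),   # vertical QA allowance, per the recommendation above
    "report_separately": True,   # vertical accuracy reported apart from horizontal
}
```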
If vertical doesn't matter to you, save the budget. Just don't accept claims of "high accuracy" without knowing whether they cover the third dimension.
Got a vertical-accuracy project to scope? Send a sample of your imagery and your accuracy requirement. We'll come back with an honest assessment of what's achievable from that source and what the QA budget looks like.