Making GIMI Actionable: What We're Learning in OGC Testbed-21

Introduction

A new imagery format doesn't automatically make data usable. That's the gap Voyager is working to close in OGC Testbed-21, and the implementation details are where things get interesting.


Date: 04.16.26

Author: Alex Bostic

Type: Insights

GIMI solves a real problem. Unlike NITF and the other formats it's positioned to replace, it's built for modern workflows: still imagery, motion imagery, tiled imagery, and rich metadata in a single, flexible container.

But the format being well-designed and the format being operational are two different things. Our contribution to Testbed-21 is focused on that second part: what does it actually take to ingest GIMI, make sense of its metadata, and surface it through the APIs that real systems depend on?

What We're Actually Building

A GIMI extractor/reader inside the Voyager platform. This parses GIMI files and their associated IDO ontology sidecar files, extracts imagery and structured metadata, and validates whether the format behaves the way the spec intends in a real pipeline. That last part matters: implementation work surfaces edge cases and ambiguities that spec review alone doesn't catch. We're feeding that feedback back into the broader Testbed community as we go.
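GIMI builds on the ISOBMFF/HEIF container, so any reader starts by walking the container's size/type-prefixed boxes. As a minimal sketch (not our actual extractor), the loop below parses top-level boxes, including the 64-bit "largesize" and to-end-of-file cases; nested boxes and `uuid` extensions are ignored:

```python
import struct
from io import BytesIO

def iter_boxes(stream):
    """Yield (box_type, payload) for top-level ISOBMFF boxes.

    Minimal sketch: handles the 32-bit size, 64-bit 'largesize',
    and size==0 (box runs to end of file) cases only.
    """
    while True:
        header = stream.read(8)
        if len(header) < 8:
            return
        size, box_type = struct.unpack(">I4s", header)
        if size == 1:  # 64-bit largesize follows the type field
            size = struct.unpack(">Q", stream.read(8))[0]
            payload = stream.read(size - 16)
        elif size == 0:  # box extends to end of file
            payload = stream.read()
        else:
            payload = stream.read(size - 8)
        yield box_type.decode("ascii"), payload

# Synthetic two-box file: an 'ftyp' box followed by a 'free' box
ftyp = struct.pack(">I4s", 16, b"ftyp") + b"mif1" + struct.pack(">I", 0)
free = struct.pack(">I4s", 8, b"free")
boxes = list(iter_boxes(BytesIO(ftyp + free)))
print([t for t, _ in boxes])  # → ['ftyp', 'free']
```

A real reader would then descend into the `meta` box hierarchy to find image items and metadata items; this sketch only shows the outermost framing.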

Metadata normalization and record construction. Extracting metadata from GIMI is one thing. Turning it into records that other systems can actually use is another. We're aligning GIMI metadata with established schemas and models so that the data doesn't stop at the format boundary; it flows into discovery and integration workflows downstream.
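One concrete target for that alignment is a STAC Item. The sketch below maps a dict of extracted fields into a minimal Item; the source-side field names (`acq_time`, `footprint`, `href`, and so on) are illustrative placeholders, not GIMI-defined keys:

```python
from datetime import datetime, timezone

def to_stac_item(meta: dict) -> dict:
    """Normalize extracted imagery metadata into a minimal STAC Item.

    The input keys are hypothetical names for whatever the extractor
    pulls out of the file and its sidecar, not spec-defined fields.
    """
    lons = [p[0] for p in meta["footprint"]]
    lats = [p[1] for p in meta["footprint"]]
    return {
        "type": "Feature",
        "stac_version": "1.0.0",
        "id": meta["item_id"],
        "geometry": {"type": "Polygon",
                     "coordinates": [meta["footprint"]]},
        "bbox": [min(lons), min(lats), max(lons), max(lats)],
        "properties": {
            "datetime": meta["acq_time"].strftime("%Y-%m-%dT%H:%M:%SZ"),
            "platform": meta.get("platform"),
        },
        "assets": {"image": {"href": meta["href"], "roles": ["data"]}},
    }

item = to_stac_item({
    "item_id": "gimi-demo-001",
    "acq_time": datetime(2026, 4, 16, tzinfo=timezone.utc),
    "footprint": [[-77.1, 38.8], [-77.0, 38.8], [-77.0, 38.9],
                  [-77.1, 38.9], [-77.1, 38.8]],
    "platform": "demo-sensor",
    "href": "s3://bucket/demo.heif",
})
print(item["bbox"])  # → [-77.1, 38.8, -77.0, 38.9]
```

The point of a mapping like this is that once the record exists, generic STAC tooling takes over; nothing downstream needs to know the pixels came from GIMI.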

Discovery through open standards. We're publishing metadata through an OGC API – Records implementation, with STAC and DCAT exposure alongside it. The goal is straightforward: GIMI data should be queryable and discoverable by systems that have never heard of GIMI. That's what makes a format viable at scale.
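From a client's perspective, that discovery surface is just an items endpoint with standard filters. The snippet below builds an OGC API – Records style query URL against a hypothetical catalog (`catalog.example.com` and the collection id are made up); it shows the shape of the request, not a specific server's API:

```python
from urllib.parse import urlencode

def records_search_url(base, collection, bbox=None, q=None, datetime_=None):
    """Build an OGC API - Records items query URL.

    Uses the common bbox / q / datetime query parameters; the base
    URL and collection name here are illustrative only.
    """
    params = {}
    if bbox:
        params["bbox"] = ",".join(str(v) for v in bbox)
    if q:
        params["q"] = q
    if datetime_:
        params["datetime"] = datetime_
    return f"{base}/collections/{collection}/items?{urlencode(params)}"

url = records_search_url(
    "https://catalog.example.com", "gimi-demo",
    bbox=[-77.1, 38.8, -77.0, 38.9], q="GIMI")
print(url)
```

A client that has never heard of GIMI can issue exactly this request; the format disappears behind the catalog interface.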

The Problem We Keep Coming Back To

The geospatial community has spent years improving how data is structured and compressed. We're spending comparatively less time on the layer between "data exists" and "data is usable": discovery, normalization, and integration into live systems.

GIMI is a good example of why that matters. It's a well-constructed format with real advantages for ISR and EO workflows. But if you can't search it, can't ingest it without custom tooling, and can't connect it to your existing catalog infrastructure, its advantages stay theoretical. The Testbed's structure — prototyping against real APIs and real libraries, with real interoperability requirements — is specifically designed to surface those gaps before they become someone else's integration problem.

That's the part we find most valuable about participating at this level. It's not just validation; it's the kind of friction that sharpens both the standard and the implementation.

What We're Watching as Testbed-21 Progresses

A few open questions we're sitting with:

  • How much of GIMI's metadata richness survives the translation into records that general-purpose discovery APIs can handle? Where are the lossy points?

  • How does performance compare to COG and GeoTIFF in real query and retrieval scenarios, and where does GIMI's flexibility create overhead that needs to be engineered around?

  • What does the IDO sidecar pattern reveal about how ontology-structured metadata and file-based imagery should relate to each other going forward?
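The first of those questions, where translation is lossy, can at least be measured mechanically. A crude sketch: given the fields an extractor produces and the field mapping a target schema supports, report what falls through. All field names here are illustrative, not GIMI- or STAC-defined:

```python
def lossy_fields(source_meta: dict, mapping: dict) -> set:
    """Return source metadata fields with no target in a schema mapping.

    A blunt instrument for spotting where metadata richness is lost
    in translation; field names are hypothetical examples.
    """
    return {k for k in source_meta if k not in mapping}

gimi_fields = {"acq_time": "...", "security_marking": "...",
               "sensor_model": "...", "platform": "..."}
stac_mapping = {"acq_time": "properties.datetime",
                "platform": "properties.platform"}
print(sorted(lossy_fields(gimi_fields, stac_mapping)))
# → ['security_marking', 'sensor_model']
```

Whether the unmapped fields matter is a judgment call per workflow, but enumerating them keeps the lossy points visible instead of silent.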

We'll be contributing to and sharing out the final report when the Testbed wraps up. If you're working on GIMI implementations, interoperability with OGC API – Records, or metadata normalization for EO data, we'd be interested in comparing notes.


Prepare Your Data For What Comes Next