Making GIMI Actionable: What We're Learning in OGC Testbed-21
Introduction
A new imagery format doesn't automatically make its data usable. That's the gap Voyager is working to close in OGC Testbed-21, and the implementation details are where things get interesting.

Date: 05.04.26
Author: Alex Bostic
Type: Insights
GEOINT Imagery Media for ISR (GIMI) solves a real problem. Unlike NITF and the other formats it's positioned to succeed, GIMI is built for modern workflows: still imagery, motion imagery, tiled imagery, and rich metadata in a single, flexible container.
But even well-designed formats run into a practical obstacle: GIMI data often arrives as fragmented file structures with highly technical metadata. Connecting, interpreting, and validating that data manually is exactly the kind of friction that keeps powerful formats from seeing real operational use.
The Core Problem We're Solving
Inside the GIMI standard, a HEIF/HEIC imagery file and its ontology-based sidecar (an RDF Turtle, or TTL, file) normally exist as two disconnected files, linked only by a shared identifier.
The sidecar carries rich descriptive metadata — geographic footprints, lineage, provenance — but in raw form it's not human-readable, and there's no built-in mechanism to make the relationship between the two files visible to an analyst or a downstream system. That's the gap Voyager's contribution to Testbed-21 is designed to close.
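The two-file relationship can be sketched in a few lines of Python. Everything here is illustrative: in a real GIMI holding the shared identifier lives inside the HEIF metadata and the TTL graph rather than in the filename, but the join an indexer has to perform is the same, because nothing in either file physically references the other.

```python
# Sketch: joining HEIF imagery files to their TTL sidecars by a shared
# identifier. Deriving the ID from the file stem is a stand-in for
# reading it out of the HEIF metadata and the TTL graph.
from pathlib import PurePath

def shared_id(path: str) -> str:
    """Derive the shared identifier (here: simply the file stem)."""
    return PurePath(path).stem

def pair_sidecars(heif_files, ttl_files):
    """Return {heif_path: ttl_path} for files that share an identifier."""
    sidecars = {shared_id(t): t for t in ttl_files}
    return {h: sidecars[shared_id(h)]
            for h in heif_files if shared_id(h) in sidecars}

pairs = pair_sidecars(
    ["imagery/scene-0042.heic", "imagery/scene-0043.heic"],
    ["metadata/scene-0042.ttl"],
)
# scene-0043 has no sidecar, so only scene-0042 ends up paired
```

The point of the sketch is that the link is purely logical: lose the identifier convention, and the relationship between imagery and metadata disappears.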

What We're Actually Building
In support of OGC's advancement of the GIMI standard and the Imagery Domain Ontology (IDO), Voyager indexes HEIF/HEIC files and establishes the relationship between those files and their TTL sidecars, with a specific focus on lineage and provenance.
But before any of that indexing happens, the first step is configuring a pipeline for GIMI content. A pipeline defines how content gets enriched during indexing — what gets represented, how it gets transformed, and what metadata gets surfaced.
This matters because not all the information relevant to a decision is stored inside the GIMI files themselves. Voyager's pipeline can pull in important context from other systems of record and surface it alongside the imagery, so analysts get a complete picture without jumping between systems unless absolutely necessary.
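Voyager's actual pipeline configuration isn't reproduced here, but the idea can be modeled as a chain of enrichment steps applied to an index record, where one step pulls context from an external system of record. The step names and the mock lookup below are hypothetical illustrations, not product internals.

```python
# Sketch of an indexing pipeline as a chain of enrichment steps over a
# record. Step names and the external lookup are hypothetical.
def add_format_tag(record):
    record["format"] = "GIMI"
    return record

def add_external_context(record, system_of_record):
    # Pull context held outside the GIMI file itself (e.g. mission info
    # from another system of record) and surface it on the same record.
    record.update(system_of_record.get(record["id"], {}))
    return record

def run_pipeline(record, steps):
    for step in steps:
        record = step(record)
    return record

mission_db = {"scene-0042": {"mission": "EO survey"}}  # mock system of record
record = run_pipeline(
    {"id": "scene-0042"},
    [add_format_tag, lambda r: add_external_context(r, mission_db)],
)
```

The design choice the sketch illustrates: enrichment is configured per content type, so GIMI-specific steps can be added without touching how other formats are indexed.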
On the lineage side: Voyager makes the relationship between imagery and metadata visible at a glance. In the Voyager interface, the TTL appears as a component file of the format on the image's detail page — so analysts can immediately see what metadata is attached without reconstructing it from raw files.
On the provenance side: analysts can see exactly which metadata file backs a given image and trace any claim back to its source directly from that detail page. No manual reconstruction. No guesswork about where a value came from.
The metadata itself — which is hard for humans to read in raw form — gets surfaced in an intuitive, understandable way. Take the geographic footprint: it lives inside the TTL as structured coordinates, but Voyager parses them during indexing and draws the bounding box directly on the map.
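The exact IDO predicates aren't reproduced here; as a minimal sketch, assume the footprint is stored as a WKT polygon literal inside the Turtle file. Extracting it and reducing it to the bounding box drawn on the map might look like:

```python
import re

# Hypothetical TTL fragment: a footprint stored as a WKT polygon literal.
# The predicate URI is an assumption, not the actual IDO vocabulary.
ttl = '''
<urn:scene-0042> <http://example.org/ido#footprint>
    "POLYGON((-77.1 38.8, -77.0 38.8, -77.0 38.9, -77.1 38.9, -77.1 38.8))" .
'''

def footprint_bbox(ttl_text):
    """Pull the WKT polygon out of the Turtle text and return
    (min_lon, min_lat, max_lon, max_lat) for drawing on a map."""
    wkt = re.search(r'POLYGON\(\(([^)]+)\)\)', ttl_text).group(1)
    coords = [tuple(map(float, pt.split())) for pt in wkt.split(",")]
    lons = [c[0] for c in coords]
    lats = [c[1] for c in coords]
    return (min(lons), min(lats), max(lons), max(lats))

bbox = footprint_bbox(ttl)
# bbox == (-77.1, 38.8, -77.0, 38.9)
```

A production indexer would use a real RDF parser rather than a regex, but the transformation is the same: structured coordinates in, a map-ready bounding box out.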

Why this matters
The result is that analysts and decision-makers can find the right data faster, see what it's connected to, and validate where it came from — without needing direct access to the underlying files at all.
Not every user needs to work with GIMI directly. What they need is the key metadata and context that lives around it: the ability to understand content holdings across formats gives analysts faster situational awareness and supports better decisions earlier in the workflow.
The second half of the story is service-oriented exposure. The same indexed content is available through an OGC API – Records endpoint as well as STAC (SpatioTemporal Asset Catalog), so external systems can query and consume it using community-based standards.
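A downstream client doesn't need to know anything about GIMI to consume that catalog. As a sketch (the collection name is a placeholder, and the catalog root is elided), a standard STAC item-search request against the indexed content is just JSON built from the usual parameters:

```python
import json

# Sketch of a STAC /search POST body a downstream system might send.
# "gimi-imagery" is a placeholder collection name; bbox, datetime, and
# limit are standard STAC item-search parameters.
search_body = {
    "collections": ["gimi-imagery"],
    "bbox": [-77.1, 38.8, -77.0, 38.9],
    "datetime": "2026-01-01T00:00:00Z/2026-04-01T00:00:00Z",
    "limit": 10,
}
payload = json.dumps(search_body)
# POST this payload to <catalog-root>/search; any STAC-aware client can
# page through the resulting items without touching GIMI internals.
```

That is the interoperability claim in concrete terms: once the content is indexed, discovery rides on community-standard APIs rather than format-specific tooling.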
The data isn't just easier to use inside Voyager — it's interoperable across the wider geospatial and non-geospatial ecosystems. And because Voyager is built on open standards and open source libraries, it can evolve alongside the industry as formats shift and legacy data accumulates. The workflow adapts to how content is actually used, rather than forcing analysts to adapt to how a format was designed, which is ultimately what increases both visibility and interoperability at scale.
The Problem We Keep Coming Back To
The geospatial community has spent years improving how data is structured and compressed. We're spending comparatively less time on the layer between "data exists" and "data is usable": discovery, normalization, and integration into live systems.
GIMI is a good example of why that matters. It's a well-constructed format with real advantages for ISR and EO workflows. But if analysts can't search it, can't tell what metadata backs an image, can't connect it to existing catalog infrastructure, those advantages stay theoretical. The Testbed's structure — prototyping against real APIs and real libraries, with real interoperability requirements — is specifically designed to surface those gaps before they become someone else's integration problem.
What We're Watching as Testbed-21 Progresses
A few open questions we're still sitting with:
How much of GIMI's metadata richness survives the translation into records that general-purpose discovery APIs can handle? Where are the lossy points?
How does performance compare to COG and GeoTIFF in real query and retrieval scenarios, and where does GIMI's flexibility create overhead that needs to be engineered around?
What does the IDO sidecar pattern reveal about how ontology-structured metadata and file-based imagery should relate to each other going forward?
What's Next
On the roadmap: content ID-based linking, video support, better thumbnail reliability, and broader support for community-built viewers.
But the most valuable thing you can tell us is: which metadata fields matter most to your workflow? That feedback shapes where this goes next.
If you're working on GIMI implementations, interoperability with OGC API – Records, or metadata normalization for EO data, we'd be interested in comparing notes.
Alex Bostic is Voyager's Principal AI Solutions Architect and AI Productization Lead.

