How we research, test, and update viewmygpx
viewmygpx is written by people who use GPX files every week and who get annoyed when a guide page is wrong. The methodology below is the discipline we apply to make sure pages on the site stay reliable — what counts as a primary source, how we test platform behavior, what the test corpus looks like, and what we do when a platform moves the button. Nothing here is theoretical: it's the actual process we follow before publishing or revising a page.
Source-of-truth hierarchy
For any factual claim on the site, we work down a fixed hierarchy until we hit a source that can answer the question:
- The format specification. For GPX, that is the GPX 1.1 schema and reference manual published at topografix.com/GPX/1/1. For KML it is the OGC standard at docs.ogc.org/is/12-007r2/12-007r2.html. For GeoJSON it is IETF RFC 7946. For coordinate datums it is the EPSG registry. For time, ISO 8601. For media types, the IANA registry. The spec is the final word on what the format does and does not allow.
- Published vendor schemas. For Garmin extensions (TrackPointExtension v1, GpxExtensions v3, the activity-extension family), we cite Garmin's own published XSDs at garmin.com/xmlschemas. When Wahoo, Suunto, COROS, or Bryton write the same fields, we treat them as following Garmin's published schema unless their own documentation says otherwise.
- The platform's own help center. For third-party platforms — Strava, Komoot, Garmin Connect, AllTrails, Ride with GPS, Gaia GPS, Google Maps, Apple Maps, OpenStreetMap — we link to the platform's own support article. If the platform says "upload a GPX file via this page" in their docs, we quote them and link to their docs.
- Hands-on testing. When the spec or the platform documentation does not answer the question (often it doesn't — edge cases like "what does Strava do with a track that has duplicate timestamps" are not in any FAQ), we run the test with a real account on a real device.
- Definitional context. Wikipedia is acceptable as a starting point for definitional context (what GPS is, what GPX stands for, who TopoGrafix is) but we don't cite it for any technical claim that has a primary source above it.
If a claim cannot be supported at any level of this hierarchy, we either rewrite it into something verifiable or we cut it. We don't publish "Strava might do X" or "some users have reported Y" — those are guesses dressed as facts.
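As a concrete illustration of working from the schema: the sketch below uses only Python's standard library to build the smallest GPX 1.1 document the schema accepts — version and creator are the only required attributes on the root — and then does the namespace-aware lookup a verification script needs. The coordinates and creator string are made-up placeholders, not anything from our corpus.

```python
import xml.etree.ElementTree as ET

GPX_NS = "http://www.topografix.com/GPX/1/1"

# Minimal GPX 1.1 document: the schema requires the version and creator
# attributes on the root element; every child shown here is optional.
minimal_gpx = f"""<?xml version="1.0" encoding="UTF-8"?>
<gpx xmlns="{GPX_NS}" version="1.1" creator="example">
  <trk>
    <trkseg>
      <trkpt lat="47.6062" lon="-122.3321"><ele>56.0</ele></trkpt>
    </trkseg>
  </trk>
</gpx>"""

root = ET.fromstring(minimal_gpx)
# Lookups must be namespace-qualified: searching for a bare "trkpt"
# in a correctly namespaced file silently finds nothing.
points = root.findall(f".//{{{GPX_NS}}}trkpt")
print(len(points), points[0].get("lat"))  # 1 47.6062
```

The namespace detail is the point: a surprising number of broken "GPX parsers" are just un-namespaced XPath queries.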
Sources we explicitly do not cite
- AI-generated content with no human verification. AI is fluent at producing plausible-looking technical claims that are wrong in ways that are hard to spot.
- Random forum posts, Reddit answers, or Stack Overflow comments as standalone evidence. They are useful as a starting point — somebody hit the same edge case — but the answer to a question about a format goes to the spec, and the answer to a question about a platform goes to the platform's docs or our own test.
- Marketing copy from the platform we're documenting. Marketing says what the platform wants to be true; documentation and testing tell us what the platform actually does.
- Other GPX viewer/converter websites. We don't cite our peers as authority on the format or on platform behavior. Each site is responsible for its own facts.
How we test platform integrations
For every "Open GPX in [platform]" page, the steps are executed for real before the page goes live. The author takes a sample GPX file from our published corpus, walks through the platform's upload flow on a real account, and notes the observed behavior — which fields survive, which are stripped, whether timestamps are preserved, whether elevation is recomputed, and whether the route renders as a track, a course, or a segment. Screenshots come from those test runs.
The configurations we test against are the most common ones our readers hit. For desktop flows: latest stable Chrome and Firefox on macOS and Windows. For mobile flows: latest stable iOS Safari and Android Chrome. Where a platform behaves differently on a less-common configuration — for instance, the Garmin Connect Web upload flow differs from the Garmin Connect mobile app — we say so explicitly and link to the platform's own variant documentation.
Behavior on premium tiers is tested with a real subscription where feasible. Where a feature is gated behind a paid plan we don't currently hold, we say so and rely on the platform's documentation rather than guessing.
The test corpus
We maintain a corpus of GPX files we use to verify content claims and regression-test the viewer, editor, and converters. The published sample set at /sample-gpx-files/ is the public face of it; the working corpus is larger. The variations cover the matrix of cases that a real upload may contain:
- GPX version. Both 1.0 (older Garmin handhelds, hand-rolled exports from older tools) and 1.1 (everything modern). 1.0 differs from 1.1 in metadata structure and namespace; both must parse cleanly.
- Schema validity. Some files declare a schema and some do not. Some declare an extension namespace they never use. Our parser tolerates all of these: declared-but-unused extension namespaces are ignored, and known extensions are accepted even when their namespace is undeclared.
- Garmin extensions. TrackPointExtension v1 (heart rate, cadence, temperature, water-temp, depth), GpxExtensions v3 (waypoint icons, postal addresses, phone numbers), and the older GpxExtensions v1 still produced by some legacy tools.
- Track structure. Single-track files (most common), multi-track files (multi-day trips, segmented hikes), files with multiple segments inside a single track (recordings paused and resumed), and pure-waypoint files (geocaching pocket queries with no track at all).
- Missing or sparse data. Files with no elevation (planning routes from web tools), files with no timestamps (planned routes, hand-edited files), files with timestamps but no time-zone information, files with extreme coordinate precision (16 decimal digits), and files with sparse trackpoints (10 m apart, 100 m apart, 1 km apart).
- Edge geographies. Files near the equator, near both poles, crossing the antimeridian (180° longitude), and on either side of the prime meridian. These cover the common bugs in coordinate math, datum handling, and great-circle distance computation.
- Datum and elevation source. Files written with WGS-84 ellipsoid heights versus mean-sea-level heights, with and without the elevation correctly labeled. The spec says GPX uses ellipsoid heights; many devices write MSL heights and call them ellipsoid. The corpus includes examples of both so we can be honest about what shows up in the wild.
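The antimeridian case above deserves a concrete illustration. The haversine sketch below (spherical Earth, not the ellipsoidal math a production geodesy library would use) handles the 180° wraparound for free because sine is periodic; it is naive longitude subtraction and bounding-box logic where the antimeridian bugs actually live.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters on a spherical-Earth model."""
    r = 6371008.8  # IUGG mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat, dlon = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# 179.9°E to 179.9°W on the equator: about 22 km across the antimeridian.
# Naive |lon2 - lon1| treats the same pair as nearly a full lap of the globe.
d = haversine_m(0.0, 179.9, 0.0, -179.9)
print(round(d / 1000, 1))  # 22.2
```

A corpus file that crosses 180° catches any distance or rendering code that forgot this in one upload.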
New content is verified against this corpus before publishing. When a new edge case shows up — usually because a reader emailed us a file the viewer mishandled — that file is reduced to a minimal reproducer and added to the corpus, and the viewer or editor is fixed against it. The corpus grows; we don't shrink it.
How we verify the format pages
The pages that explain the GPX format itself — /gpx-file-format/, /what-is-a-gpx-file/, and related pillars — are written against the GPX 1.1 schema as the primary source. Every claim about the file structure (which elements are required, which are optional, what the cardinality is, what types each child accepts) traces to the schema. We use the GPX 1.0 schema for variations specific to that older version.
Where the schema is silent or defers to convention — for instance, the time element is typed as plain xsd:dateTime, so timestamps with local offsets validate, and many tools write local time anyway — we say so explicitly, document what the tools we tested actually do, and recommend a portable choice (UTC with a Z suffix) without claiming the schema enforces it.
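The portable form we recommend fits in a few lines. This is an illustrative helper — the name gpx_time is ours, not part of any library or device firmware — showing the normalization from a local-offset timestamp to UTC with a Z suffix.

```python
from datetime import datetime, timezone, timedelta

def gpx_time(dt: datetime) -> str:
    """Render a datetime in the portable GPX form: UTC with a Z suffix."""
    if dt.tzinfo is None:
        # A naive timestamp could be any zone; guessing is how files break.
        raise ValueError("refusing to guess a zone for a naive timestamp")
    return dt.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

# A local-offset timestamp (UTC+2) normalized to the portable form.
local = datetime(2024, 6, 1, 14, 30, 0, tzinfo=timezone(timedelta(hours=2)))
print(gpx_time(local))  # 2024-06-01T12:30:00Z
```

Refusing naive timestamps, rather than assuming UTC, is the choice that keeps hand-edited files honest.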
Conversion claims (GPX↔KML, GPX↔CSV, GPX↔GeoJSON, GPX↔KMZ) are verified by round-tripping files through both directions of the conversion and comparing the original and the round-tripped output. Lossless round-trips through the converters are tested as part of the build. Where lossy conversions are unavoidable (KML adds styling we drop on the way back; CSV flattens hierarchical structure), we document what is lost.
How we handle conflicts between sources
Platform documentation sometimes lags or contradicts platform behavior. The spec sometimes contradicts what the most common tools actually do. When sources conflict:
- Spec vs platform reality: we describe both. The page says what the spec requires, then notes which common tools violate it and how. Readers need both views — the spec to write portable files, the reality to debug imports.
- Platform docs vs platform behavior: we trust our own test. If the help center says Strava preserves heart-rate data and the upload strips it, we describe what we observed, with the file we observed it on, and link to the help center for completeness.
- Two specs disagreeing: we say so. OGC KML 2.2, KML 2.3, and older Google Earth practice are not the same thing; if a claim depends on which one the reader encountered, we name the version.
How we use AI
AI tools are part of the research and drafting process. We use them to summarize long platform documentation, generate first-draft outlines, suggest topic structure, draft alt-text candidates, cross-reference our notes, and write code-review-style critiques of our own drafts. None of that work appears on the site without human verification.
We do not publish AI-generated copy unedited. Every published page is read end-to-end by a human, every technical claim is checked against the source-of-truth hierarchy, and every step in a how-to is executed on a real account before the page goes live. AI is a tool; the editorial responsibility is ours. The editorial policy goes into more detail on how we draw that line.
How we date and update pages
Pages on viewmygpx do not display a "last updated" line in the body, partly to keep the page chrome quiet and partly because platform changes often touch dozens of pages at once and we want readers to focus on the content. The XML sitemap publishes the machine-readable last-modified date that search crawlers use, and the git history of the site is the authoritative audit trail.
Triggers that cause us to revisit a page:
- A reader email reporting a step that no longer matches the platform UI. These are addressed first.
- A failed routine spot-check — every few months we re-run the steps in the most-trafficked "open in" guides to catch silent UI changes.
- A platform release note we read about in the wild. When Strava announces a routes-API change or Garmin Connect retires a feature, the affected pages get a pass.
- A spec update. The GPX 1.1 spec is essentially frozen, but adjacent specs (KML, GeoJSON, ISO 8601) do see revisions, and when they do, we revisit the relevant pages.
When we update a page, we update the linked help-center URLs at the same time — broken or redirected links to platform docs are a small but real source of friction.
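The link pass is mechanical enough to sketch. The helpers below are illustrative, not the site's actual build tooling; they assume redirect resolution happens separately (say, a HEAD request per URL), and they flag any documented link whose resolved destination has moved.

```python
import re
from urllib.parse import urlparse

URL_RE = re.compile(r'https?://[^\s"\'<>)\]]+')

def external_links(page_text: str) -> list[str]:
    """Pull every outbound URL from a page body so each can be re-checked."""
    return URL_RE.findall(page_text)

def moved(original: str, resolved: str) -> bool:
    """True when following redirects landed somewhere else entirely.

    `resolved` is the post-redirect URL, fetched elsewhere; a trailing
    slash is not a move, a changed host or path is.
    """
    o, r = urlparse(original), urlparse(resolved)
    return (o.netloc, o.path.rstrip("/")) != (r.netloc, r.path.rstrip("/"))

sample = 'See <a href="https://support.strava.com/hc/en-us">Strava help</a>.'
print(external_links(sample))  # ['https://support.strava.com/hc/en-us']
```

Treating a redirect as a failure, not a pass, is the useful part: help centers rarely 404 a moved article, they redirect it to a landing page that no longer answers the question.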
Review process
Every published page is read by a second person before it goes live. The reviewer's job is concrete: catch factual errors, catch broken or outdated links, run any "how to" steps independently on a real account, verify that the test corpus claims are reproducible, and flag writing that is unclear or buries the answer.
We don't ship a guide that nobody but the writer has verified. That is true even when the writer is the most experienced person on the topic — second eyes catch things first eyes don't.
How we handle corrections
Factual corrections are treated as bugs. When a reader emails us a correction, we review the page against the source-of-truth hierarchy, fix it if the correction is right, and ship the change immediately. The full corrections process — when we add a visible note to the page, when we silently fix a typo, what kinds of changes are considered material — is documented at /editorial-policy/.
Send corrections to hello@viewmygpx.com with the URL and what we got wrong.
What we don't do
- Paid placements. No platform pays us to recommend it. No platform writes our copy. If a vendor approaches us with a paid placement offer, the answer is no.
- Affiliate-driven recommendations. If we add affiliate links to gear in the future, those links will be clearly disclosed, limited to products we actually use, and never the reason a recommendation appears on the page. Our editorial choice comes first; affiliate revenue, if any, follows from products we would have recommended anyway.
- Comparison-table chartmanship. When we compare platforms or formats, we describe behavior on the most common configurations and link to the platform's own documentation for the variant. We don't cherry-pick the configuration that makes one option look better.
- Hidden subjective claims. When we say something is "simpler" or "more reliable" or "better", we mean it about a specific scenario, and we name the scenario. Generic best-of language does not make the cut.
Questions
If you have a question about how a specific page was researched — which spec section we relied on, which platform documentation we cited, what we tested — write to hello@viewmygpx.com with the URL and the question. We are happy to show our work.