They Thought They Were Catching Pokémon. They Were Building an AI.

In the summer of 2016, hundreds of millions of people picked up their phones and walked outside. They wandered parks, scanned storefronts, circled monuments, and pointed their cameras at everything around them, all in pursuit of a virtual Charizard.

Five hundred million people installed Pokémon Go within 60 days of launch. At its peak, the game drew around 230 million monthly active players. Even today, nearly a decade later, an estimated 50 million users remain active, with roughly 5.4 to 5.7 million playing daily as of March 2026.

What none of them knew, and what Niantic never prominently disclosed, was that the mapping was the point.

30 Billion Images. One Dataset.

Images and AR scans collected through Pokémon Go and other augmented reality apps have now produced a dataset containing more than 30 billion real-world images.

Niantic Spatial, an AI spinout formed in 2025, has turned years of mobile gaming data into what it describes as a high-precision world model of the physical environment. The company is now commercialising that work through a Visual Positioning System, or VPS, that can determine a device’s exact location to within a few centimetres using only camera input and map context.

What makes the dataset so valuable is how it was gathered. For each of the million-plus locations in the dataset, Niantic Spatial has many thousands of images taken in more or less the same place but from different angles, at different times of day, and in different weather conditions. Each image comes with detailed metadata that pinpoints exactly where in space the phone was, including which way it was facing, which direction it was moving, and how fast.
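One way to picture such a record is as an image paired with its capture-time pose metadata. The sketch below is purely illustrative: the field names and schema are assumptions for the sake of the example, not Niantic's actual data format.

```python
from dataclasses import dataclass

@dataclass
class ScanRecord:
    """One crowd-sourced image plus the pose metadata captured with it.

    All field names here are hypothetical, chosen to mirror the metadata
    described in the text: position, facing direction, motion, conditions.
    """
    image_id: str
    location_id: str      # which of the ~1M mapped places this belongs to
    lat: float            # approximate GPS fix at capture time
    lon: float
    heading_deg: float    # compass direction the camera was facing
    speed_mps: float      # how fast the phone was moving
    timestamp: float      # capture time, Unix seconds
    conditions: str       # e.g. "clear", "rain", "night"

# Many records per location, under varied conditions and angles:
records = [
    ScanRecord("img001", "loc_42", 51.5007, -0.1246, 90.0, 1.2, 1.47e9, "clear"),
    ScanRecord("img002", "loc_42", 51.5008, -0.1245, 270.0, 0.0, 1.59e9, "rain"),
]
```

It is this redundancy, the same place seen again and again under different conditions, that lets a recognition system generalise beyond any single photograph.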

No mapping company deploying camera cars could have replicated this. Staged photography produces clean, uniform images. Pokémon Go players produced something far more useful: real-world images captured in rain, at night, around construction, past parked cars, and in constantly changing street conditions, millions of times, across thousands of cities.

From Pikachu to Pizza Delivery

On March 10, 2026, Niantic Spatial announced a partnership with Coco Robotics, a startup that operates small sidewalk delivery robots for food and groceries. Coco currently has about 1,000 suitcase-sized robots deployed in Los Angeles, Chicago, Jersey City, Miami, and Helsinki, and claims to have completed over 500,000 deliveries.

The problem these robots face is a familiar one in dense cities: GPS fails. In urban environments, satellite signals bounce off glass and concrete, causing position estimates to drift by tens of metres, enough to place a delivery robot on the wrong block or the wrong side of the street.

Niantic Spatial’s VPS solves this by replacing radio signals with vision. Instead of relying on GPS, the system determines a device’s location by analysing what its camera sees and comparing it against the company’s global image database. If the system recognises the environment, it can calculate position with very high accuracy.
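The retrieval step described above can be sketched in miniature. In the toy example below, each database entry pairs an image descriptor (here just a plain feature vector; real systems use learned or SIFT-style descriptors and full geometric pose refinement) with the precise pose at which it was captured. The matching strategy, names, and threshold are illustrative assumptions, not Niantic's actual pipeline.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# (descriptor, (x_metres, y_metres, heading_deg)) for known views
database = [
    ([0.9, 0.1, 0.0], (12.40, 3.10, 90.0)),   # shopfront, facing east
    ([0.1, 0.8, 0.3], (12.45, 3.05, 270.0)),  # same corner, facing west
    ([0.0, 0.2, 0.9], (55.00, 7.20, 0.0)),    # different block
]

def localise(query_descriptor, min_score=0.8):
    """Return the stored pose of the best-matching view, or None if
    nothing in the database resembles what the camera sees."""
    best_score, best_pose = max(
        (cosine(query_descriptor, d), pose) for d, pose in database
    )
    return best_pose if best_score >= min_score else None
```

A query descriptor close to the first stored view, say `[0.85, 0.15, 0.05]`, recovers that view's pose; a descriptor unlike anything in the database returns `None`, which is why coverage (those 30 billion images) matters as much as the matching algorithm itself.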

Niantic Spatial CTO Brian McClendon put it directly: “We had a million-plus locations around the world where we can locate you precisely. We know where you’re standing within several centimetres of accuracy and, most importantly, where you’re looking.”

As Niantic Spatial CEO John Hanke explained, the connection between the game and the robots is more literal than it sounds: getting Pikachu to realistically run around and getting a delivery robot to safely move through the world turn out to be the same underlying technical problem.

The Data Collection Nobody Noticed

The dataset was not assembled by accident. It was engineered through the game’s design.

Pokémon Go required players to physically visit specific locations and interact with their surroundings through their phone cameras. Every time someone visited a PokéStop, fought at a gym, or completed a task, the game was recording visual data. The collection effort intensified in 2020 when Niantic added “Field Research” tasks, prompting players to scan real-world statues and landmarks in exchange for in-game rewards.

Players willingly scanned the physical world in exchange for in-game currency. The result was a self-reinforcing data engine that no amount of corporate investment in sensor fleets could have replicated at the same scale or speed.

Section 5.2 of Niantic’s Terms of Service grants the company broad rights over AR content uploads, stating it can use submitted data however it wishes and pass that freedom on to other entities. Players agreed to this upon installation. Whether most users understood what they were agreeing to is a different question entirely.

The Consent Problem

This is not the first time data freely contributed by users for one purpose has ended up powering something quite different. Google’s reCAPTCHA tests, the ones asking users to click on bicycles or traffic lights, have long been understood to double as training data for AI vision models. More recently, law enforcement has allegedly accessed or purchased user-generated content from the consumer mapping tool Waze.

The Pokémon Go case is arguably starker. The gap between “catching virtual creatures” and “training navigation AI for commercial robotics” is wide enough that many players would likely not have drawn the connection, even if they had read the terms carefully.

Agreeing to a terms-of-service document and understanding what you are consenting to are different things. The real question is whether “you agreed to the Terms of Service” constitutes sufficient consent when the commercial application was not foreseeable to the people generating the data.

Niantic has not suggested any plans to share VPS data with law enforcement or other third parties beyond commercial partners like Coco. But a technology that can identify exactly where a photo was taken by analysing buildings and landmarks in it carries obvious implications that extend well beyond food delivery.

The Living Map

Niantic Spatial’s ambitions do not stop at Coco Robotics. The company’s longer-term goal is to maintain a global, shared geospatial model, a “living map”, and to expose it through an API to any robot, phone, or headset that needs to know exactly where it is. As Coco’s robots and other future partners traverse sidewalks and streets, their sensors contribute fresh observations that refine and extend the map continuously.

As mapping has evolved from 2D to 3D and into dynamic digital twin simulations, what is changing is the primary consumer of those maps. Increasingly, it is machines rather than humans.

The dataset that began as a byproduct of catching virtual creatures is now the foundation of a commercial spatial intelligence platform. The players who built it moved on years ago. The data they generated is still working.

The most valuable AI training datasets in the world are not being assembled in data centres. They are being built by people who have no idea they are building them: one scan, one walk, one in-game reward at a time.


Sources: MIT Technology Review, Popular Science, TechSpot, TalkEsport, Awesome Agents, Parametric Architecture
