Posted by Sajjad Anwar • July 10, 2019
The Humanitarian OpenStreetMap Team is partnering with Development Seed, Facebook, and Microsoft to build AI-based mapping tools to empower mappers around the world. We believe this will help volunteers make the best use of their time, and improve the quality of the map where it’s most needed.
As a member of HOT and Development Seed, I’m excited to share some of our work over the last few weeks. We completed a key piece of the AI-Assist pipeline called the “ML Enabler.” The ML Enabler, as the name suggests, enables applications to take advantage of Machine Learning. It organizes and efficiently stores ML-derived map data so Tasking Manager and other tools can draw on this information via a standardized API and enhance their own functionality. You have already seen some examples of how Tasking Manager and iD would use ML-derived data in a previous blog post. With the ML Enabler, we hope to bring in more models to help project managers plan task areas better, and help mappers add new map features faster.
Why another API?
There are some key problems that the ML Enabler attempts to solve: every model exposes a different interface, so each tool would otherwise need custom integration code; predictions need to be stored so they can be retrieved efficiently for arbitrary areas; and tools like Tasking Manager need one standard way to consume ML-derived data.
The ML Enabler solves the above through a standard set of APIs and a command-line utility called the ml-enabler-cli. The API and the CLI are extensively documented in their respective repositories, but here is a quick walk-through; we will cover how Tasking Manager implements these soon.
One of the fundamental assumptions the ML Enabler makes is that a model is hosted behind an API. For example, we’ve integrated Development Seed’s Looking Glass building area predictor model. Looking Glass is packaged using TensorFlow Serving, so anyone can run this model on their computer or in the cloud using Docker. The API accepts satellite imagery tiles and responds with a prediction for each tile. The ML Enabler CLI turns any arbitrary bounding box into tiles and sends them directly to the hosted looking-glass API. The aggregator utility takes the predictions, aggregates them to the given zoom level, compares the predicted building area with OpenStreetMap, and prepares a JSON payload that can be submitted to the ML Enabler API.
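The first step in that pipeline is enumerating the imagery tiles that cover a bounding box at the model’s preferred zoom. Here is a minimal sketch of that tiling step using standard Web Mercator math (the actual implementation inside ml-enabler-cli may differ; the bounding box matches the CLI example below):

```python
import math

def bbox_to_tiles(west, south, east, north, zoom):
    """Enumerate the XYZ tile coordinates covering a bounding box."""
    def tile(lon, lat):
        # Standard Web Mercator lon/lat -> tile index conversion.
        n = 2 ** zoom
        x = int((lon + 180.0) / 360.0 * n)
        y = int((1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n)
        return x, y

    x_min, y_max = tile(west, south)   # lower latitude -> larger y index
    x_max, y_min = tile(east, north)
    return [(zoom, x, y)
            for x in range(x_min, x_max + 1)
            for y in range(y_min, y_max + 1)]

tiles = bbox_to_tiles(-77.04, 38.88, -77.01, 38.91, 18)
# Each tile image would then be fetched from the imagery provider and POSTed
# to the model endpoint (for a TensorFlow Serving model, the REST predict
# call is `POST {endpoint}:predict` with a JSON body like {"instances": [...]}).
```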
ml-enabler fetch_predictions \
--name looking_glass \
--bbox "-77.04, 38.88, -77.01, 38.91" \
--endpoint http://192.168.1.3:8501/v1/models/looking_glass \
--tile-url 'https://api.mapbox.com/v4/mapbox.satellite/{z}/{x}/{y}.jpg?access_token={token}' \
--token <token_here> \
--zoom 18 \
--outfile /tmp/looking_glass_output.json \
--errfile /tmp/looking_glass_errors.json
Command to fetch predictions from looking-glass for a bbox
The API stores predictions indexed by quadkeys. The predictor and aggregator can decide what tile size makes the most sense. For example, looking-glass predicts best at z18, but these predictions can be aggregated to z16 for ease of storage and retrieval. Each tile is converted to a quadkey before posting to the API. A key advantage of quadkeys is their binning strategy: practically, this allows us to fetch all predictions for an arbitrary bounding box extremely quickly, even for large datasets. A typical use case from Tasking Manager is to get the building area for a polygon while creating a new project, helping the project manager determine priority areas for mapping. The API already has functionality to augment the Tasking Manager project geojson with prediction data stored in the Task Annotation API.
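The quadkey binning mentioned above comes from the standard Bing Maps tile scheme: a quadkey has one base-4 digit per zoom level, and every tile’s quadkey is a prefix of its children’s quadkeys, so aggregating z18 predictions up to z16 (or fetching everything under a parent tile) reduces to simple prefix matching. An illustrative sketch (the tile coordinates here are made up):

```python
def tile_to_quadkey(z, x, y):
    """Convert XYZ tile coordinates to a Bing-style quadkey string."""
    digits = []
    for i in range(z, 0, -1):
        digit = 0
        mask = 1 << (i - 1)
        if x & mask:
            digit += 1   # bit of x contributes 1
        if y & mask:
            digit += 2   # bit of y contributes 2
        digits.append(str(digit))
    return "".join(digits)

# A z18 prediction tile...
qk18 = tile_to_quadkey(18, 74974, 100290)
# ...lands inside the z16 parent tile (x >> 2, y >> 2), whose quadkey
# is exactly the first 16 digits of the child's quadkey:
qk16 = tile_to_quadkey(16, 74974 >> 2, 100290 >> 2)
assert qk18.startswith(qk16)
```

This prefix property is what makes bounding-box retrieval fast: all predictions under a coarse tile share a common string prefix in the index.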
Future work
It’s relatively straightforward to integrate a new predictor and aggregator. At the moment, we’ve integrated looking-glass and Microsoft’s building footprints API. We hope you’ll try integrating your own models and find this API useful. In the near future, we’re looking to add geometry storage support so the ML Enabler can interface directly with tools like iD. Storing the Docker information about each model allows us to build a job management system that runs the model containers on AWS and fetches predictions automatically for new areas. In the future, this would allow us to scale models as necessary and keep the map as up to date and accurate as possible.