Mappers in Kenya Test fAIr's Training Workflow

Posted by Pauline Omagwa • Dec. 17, 2025


In December 2025, twelve OSM Kenya mappers spent two days building and testing their own feature detection GeoAI models using fAIr, HOT's open-source tool for training machine learning models on satellite and drone imagery. The workshop revealed both what's possible when communities control their own model training and what constraints they face in deployment.

Day One: Understanding the Training Workflow

Mappers started by testing existing Geospatial AI (GeoAI) models created by other community members in different locations. They mapped using these models, gave human feedback on quality and accuracy, and experimented by running different models on different imagery and the same model across different imagery types.

They then learned how to build their own GeoAI models, which included some manual mapping. Participants created their first models and training datasets by the end of day one, though the datasets weren't well annotated yet. The focus was on understanding how the training workflow operates.

Day Two: Improving Model Accuracy Through Iteration

The second day started with finalizing the GeoAI models and evaluating them qualitatively. The models had initially been trained on weak datasets, so mappers improved the quality of their datasets and retrained their models. Accuracy increased for most models thanks to the enhanced training data. This sparked independent experimentation: mappers began asking, "Can I use different model types?" They created new GeoAI models on the same dataset but with different base models, comparing results to see what improved accuracy.

Testing the Complete Workflow

The final exercise tested fAIr's performance under challenging conditions. Using a pre-created model, the group generated building data for Mapai, Mozambique, from satellite imagery, then attempted manual conflation into OpenStreetMap using the JOSM editor. The model's predictions didn't substantially improve on existing OSM data in this context.

The determining factor was imagery resolution. "The predictions were not very good quality because we were not able to use drone imagery," explained Omran Najjar, who facilitated the workshop. "We used satellite imagery; we didn't use a drone. We don't have a drone for those locations. However, wherever we had drone imagery and we localized the model, the accuracy was much better."

The exercise demonstrated that while mappers could successfully build technical skills and create quality training datasets, model performance depended on imagery quality, often a resource constraint in areas where mapping is most needed.

Mappers experimenting with GeoAI models with Omran Najjar during hands-on sessions in Nairobi. Photo: Pauline Omagwa

What Mappers Planned to Apply

Despite these limitations, all twelve participants indicated in a post-event survey that they would start using fAIr immediately or within months. Their intended applications:

- Disaster response mapping where speed is critical
- Validation and quality assurance of existing databases
- Large-area mapping projects
- Reducing time spent on repetitive building digitization

"This would help minimize my mapping time, enabling me to contribute to more projects," one mapper noted. Self-reported understanding of the tool averaged 4 out of 5, indicating a strong grasp of the training workflow.

![Mappers testing fAIr’s GeoAI models on satellite/drone imagery. Photo: Pauline Omagwa](https://swoon-hotosm-staging.s3.us-west-1.amazonaws.com/images/IMG_3904_1.original.jpg)

Next Step: MapSwipe Integration

The workshop previewed fAIr's upcoming MapSwipe integration, part of the 2026 roadmap. This will address a key workflow gap: how to move predictions from fAIr into OpenStreetMap through distributed community validation rather than direct import. A follow-up workshop is planned for next year to test this integrated validation workflow once the MapSwipe integration is ready.

What the Training Workflow Revealed

The Nairobi workshop demonstrated that mappers could successfully learn custom model training: creating training datasets, building GeoAI models, and iteratively improving accuracy through dataset refinement. The experimentation phase showed that, given the workflow tools, communities would independently test variations and conduct their own research into what improved results.

But the workshop also clarified deployment constraints. Model performance depended fundamentally on imagery quality. As one participant identified, "Scaling models trained using high-resolution imagery to map areas with low-resolution images" remained a key challenge. Communities could master the training workflow, but effectiveness depended on drone imagery availability, a resource that often didn't exist where mapping was most needed.

