Some of you may remember that earlier this year we conducted an experiment to compare traditional mapping with AI-assisted mapping. Below is our summary of findings, along with the full report for those who may be interested. We hope this experiment will start a conversation about how we can ethically and responsibly introduce AI-augmented mapping workflows into HOT’s work in 2022.
Humanitarian OpenStreetMap Team, a comparison of traditional digitizing of building features in OpenStreetMap with machine learning-assisted building digitization
- Although most participants were new to AI-assisted mapping, the majority were open to it and likely to integrate it into their workflows.
- For beginner mappers, AI-assisted mapping drastically increased mapping speed, but had no significant effect on data quality.
- For advanced mappers, after an initial slow-down, AI-assisted mapping enabled more efficient mapping without impacting quality.
- Open models offer significant potential impact and value for humanitarian response.
- More data created through AI-assisted mapping may exacerbate the ‘validation bottleneck’.
In the last 10 years, the use of AI/ML in the geospatial sector has boomed. Private sector, academic, and nonprofit organizations alike have been investing significant thought, time, and resources into exploring and testing how AI/ML can augment and amplify current GIS workflows.
Unfortunately for the open mapping community, a ‘go fast and break things’ approach has done exactly that, often coming at significant cost to the project and the community. As a result, open mapping communities are reluctant to allow unchecked AI/ML to roam free in the world that is OSM - created and crafted by countless hours of dedicated human mapping.
As the future approaches, so too does the intersection of mapmakers and AI-augmented mapmaking. With new AI models and datasets being generated daily, the pressure builds to find a middle ground where AI can assist, augment, and amplify dedicated mapmakers in an ethical and responsible way that protects the quality, integrity, and value of the map. Our experiment set out to leverage collective intelligence to seek a point of convergence, rather than collision.
By understanding key concerns of the community and carefully integrating them into experiment design, we explored an agreed set of assumptions that could be objectively tested. Stakeholders, users, contributors, technologists and map makers came together with the joint intention of finding a path forward, collectively.
We learned that AI can assist and amplify the efforts of mappers to produce more map data. However, this comes with a condition: AI-assistance amplifies the speed of map data creation, but does not significantly improve data quality (nor does it worsen it).
Amplifying the efforts of an early-journey mapper who has yet to learn the importance of map data quality will obviously create more data, but at beginner levels of data quality. This increases the workload of human data validators, so AI assistance should be carefully integrated alongside data quality education.
For advanced mappers, we learned that new tools initially cost time, but not quality. Advanced mappers who have spent years refining their craft have to redirect well-formed habits. However, they understand the importance of data quality and therefore prioritise producing quality data even if it means taking more time. They are less likely to fall for the temptation of accepting lower-quality machine predictions for the sake of speed. For advanced mappers, mapping takes time and attention, two things they were not immediately willing to defer to the machine.
Through the creation of an open model for gap detection/completeness, 510/Netherlands Red Cross demonstrated that AI can be accessible to all, especially during times of disaster. This was demonstrated in practice when an open model developed for this experiment was used during the response to a typhoon in the Philippines to predict its impact on the local population.
Our acceptability survey showed us that people are open. Open to trying out something new and open to adopting new ways of working. Open to experimenting, exploring, and understanding how we can go slow and get it right. Together we benefited from collective and community intelligence, which allows our community to test perceptions and assumptions that have often been held but rarely examined, and to take the results forward. This allows all actors in the community to find an AI-assisted road forward, together.
For the full design, methodology, and results, you can read the full study report here >
The full NESTA Collective Intelligence Report can be found here >
Finally, our appreciation goes both to NESTA for the grant that funded this study and to all the volunteers who participated in our study mapathons!