Last week I posted an update from Robert Soden about a research project that HOT is conducting on assessing levels of damage after a disaster. Today, as part of a complementary initiative, we’re launching an experimental campaign to crowdsource mapping of the actual extent of impacted areas. Join us!
This research project is funded by the European Space Agency (ESA) as part of Crowd4Sat, an initiative led by Imperative Space. HOT is partnering with the International Institute for Applied Systems Analysis (IIASA), based in Austria, to implement a pilot test using satellite imagery acquired after Hurricane Matthew, a devastating tropical cyclone that hit large areas of Haiti in October 2016. The pre- and post-event imagery for this experiment was kindly provided by DigitalGlobe through its Open Data Program.
As Robert pointed out, rapidly assessing damage is critical to guide immediate response and to support recovery. With our partners at IIASA we want to further explore how quickly a crowdsourced approach, through a mobile phone application, can provide that critical “damage footprint” snapshot within 24 to 48 hours after a disaster. The output of this microtasking activity is a GIS polygon layer that can be used directly in the HOT Tasking Manager to guide urgent pre-event baseline mapping of affected areas. Provided that clear imagery is captured and made available immediately after a disaster, this approach will allow us to quickly prioritize mapping in OSM. It is very similar to the MapSwipe workflow for narrowing down areas to map in the Tasking Manager, but applied to a rapid response scenario.
We expect that, once operational, this approach will not only work in tandem with the Tasking Manager, but also automatically feed the same damage extent area into the tools for estimating levels of damage that we described last week, which HOT is developing with Stanford University and other partners.
We are truly excited to be collaborating with world-class research groups on these projects, and we look forward to applying the results in practice to support our partners in the field. Please join us in this first experiment and head over to the Picture Pile application developed by IIASA. For those of you familiar with MapSwipe, this app is built on the same idea of crowdsourcing tasks through simple swiping or tapping on a mobile phone screen. Feel free to contribute for as long as you like, and please share any feedback in the HOT Slack #general channel.
Here are some quick steps to get started:
- Go to http://geo-wiki.org/games/picturepile
- Sign up with an account or contribute as a guest (box in the upper right corner)
- If you signed up with an account, you can also choose to use one of the two mobile apps available (Android or iOS)
- Complete the quick training set to get familiar with the app and the task
- Start sorting pictures in the first set available for Hurricane Matthew
Picture Pile is a cross-platform application designed as a generic, flexible tool for ingesting satellite imagery for rapid classification by volunteers. Originally developed to crowdsource deforestation mapping from very high-resolution satellite imagery, it has since been adapted to support rapid post-disaster damage mapping.
The application presents simple microtasks: volunteers are shown satellite images and asked a simple yes/no question. A “before” disaster image is displayed next to an “after” disaster image, and the volunteer is asked to assess whether there is any visible, detectable damage to building structures. Users then classify the pair by swiping right (yes), left (no), or down (maybe), completing each microtask in seconds.
Picture Pile, like many other microtasking applications, aims to deliver rapid analysis of massive amounts of data in a short time. Having a single image analyst inspect and map all impacted areas after a large natural disaster could take a long time and is prone to interpretation errors. By splitting imagery into small tiles and spreading the effort across many volunteers, the same area can be assessed in a fraction of the time, even with repeated assessments (a minimum of three) by multiple participants.
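To illustrate the redundancy idea, here is a minimal sketch of how repeated yes/no/maybe assessments per tile might be merged by majority vote into a damage-footprint polygon layer. This is not the actual Picture Pile backend; the function names, the strict-majority rule, and the GeoJSON field names are assumptions for illustration only.

```python
from collections import Counter

MIN_VOTES = 3  # minimum repeated assessments per tile, as described above


def aggregate_votes(votes):
    """votes: dict mapping tile_id -> list of 'yes'/'no'/'maybe' swipes.
    Returns tile_id -> majority label, only for tiles with enough votes."""
    results = {}
    for tile_id, labels in votes.items():
        if len(labels) < MIN_VOTES:
            continue  # not enough redundancy yet; keep the tile in the queue
        label, count = Counter(labels).most_common(1)[0]
        # Require a strict majority; otherwise flag the tile for review.
        results[tile_id] = label if count > len(labels) / 2 else "review"
    return results


def damage_footprint(results, tile_bounds):
    """Build a minimal GeoJSON FeatureCollection of tiles voted 'yes',
    usable as a polygon layer (e.g. to prioritize Tasking Manager projects).
    tile_bounds: tile_id -> (west, south, east, north) in lon/lat degrees."""
    features = []
    for tile_id, label in results.items():
        if label != "yes":
            continue
        west, south, east, north = tile_bounds[tile_id]
        features.append({
            "type": "Feature",
            "properties": {"tile": tile_id, "damage": "yes"},
            "geometry": {
                "type": "Polygon",
                # Closed ring: first and last coordinates repeat.
                "coordinates": [[[west, south], [east, south],
                                 [east, north], [west, north],
                                 [west, south]]],
            },
        })
    return {"type": "FeatureCollection", "features": features}
```

For example, a tile with swipes `["yes", "yes", "maybe"]` resolves to `"yes"`, while a tile with only one assessment is held back until more volunteers have seen it.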
This approach could be further scaled by creating an interoperable tasking language that works across different microtasking applications. Part of this research is also focused on exploring how to define such a common language, along with standard formats, for the various applications that HOT and partners use. If you are interested in the topic, please get in touch: we want to hear your ideas!