🤖 AI Summary
To address the high cost and low efficiency of expert annotation in flood extent mapping, this paper presents FloodTrace, a web-based application that enables non-expert users to efficiently annotate flood inundation areas for machine learning. Its two key contributions are: (1) bringing topological image segmentation tools to the browser; and (2) an aggregation and correction framework that combines crowdsourced annotations into consistent training labels. In a user study in which 266 graduate students annotated high-resolution aerial imagery of Hurricane Matthew, the median annotation time per task was less than half that of the state-of-the-art method, and deep learning flood detection models trained on the crowdsourced data matched the performance of models trained on fully expert-labeled data while requiring only a fraction of the expert's time. The result is an accurate, low-cost, and scalable approach to remote sensing–based flood mapping.
📝 Abstract
Mapping the extent of flood events is a necessary and important aspect of disaster management. In recent years, deep learning methods have evolved into an effective tool to quickly label high-resolution imagery and provide the necessary flood extent mappings. These methods, though, require large amounts of annotated training data to produce models that are accurate and robust to new flooded imagery. In this work, we present FloodTrace, a web-based application that enables effective crowdsourcing of flooded-region annotation for machine learning applications. To create this application, we conducted extensive interviews with domain experts to produce a set of formal requirements. Our work brings topological segmentation tools to the web and greatly improves annotation efficiency compared to the state-of-the-art. The user-friendliness of our solution allows researchers to outsource annotations to non-experts and use them to produce training data of quality equal to fully expert-labeled data. We conducted a user study to confirm the effectiveness of our application, in which 266 graduate students annotated high-resolution aerial imagery from Hurricane Matthew in North Carolina. Experimental results show the efficiency benefits of our application for untrained users, with median annotation time less than half that of the state-of-the-art annotation method. In addition, using our aggregation and correction framework, flood detection models trained on crowdsourced annotations achieved performance equal to that of models trained on fully expert-labeled annotations, while requiring a fraction of the time on the part of the expert.
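The abstract does not detail how crowdsourced annotations are aggregated before expert correction. As an illustrative sketch only (the function names and the choice of pixel-wise majority voting are assumptions, not the paper's documented method), a minimal baseline combines binary flood masks by per-pixel vote and flags high-disagreement pixels for expert review:

```python
import numpy as np

def aggregate_annotations(masks, threshold=0.5):
    """Combine binary flood masks by pixel-wise majority vote.

    masks: iterable of same-shape 2-D arrays/lists (1 = flooded, 0 = dry).
    threshold: fraction of annotators that must mark a pixel as flooded.
    Returns a uint8 consensus mask.
    """
    stack = np.stack([np.asarray(m, dtype=float) for m in masks])
    agreement = stack.mean(axis=0)          # per-pixel fraction voting "flooded"
    return (agreement >= threshold).astype(np.uint8)

def disagreement_map(masks):
    """Per-pixel annotator disagreement: 0.0 = unanimous, 0.5 = evenly split.

    Pixels with high disagreement can be routed to an expert for
    interactive correction instead of reviewing the whole image.
    """
    stack = np.stack([np.asarray(m, dtype=float) for m in masks])
    p = stack.mean(axis=0)
    return np.minimum(p, 1.0 - p)
```

For example, with three annotators whose masks agree on two pixels and split 2-to-1 on the others, the consensus keeps only pixels a majority marked as flooded, and the disagreement map highlights the contested pixels for expert correction.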