Our flood maps draw on multiple optical and radar satellites and are globally developed and locally optimized.
Cloud to Street produces flood layers from a variety of sensors with different imaging mechanisms and orbital characteristics. Currently, no single public satellite can provide informative imagery at the daily or hourly cadence needed to capture flood evolution, which leads to inevitable gaps in space and time in remote sensing-based observations from any one sensor. To fill these observation gaps spatiotemporally, Cloud to Street is developing and testing algorithms that fuse information from multiple sources, including satellite remote sensing, topography, and historical flood information.
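One simple way to fuse observations from sensors acquired at different times is a per-pixel weighted average that down-weights older acquisitions. The sketch below is illustrative only, not Cloud to Street's actual fusion algorithm; the sensor weights, half-life, and grids are all assumed values.

```python
import math

# Hypothetical per-sensor observations: (days since acquisition,
# reliability weight, grid of per-pixel flood probabilities).
# All values are illustrative, not a real fusion configuration.
observations = [
    (0.5, 0.9, [[0.1, 0.8], [0.9, 0.2]]),  # e.g. a recent radar pass
    (2.0, 0.7, [[0.2, 0.7], [0.8, 0.1]]),  # e.g. an older optical pass
]

HALF_LIFE_DAYS = 3.0  # assumed decay: older scenes count for less


def fuse(observations):
    """Per-pixel weighted average, down-weighting stale acquisitions."""
    rows, cols = len(observations[0][2]), len(observations[0][2][0])
    fused = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            num = den = 0.0
            for age, weight, grid in observations:
                # exponential recency decay with the assumed half-life
                w = weight * math.exp(-math.log(2) * age / HALF_LIFE_DAYS)
                num += w * grid[r][c]
                den += w
            fused[r][c] = num / den
    return fused


fused = fuse(observations)
```

In practice the fusion would also incorporate topography and historical flood priors; the recency weighting shown here is just the simplest time-gap-filling component.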
Cloud to Street is committed to building the most accurate flood maps that modern science allows. As machine learning applications to remote sensing grow more common in the literature and, in many cases, outperform traditional remote sensing approaches, we have begun developing machine-learning-based flood detection algorithms.
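To make the idea concrete, the sketch below trains the simplest possible "learned" water detector: a decision stump that picks an NDWI (Normalized Difference Water Index) threshold maximizing training accuracy. The training samples are made up, and a production system would use far richer features and models; this only illustrates the learn-from-labeled-pixels pattern.

```python
def ndwi(green, nir):
    """Normalized Difference Water Index: water pixels score high
    because water reflects green light and absorbs near-infrared."""
    return (green - nir) / (green + nir)


# Toy labeled pixels: (green reflectance, NIR reflectance, is_water).
# Values are illustrative; real training data would come from
# hand-labeled satellite scenes.
samples = [
    (0.30, 0.05, 1), (0.28, 0.07, 1), (0.25, 0.04, 1),  # water: low NIR
    (0.10, 0.40, 0), (0.12, 0.35, 0), (0.08, 0.45, 0),  # land: high NIR
]


def train_stump(samples):
    """Pick the NDWI threshold maximizing training accuracy."""
    scores = sorted(ndwi(g, n) for g, n, _ in samples)
    best_t, best_acc = 0.0, -1.0
    # candidate thresholds halfway between consecutive scores
    for lo, hi in zip(scores, scores[1:]):
        t = (lo + hi) / 2
        acc = sum((ndwi(g, n) > t) == bool(y)
                  for g, n, y in samples) / len(samples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t


threshold = train_stump(samples)
predict = lambda g, n: ndwi(g, n) > threshold  # water if True
```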
Historical flood extent maps can be generated from the Landsat, MODIS, and Sentinel-2 records. While the satellite record is not long enough to statistically determine long return periods (50-500 years) on its own, we can combine it with other data sources to map real events that represent specific return periods.
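The record-length limitation can be seen with a standard empirical return-period calculation. Using the Weibull plotting position, the largest event in an n-year record can only be assigned a return period of about n + 1 years, which is why 50-500 year events require additional data sources. The annual maxima below are invented for illustration.

```python
# Toy series of annual maximum flood extents (km^2), as might be
# derived from a satellite record. Values are made up for illustration.
annual_max_extent = [120.0, 340.0, 95.0, 510.0, 210.0, 180.0, 430.0, 260.0]


def empirical_return_periods(annual_maxima):
    """Weibull plotting position: T = (n + 1) / rank, where rank 1 is
    the largest observed event. An n-year record therefore caps the
    resolvable return period at roughly n + 1 years."""
    n = len(annual_maxima)
    ranked = sorted(annual_maxima, reverse=True)
    return [(value, (n + 1) / rank)
            for rank, value in enumerate(ranked, start=1)]


for extent, T in empirical_return_periods(annual_max_extent):
    print(f"{extent:6.1f} km^2  ~ {T:4.1f}-year event")
```

With eight years of data, the largest mapped flood here is at best a ~9-year event; statistically extrapolating to a 100-year event would require fitting an extreme-value distribution and, more importantly, longer or supplementary records.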
After a flood layer is produced, it is overlaid with cropland, road, population, and critical-asset layers. Critical assets that the project partner has requested for monitoring (e.g. schools, refugee camps, hospitals) are intersected with the flood layer using a defined buffer to determine which assets were flooded. These, too, are reported at the administrative level, as the number of assets impacted per administrative unit. All impacts are displayed in maps or dashboards as Cloud Optimized GeoTIFFs (COGs) or GeoJSONs, and administrative impact summaries are reported in CSVs, tables, and GeoJSONs.
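The buffer-intersect-and-summarize step can be sketched in miniature as follows. This is a stand-in for a proper vector buffer and spatial join (e.g. with GeoPandas); the asset records, coordinates, buffer distance, and district names are all hypothetical.

```python
from math import hypot
from collections import Counter

# Illustrative inputs: flooded pixel centers (x, y) in projected metres,
# and critical assets tagged with an administrative unit. Not a real schema.
flooded_pixels = [(100.0, 100.0), (130.0, 100.0), (500.0, 500.0)]
assets = [
    {"name": "school_A",   "xy": (120.0, 110.0), "admin": "District 1"},
    {"name": "hospital_B", "xy": (480.0, 505.0), "admin": "District 2"},
    {"name": "school_C",   "xy": (900.0, 900.0), "admin": "District 1"},
]
BUFFER_M = 50.0  # assumed buffer distance around each asset


def flooded_assets(assets, flooded_pixels, buffer_m):
    """An asset counts as impacted if any flooded pixel falls within
    its buffer distance — a point-based stand-in for polygon
    buffer-and-intersect on real flood and asset geometries."""
    hit = []
    for a in assets:
        ax, ay = a["xy"]
        if any(hypot(ax - px, ay - py) <= buffer_m
               for px, py in flooded_pixels):
            hit.append(a)
    return hit


impacted = flooded_assets(assets, flooded_pixels, BUFFER_M)
# administrative summary: number of impacted assets per admin unit,
# which would then be exported to CSV / GeoJSON for the dashboard
per_admin = Counter(a["admin"] for a in impacted)
```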