Natural and man-made disasters lead to challenging situations for the affected communities, where comprehensive and reliable information on the nature, extent, and consequences of an event is required. Providing timely information, however, is particularly difficult following sudden disasters, such as those caused by earthquakes or industrial accidents. In those situations only partial, inaccurate, or conflicting ground-based information is typically available, creating a well-recognized potential for satellite remote sensing to fill the gap. Despite continuous technical improvements, however, currently operational, non-classified, space-based sensors may not be able to provide timely data. In addition, even high spatial resolution satellites (< 1 m) are limited in their capacity to reveal true 3D structural damage at a level of detail necessary for appropriate disaster response in urban areas. Uncalibrated oblique airborne imagery, both video and photography, is typically the first data type available after any given disaster in an urban setting, usually captured by law enforcement or news agencies. In this study we address the use of video data for systematic, quantitative, and near-real-time damage assessment, using video and auxiliary data of an industrial disaster in 2000 in Enschede, the Netherlands, and of Golcuk, Turkey, acquired after the 1999 Kocaeli (or Marmara) earthquake. We focus in particular on texture-based damage mapping using both empirical and more generic, geometric indicators. Data-specific attributes included color indices and edge characteristics, while the data-independent approach used rotation-invariant Local Binary Pattern and contrast operators (LBP/C). In an earlier step of the project, an interface was created to allow near-real-time processing of video streams and, depending on positional information encoded with the data, their combination with auxiliary data such as maps or pre-disaster image data. Here we further investigated the potential of the available imagery for 3D reconstruction of the disaster area. Correspondences between consecutive video frames were established automatically by feature point tracking and used to estimate the coordinates of terrain points as well as the camera parameters. Furthermore, we quantitatively assessed the quality of the reconstruction based on the available data. The ultimate goal of the project is to establish a versatile processing platform that supports the extraction of such information, encompassing damage mapping as well as partial 3D reconstruction and the integration of pre-event GIS and other auxiliary data.
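To make the texture-based, data-independent approach concrete, the following is a minimal sketch (not the authors' implementation) of how rotation-invariant LBP codes and the complementary local contrast measure can be extracted per sliding window of a video frame, assuming scikit-image and NumPy are available; the parameter choices (P=8, R=1, 32-pixel windows) are hypothetical and would need tuning against imagery such as the Enschede or Golcuk footage.

import numpy as np
from skimage.feature import local_binary_pattern

def lbp_contrast_features(gray, P=8, R=1, win=32):
    """Per-window LBP histograms plus mean local contrast for one grey-level frame."""
    # Rotation-invariant uniform LBP codes and the complementary local contrast
    # (rotation-invariant variance of the neighbourhood), in the spirit of LBP/C.
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    contrast = local_binary_pattern(gray, P, R, method="var")

    n_bins = P + 2  # number of distinct 'uniform' codes
    feats = []
    for r in range(0, gray.shape[0] - win + 1, win):
        for c in range(0, gray.shape[1] - win + 1, win):
            block = lbp[r:r + win, c:c + win]
            hist, _ = np.histogram(block, bins=n_bins, range=(0, n_bins), density=True)
            feats.append(np.concatenate([hist, [np.nanmean(contrast[r:r + win, c:c + win])]]))
    return np.asarray(feats)

The resulting per-window feature vectors (LBP histogram plus mean contrast) could then be fed to any classifier that separates damaged from intact texture; the classifier itself is outside the scope of this sketch.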
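The 3D reconstruction step can likewise be illustrated with a hedged sketch, again not the authors' pipeline: feature points are tracked between two consecutive frames with a KLT tracker, the relative camera pose is recovered from the essential matrix, and the tracked points are triangulated into up-to-scale terrain coordinates. It assumes OpenCV, and the intrinsic matrix K is a placeholder, since the original video was uncalibrated and self-calibration would be required in practice.

import numpy as np
import cv2

def track_and_reconstruct(frame_prev, frame_next, K):
    """Track points across two frames, recover relative pose, triangulate 3D points."""
    g0 = cv2.cvtColor(frame_prev, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(frame_next, cv2.COLOR_BGR2GRAY)

    # Detect corner features in the first frame and track them with pyramidal Lucas-Kanade.
    p0 = cv2.goodFeaturesToTrack(g0, maxCorners=500, qualityLevel=0.01, minDistance=7)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(g0, g1, p0, None)
    ok = status.ravel() == 1
    pts0 = p0[ok].reshape(-1, 2)
    pts1 = p1[ok].reshape(-1, 2)

    # Relative camera pose from the essential matrix; RANSAC rejects bad tracks.
    E, _ = cv2.findEssentialMat(pts0, pts1, K, method=cv2.RANSAC, prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts0, pts1, K)

    # Triangulate the tracked points to obtain (up-to-scale) 3D terrain coordinates.
    P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P1 = K @ np.hstack([R, t])
    X = cv2.triangulatePoints(P0, P1, pts0.T, pts1.T)
    return R, t, (X[:3] / X[3]).T

Chaining such two-frame estimates over a longer video sequence, followed by bundle adjustment, is one common route to the kind of partial scene reconstruction and camera parameter estimation described above.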