Flood Risk Modeling Aided by Machine Learning Techniques
Using a Treed Gaussian Process for a Case Study in Charleston, South Carolina
Abstract
Compound floods, which can be attributed to different drivers (pluvial, fluvial, surge, tide, and waves), generate a larger flood hazard when drivers co-occur than when they occur in isolation. Current compound flood risk assessments suffer from a curse of dimensionality, in which a large number of events must be numerically simulated to understand how risk responds to the drivers. This research develops a methodology that improves the quantification of compound flood risk by using a Treed Gaussian Process (TGP), applied to the case study of Charleston, South Carolina. A TGP can actively learn the response of damages to the drivers, thereby reducing the number of events that need to be simulated. Compared with a state-of-the-art approach, the method reduces the computational cost by a factor of 4, improves the root mean square error by a factor of 8, and improves the estimate of Expected Annual Damages (EAD) by a factor of 20. This reduction in computational cost allows the inclusion of random variables that are normally assumed constant, such as the duration and time lag of drivers. A sensitivity analysis demonstrates that these variables produce a statistically significant difference in the estimate of EAD, increasing it from 172 to 219 million USD. The research also shows that the combination of drivers leading to extreme damage changes when these additional random variables are included, although surge is always found to be dominant. By applying the TGP to multiple outputs, the research demonstrates that the method is not limited to this case study, showing that a TGP can be incorporated into current flood risk assessments.
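To illustrate the idea of actively learning the damage response to drivers and then estimating EAD from the surrogate, the sketch below uses a plain Gaussian Process (via scikit-learn) as a stand-in for the full Treed Gaussian Process described in the thesis. The simulator `run_flood_model`, the two drivers used, their ranges, and the sampling distributions are illustrative assumptions only, not taken from the thesis.

```python
# Minimal sketch (not the thesis implementation): active learning of a damage
# surrogate, using a plain GP in place of a Treed GP.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def run_flood_model(x):
    """Placeholder for an expensive hydrodynamic + damage simulation.
    x = (surge level [m], rainfall depth [mm]); returns damage [M USD]."""
    surge, rain = x
    return 50.0 * max(surge - 1.0, 0.0) ** 2 + 0.3 * rain

# Initial design: a small set of simulated events.
X = rng.uniform([0.0, 0.0], [3.0, 200.0], size=(10, 2))
y = np.array([run_flood_model(x) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

# Active learning: repeatedly simulate the candidate event where the
# surrogate is most uncertain, instead of simulating every combination.
candidates = rng.uniform([0.0, 0.0], [3.0, 200.0], size=(2000, 2))
for _ in range(20):
    gp.fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    x_new = candidates[np.argmax(std)]
    X = np.vstack([X, x_new])
    y = np.append(y, run_flood_model(x_new))

# EAD estimate: Monte Carlo average of surrogate predictions over
# (assumed) annual driver samples.
samples = rng.uniform([0.0, 0.0], [3.0, 200.0], size=(50_000, 2))
ead = gp.predict(samples).mean()
print(f"Surrogate-based EAD estimate: {ead:.1f} M USD")
```

A treed formulation partitions the driver space and fits a separate GP in each region, which better captures the non-stationary damage response (e.g. no damage below flooding thresholds, steep growth beyond them) than the single stationary GP used in this sketch.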