News Link
A storm in 1862 dropped 36" of rain on LA over a 45-day period. A similar storm today would overtop and collapse the Whittier Narrows Dam and cause an estimated $750 billion in damages downstream.
This story highlights the problems with the traditional statistical methods used to forecast rare storms. As a young engineer in the 1970s, I worked on the National Flood Insurance Program in Minnesota, running the first hydrologic models to determine the 100-year flood plain. At that time, many of the small towns in Minnesota had only 30 or 40 years of weather data from which to forecast what the 100-year flood would be. It was done with totally linear thinking, using a standard statistical analysis that assumed the 30 or 40 years' worth of data was typical of the past 500 years and could be used to forecast the next 500 years.

Not only is that assumption wrong, but the entire presentation of the average condition (called "normal") and of the "once in 100 year flood" is completely misleading and incorrect. The average, or normal, is a rainfall or temperature that is itself only seen about 1 time in 100, with roughly 50% of years coming in lower and 50% higher. The bell curve shows the probability of a rainfall event expressed as the percent chance it will occur in any given year. The 100-year flood is the flood that has a 1% chance of happening next year, not the flood that happens once every 100 years. In rare circumstances the "100-year flood" can happen 2 years in a row, or it may not happen for 200 years.

The only way to determine long-term hydrologic cycles is to study tree-ring data correlated with river levels.
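For anyone who wants to see what the "1% chance in any given year" framing actually implies, here is a minimal sketch of the arithmetic. The 1% figure and the record lengths are illustrative assumptions (the textbook definition of a 100-year event and round-number horizons), not values from any particular gauge record, and the calculation assumes each year is independent.

```python
# Minimal sketch: what a 1% annual exceedance probability implies,
# assuming independent years. All numbers here are illustrative.

p = 0.01  # annual chance of the "100-year" flood

# Chance of seeing at least one such flood within a given span of years
for years in (30, 100, 200):
    at_least_one = 1 - (1 - p) ** years
    print(f"At least one '100-year' flood in {years} years: {at_least_one:.1%}")

# Chance it happens two years in a row
print(f"Two consecutive years: {p * p:.2%}")

# Chance a full 100-year record contains no such flood at all
print(f"No '100-year' flood in 100 years: {(1 - p) ** 100:.1%}")
```

Under those assumptions, a "100-year" flood has roughly a 26% chance of showing up in a 30-year record and about a 37% chance of not appearing at all in 100 years, which is why back-to-back occurrences and 200-year gaps are both entirely possible.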
This same problem of linear thinking plagues attempts to predict global warming and is one of the reasons that models predicting temperature changes since the 1990s have not been accurate.