Editor’s Note: New science and unexpected events like the 2011 M9.0 Tohoku earthquake can challenge established consensus views. This article is an update of one originally published in 2009, in which Dr. Guin discusses why it is so important for scientists and engineers at Verisk to determine whether competing approaches are credible and how much weight to assign to emerging science before a consensus is reached.
Today's catastrophe models are more scientifically robust than ever before. Our understanding of the meteorology and geology of natural catastrophes is continually improving, as is the sophistication of the models we use to represent them.
For the layperson, there is sometimes a tendency to regard every new "discovery" or finding in the latest published paper as inviolate fact. In reality, there is rarely a final word. Rather, science is a dynamic process in which researchers not only make new discoveries but also re-examine earlier knowledge and try to improve, build upon, extend, or, on rare occasions, even reject it. For Verisk, a critical decision is when to incorporate new scientific findings, as we continually balance the desire to keep our clients informed and our models updated against the need to minimize disruption to clients’ processes and workflows.
Reanalyzing the Past
In the natural catastrophe realm, the reanalysis of old data using new techniques may give scientists a clearer picture of precisely how past events (e.g., hurricanes, earthquakes, or windstorms) unfolded. Similarly, integrating new data on an event or phenomenon with the old can enable a more complete picture to be drawn. Armed with improved estimates of the frequency and severity of historical events, catastrophe modelers are better equipped to assess the probability of such events occurring in the future.
The Working Group on California Earthquake Probabilities, for example, has been systematically reanalyzing historical earthquakes that occurred in the San Francisco Bay Area to develop a uniform and internally consistent catalog for subsequent probabilistic analyses. The project uses modern analytical algorithms to redetermine the location and magnitude (including formal uncertainty estimates) of all regional earthquakes greater than M3.0, extending back in time as far as the data allow. Paleoseismology—the study of sediments and rocks for evidence of powerful ancient earthquakes—is now routinely used to supplement historical and instrumental records.
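As a simple illustration of what a formal uncertainty estimate can look like, the sketch below combines hypothetical magnitude estimates from several seismic stations into a single network magnitude using inverse-variance weighting. The station values and uncertainties are invented for illustration and are not drawn from the Working Group’s catalog or algorithms.

```python
import numpy as np

# Hypothetical magnitude estimates for one historical event from four
# seismic stations, each with its own standard deviation (illustrative values).
station_mags = np.array([3.4, 3.6, 3.2, 3.5])
station_sigmas = np.array([0.3, 0.2, 0.4, 0.25])

# Inverse-variance weighting: more precise estimates receive more weight.
weights = 1.0 / station_sigmas**2
network_mag = np.sum(weights * station_mags) / np.sum(weights)

# The combined estimate carries its own formal uncertainty.
network_sigma = np.sqrt(1.0 / np.sum(weights))

print(f"Network magnitude: M{network_mag:.2f} +/- {network_sigma:.2f}")
```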
UCERF3
Reanalysis is also at the heart of the new earthquake forecast model for California—the third Uniform California Earthquake Rupture Forecast (UCERF3)—developed by a multi-disciplinary collaboration of leading experts in 2014. Compared to its predecessor, it makes greater use of physical models to construct seismicity models that are consistent with the historical and instrumental earthquake catalog as well as with paleoseismic, paleotsunami, and GPS data, while also acknowledging uncertainties. UCERF3 has been a key source of data for the update to the Verisk Earthquake Model for the United States to be released in 2017.
HURDAT
Another example is a National Hurricane Center (NHC) initiative called the Atlantic hurricane database (HURDAT) Reanalysis Project, which is systematically re-evaluating archived data on all known Atlantic hurricanes since 1851. HURDAT is the world’s best-known best track data set. Best track data represent not only the “best” position (storm track) estimates for a tropical cyclone throughout its lifetime, but also the intensity (central pressure and/or wind speed) estimates at each point along the track. The HURDAT Reanalysis Project completed its review of all Category 5 hurricanes in 2016 with the 1969 Hurricane Camille. While Camille’s Saffir-Simpson Category 5 classification at landfall was unchanged, its estimated maximum wind speed at landfall was lowered from 190 mph to 175 mph. A 2004 reanalysis of Hurricane Andrew—the 1992 storm credited with giving rise to the catastrophe modeling industry—led to its elevation from Category 4 to Category 5.
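To make the idea of a best track data set concrete, the short sketch below defines a simple track-point structure (time, position, maximum sustained wind, central pressure) and reports a storm’s peak intensity. The field layout and the sample values are hypothetical; they are not the official HURDAT format or real storm data.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class BestTrackPoint:
    """One fix along a tropical cyclone's best track (illustrative layout)."""
    time: datetime
    lat: float               # degrees north
    lon: float               # degrees east (negative values are west)
    max_wind_mph: float      # maximum sustained wind estimate
    min_pressure_mb: float   # central pressure estimate

# A fictional storm's track points (not real HURDAT data).
track = [
    BestTrackPoint(datetime(2000, 9, 1, 0), 24.5, -70.0, 105, 965),
    BestTrackPoint(datetime(2000, 9, 1, 6), 25.1, -71.6, 120, 950),
    BestTrackPoint(datetime(2000, 9, 1, 12), 25.8, -73.2, 115, 955),
]

# Lifetime peak intensity: the kind of quantity a reanalysis may revise.
peak = max(track, key=lambda p: p.max_wind_mph)
print(f"Peak intensity: {peak.max_wind_mph:.0f} mph at {peak.time:%Y-%m-%d %H:%M}Z")
```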
Whether Camille’s peak intensity was 190 or 175 mph, or whether Andrew was a Category 4 or Category 5, has no effect on historical losses, and the re-evaluation of one or two storms from a historical record containing many hundreds has only a small impact on model results. But the ongoing process of reanalysis contributes materially to the development of a rich stochastic set of events informed by the historical record, which—combined with scientific expertise and robust statistical modeling techniques—is critical for the development of reliable catastrophe models.
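In broad terms, a stochastic catalog extends the historical record by sampling event frequency and severity from distributions fit to that record. The sketch below illustrates the general idea with made-up parameters (annual event counts drawn from a Poisson distribution, intensities from a lognormal); it is a simplified illustration of the technique, not Verisk’s methodology.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical historical record: annual event counts and event intensities.
historical_counts = np.array([1, 0, 2, 1, 3, 0, 1, 2, 0, 1])
historical_winds_mph = np.array([85, 100, 120, 95, 130, 110, 90, 105, 115, 140, 75])

# Fit simple frequency and severity distributions to the historical data.
annual_rate = historical_counts.mean()                # Poisson rate
log_winds = np.log(historical_winds_mph)
mu, sigma = log_winds.mean(), log_winds.std(ddof=1)   # lognormal parameters

# Simulate a 10,000-year stochastic catalog.
n_years = 10_000
counts = rng.poisson(annual_rate, size=n_years)
events_per_year = [rng.lognormal(mu, sigma, size=c) for c in counts]

# Summaries of the simulated catalog.
all_winds = np.concatenate(events_per_year)
print(f"Simulated events: {all_winds.size}, "
      f"mean wind {all_winds.mean():.0f} mph, "
      f"99th percentile {np.percentile(all_winds, 99):.0f} mph")
```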
Tohoku Earthquake
Sometimes it is very recent events that lead scientists to rethink catastrophe risk, as in the case of the massive M9.0 Tohoku earthquake in 2011. The magnitude of this quake was far greater than seismologists had thought possible in this location, and the new data acquired led to a comprehensive reappraisal of the region’s geology and earthquake risk. The Tohoku earthquake has taught us that we cannot rely solely on consensus views, for in this part of Japan the scientific consensus was wrong. Therefore, a catastrophe modeler has to consider the most recent (sometimes still unpublished) observations, the latest scientific research, as well as established consensus views, and account for the many sources of uncertainty inherent in our understanding of how nature behaves.
Assessing Catastrophe Risk in a Changing Landscape
Because catastrophe models cannot rely on extrapolating exclusively from past experience, scientists must develop models that are inevitably characterized by considerable uncertainty. Since the advent of catastrophe modeling, the focus has been on assessing risk in the current environment. Things become more complicated, however, if the current risk environment is distinctly different from what it has been historically.
Scientific debate becomes even livelier when the risk landscape itself may be changing—as in the case of the evolving phenomenon of climate change. Because natural climate variability is so large, detecting a clear signal in the occurrence of extreme events, such as hurricanes and tornadoes, that is attributable to climate change will remain a challenge—particularly when it comes to assessing risk on a regional rather than global scale. In the meantime, as our models are updated every five to six years, we continue to incorporate the most recent historical data and thereby implicitly account for any impacts of climate change that have already occurred.
Take tropical cyclone activity, for example. Whether or not tropical cyclone activity is being impacted by climate change, the two most intense storms recorded in the Pacific, Typhoon Haiyan and Hurricane Patricia, have both occurred since 2013. And since 1995, tropical cyclone activity in the Atlantic Basin has been elevated over the long-term (climatological) average. Scientists at the National Oceanic and Atmospheric Administration (NOAA) have linked this above-average activity to elevated sea surface temperatures (SSTs), which are in turn linked to the positive, or warm, phase of a naturally occurring cycle that oscillates over decades, the Atlantic Multidecadal Oscillation, or AMO. Before the 11-year U.S. major hurricane “drought” that began in 2006, it seemed reasonable to assume that because SSTs were higher, hurricane losses would be similarly elevated, and models should adjust accordingly.
However, there are significant problems with this argument. One is that within any given period of time there are a number of climate signals other than the AMO that influence Atlantic hurricane activity and that may dominate and even counter its impact. Another reason for circumspection is that scientific investigation into climatological influences on tropical cyclones has to date focused primarily on basinwide activity. Making the leap from increased hurricane activity in the Atlantic to increased landfall activity and, ultimately, to the effect on insured losses requires significant additional research before radical changes are made to the model methodology that has provided the industry with reliable results for three decades.
The AMO’s transition between positive and negative phases can be very rapid, and one can argue that there are signs it may now be trending toward a cold phase. Developing a view of risk based on below-average activity is just as fraught with uncertainty as one based on enhanced activity. This is why Verisk is engaged in research that models landfalling hurricane rates as a function of multiple climate signals.
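One common way to relate landfall frequency to climate signals is a Poisson regression of annual landfall counts on covariates such as SST or AMO indices. The sketch below shows that general technique on simulated data; the indices, counts, and coefficients are purely illustrative, and the code does not represent Verisk’s model.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(seed=0)

# Hypothetical 60-year record: two standardized climate indices (an AMO-like
# SST index and an ENSO-like index) and simulated annual landfall counts.
n_years = 60
amo_index = rng.normal(size=n_years)
enso_index = rng.normal(size=n_years)
true_rate = np.exp(0.4 + 0.3 * amo_index - 0.2 * enso_index)
landfalls = rng.poisson(true_rate)

# Poisson GLM: log of the expected annual landfall rate is linear in the signals.
X = sm.add_constant(np.column_stack([amo_index, enso_index]))
model = sm.GLM(landfalls, X, family=sm.families.Poisson()).fit()
print(model.params)  # fitted intercept and coefficients on the two signals

# Conditional estimate of the landfall rate for a given climate state.
new_state = np.array([[1.0, 1.0, -0.5]])  # [constant, AMO index, ENSO index]
print("Expected annual landfalls:", model.predict(new_state))
```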
A Measured Approach to Incorporating the Most Advanced Science
It is the job of scientists to investigate and posit theories to explain physical phenomena. Competing theories nourish scientific debate, but arriving at a well-researched and robust model can be a lengthy process. Catastrophe model users expect rigorous, state-of-the-science models from us, but on occasion it is appropriate to resist the temptation to fervently embrace the latest findings, knowing that the investigation is still in its preliminary stages. In accordance with this more measured approach, Verisk is continuing its research into the relationship between climate signals and hurricane landfalls and developing a next generation of models that will account for correlations across regions and perils.
The most important job of the scientists and engineers at Verisk is to keep abreast of the scientific literature, evaluate the latest research findings, and conduct original research of our own to determine whether competing scientific approaches are credible and how much weight to assign to them. It is our responsibility to review all views, form scientific opinions, account for uncertainty, and test the models with data. Ultimately, Verisk is committed to bringing to market not only the most advanced science, but also the most reliable models.