Allianz Re chief also examines hot spot accumulation monitoring for risk management

With urban density on the rise in areas often exposed to natural disasters – the UN estimates 70% of the world's population will live in urban areas by 2050, many of them coastal – the management of natural catastrophe risk will endure as a key topic for the insurance and reinsurance industries.

While risk management can take many forms, including preventative environmental and structural measures which the industry should play an active role in advising on and pushing for, there are also gains to be made by improving exposure data and hazard awareness.

A core risk management tool is the use of sophisticated probabilistic modeling to analyse the impact of a natural catastrophe on exposed assets. The outcome of such estimates assists in portfolio management, capital allocation and risk pricing. In the past, models were used mainly for long-term planning, but they now also play an important role in short-term risk assessment. Modeling has become an industry standard relied upon by ratings agencies to assess the “natural catastrophe exposure” of an organisation and determine how an event would impact its solvency.
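At its core, this kind of probabilistic assessment runs many simulated years of event activity against a portfolio and reads losses off the resulting distribution. The sketch below is a deliberately minimal illustration of the idea – the Poisson event frequency, exponential severities and parameter values are assumptions chosen for demonstration, not any vendor's actual model:

```python
import random

def simulate_annual_losses(n_years, event_rate, mean_loss, seed=0):
    """Toy event-loss simulation: Poisson event arrivals (via exponential
    inter-arrival times) with exponentially distributed severities."""
    rng = random.Random(seed)
    losses = []
    for _ in range(n_years):
        total = 0.0
        t = rng.expovariate(event_rate)          # time of first event
        while t < 1.0:                           # events within one year
            total += rng.expovariate(1.0 / mean_loss)
            t += rng.expovariate(event_rate)     # next inter-arrival gap
        losses.append(total)
    return losses

def pml(losses, return_period):
    """Loss level exceeded on average once every `return_period` years."""
    ranked = sorted(losses, reverse=True)
    idx = max(0, int(len(losses) / return_period) - 1)
    return ranked[idx]

annual = simulate_annual_losses(n_years=10_000, event_rate=0.5, mean_loss=100.0)
print(pml(annual, 250))  # estimated 1-in-250-year annual loss
```

Real catastrophe models replace these simple distributions with peril-specific hazard, vulnerability and financial modules, but the output – an exceedance curve from which PMLs are read – has the same shape.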

So given the ubiquity and sophistication of catastrophe modelling, why is the industry still sometimes caught unawares by the extent of losses after an event? What are the limitations of modeling? And what investments are needed?

Importance of data

Nowhere is the “garbage in, garbage out” adage more applicable than in the context of catastrophe modeling. Since model results can change substantially based on the input data, object granularity and overall accuracy of exposure information, it is only with high-quality data that reliable and useful results can be expected.

But while natural catastrophe risk models have increased in sophistication and data hunger, methods of capturing the required risk information have not always advanced at the same pace. Any factor which would theoretically influence the technical premium required for the analysis should be captured at the primary underwriting desk. If data is not collected or stored, it is not available for later risk assessment.

As the Association of British Insurers Industry Good Practice for Catastrophe Modelling points out, “Reinsurance poses specific challenges to data quality. Reinsurers often receive large amounts of data…[and] potentially hundreds of portfolios of varying levels of quality and provenance.”

Improving the quality of data for natural catastrophe risk assessment is of paramount concern to the industry and should be a focus area for investment. Large-scale projects to raise the standard of data being captured will reap dividends in the form of more useful and reliable model output. However, the focus should not be on simply increasing the amount of data collected – this may imply a level of accuracy that is not always real – it should rather be on increasing the quality of the data.

Where data quality is poor or inadequate, it is often better to run a simple analysis rather than a sophisticated one that fills the gaps with assumptions and estimates. Comparing model output with historic loss data can also give a good indication of its quality.
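As a trivial illustration of that validation step, one can compare the average annual loss implied by model output with the historical record – a ratio far from 1.0 flags a divergence worth investigating before the output is relied upon. The figures below are entirely hypothetical:

```python
def validation_ratio(modeled_losses, historical_losses):
    """Ratio of modeled average annual loss (AAL) to historical AAL."""
    modeled_aal = sum(modeled_losses) / len(modeled_losses)
    historical_aal = sum(historical_losses) / len(historical_losses)
    return modeled_aal / historical_aal

# Hypothetical annual loss figures, in millions
modeled = [12.0, 35.0, 8.0, 60.0, 15.0]
historical = [10.0, 40.0, 9.0, 55.0, 14.0]
print(round(validation_ratio(modeled, historical), 2))  # → 1.02
```

In practice such back-testing would be done per peril and per region, and over as long a historical window as the exposure data allows.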

Evolution of cat risk models

In the wake of each major natural catastrophe, scientific understanding around the event improves and developers correspondingly adjust their existing models. The models are therefore in a constant state of evolution, aided by the use of real post-event losses based on high-quality portfolio information. The more historical data available, the better the models become at predicting what will occur the next time a similar event strikes.

In 2011, when floods ravaged parts of Thailand, the region had not been on the radar of many risk and business managers. On top of the devastating floods themselves came the realisation that the affected regions and exposed assets had massively changed over recent years. Manufacturing companies had often rationalised operations to such a point that there was a global over-reliance on a handful of suppliers, and those suppliers were now out of action and underwater. The cost of business interruption ballooned and in many cases policies had included broad flood coverage. The loss amounts driven by foreign corporations’ “interests abroad” in Thailand were significant and somewhat of a surprise to the industry. The issue was once again - though not exclusively - data related.

Post 2011, the industry is acutely aware of the importance of supply chain transparency, and while enforcing risk-adequate pricing is always a balancing act between market forces and internal model requirements, there is greater awareness of the connections that need to be investigated and the dependencies that need to be accounted for.

Limitations of cat models

Arguably as important as investing in data standards is an awareness of the limitations of catastrophe models. Cat risk models can never hope to capture every variable in a natural catastrophe. The effect of previous events – a good example is a milder earthquake weakening structures before a major one – and secondary factors such as soil liquefaction during an earthquake can materially affect the toll an event takes and the losses that stem from it, and can never be completely accounted for.

The hard numbers models output are vital, but they still need to be viewed by decision makers in the context of their professional judgment. To quote the 2013 Lloyd’s Market Association Catastrophe Modelling report: “it is in understanding the limits of a model that its value can be properly achieved.”

Monitoring risk hot spots

Risk hot spots are identified, and can then be analysed, where a high hazard accumulation intersects with a high concentration of value.

At Allianz, in-house applications allow us to quickly build a picture of risk accumulation for the group in any defined location. These can be combined with our view of potential natural peril loss impacts to estimate probable maximum losses (PMLs) for modeled and non-modeled scenarios. This provides an easy and intuitive starting-point view of the natural catastrophe risk in a specific region. Visualising the exposures makes this a powerful communication tool and a basis for further analysis of the risk data and modelling with more sophisticated cat models.
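A hot spot screen of this kind can be sketched as a simple grid accumulation: sum insured values are bucketed into geographic cells, and cells above a chosen threshold are flagged for closer analysis. The portfolio, cell size and threshold below are illustrative assumptions, not Allianz's actual tooling:

```python
from collections import defaultdict

def accumulate_by_cell(exposures, cell_size=1.0):
    """Sum insured values into lat/lon grid cells to surface accumulations."""
    grid = defaultdict(float)
    for lat, lon, value in exposures:
        cell = (int(lat // cell_size), int(lon // cell_size))
        grid[cell] += value
    return dict(grid)

def hot_spots(grid, threshold):
    """Cells whose accumulated value meets or exceeds the threshold."""
    return {cell: v for cell, v in grid.items() if v >= threshold}

# Hypothetical portfolio: (latitude, longitude, sum insured in millions)
portfolio = [
    (13.7, 100.5, 40.0),   # Bangkok area
    (13.9, 100.6, 75.0),
    (35.6, 139.7, 20.0),   # Tokyo area
]
grid = accumulate_by_cell(portfolio)
print(hot_spots(grid, threshold=100.0))  # → {(13, 100): 115.0}
```

The flagged cells would then be intersected with hazard maps and fed into full cat models for the more sophisticated analysis described above.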

Amer Ahmed is chief executive of Allianz Re