SR dropped in to catch the final session of a modelling analysis workshop being run as part of the CAT Risk Management & Modelling Australasia conference, which was held in Sydney last week.
Dr Will Gardner was facilitating a discussion on the relative value of cat models, following model demonstrations by organisations such as EQECAT, AIR Worldwide and Risk Frontiers.
Dr Gardner, who has been working in the reinsurance industry and building catastrophe models since 1994, set the tone with a famous dictum from the statistician George E. P. Box: “Essentially, all models are wrong, but some are useful”.
Gardner argued that good models should be based on science and engineering as well as actual loss experience, and that they needed components to represent the ‘unknowns’. Furthermore, a realistic quantification of the error surrounding these unknowns was central to the real-world efficacy of models.
“Everyone underestimates the uncertainty because, with cat modelling, we are often setting assumptions which are based on limited data but can have significant impact on the model results, especially for extreme events,” he said. “So much so that with PML [probable maximum loss] curves, the range of error is much bigger than people think. I think it’s a worry that the whole industry doesn’t appreciate the level of error in the models.”
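Gardner's point about the range of error in PML curves can be illustrated with a small sketch (not his method, just a standard bootstrap on hypothetical data): estimating a high quantile from a few decades of loss experience produces a confidence interval far wider than the single point estimate suggests.

```python
import random

random.seed(42)

def percentile(data, q):
    """Empirical quantile via sorted-list interpolation (q in [0, 1])."""
    s = sorted(data)
    idx = q * (len(s) - 1)
    lo = int(idx)
    hi = min(lo + 1, len(s) - 1)
    frac = idx - lo
    return s[lo] * (1 - frac) + s[hi] * frac

# Hypothetical record: 40 years of annual losses from a heavy-tailed
# (lognormal) severity distribution -- stand-in parameters, not real data.
losses = [random.lognormvariate(0, 1.5) for _ in range(40)]

# Point estimate of the "1-in-20 year" loss (95th percentile).
pml_point = percentile(losses, 0.95)

# Bootstrap: resample the 40 observations with replacement many times
# and re-estimate the PML each time, to see how unstable it is.
boot = []
for _ in range(2000):
    resample = [random.choice(losses) for _ in losses]
    boot.append(percentile(resample, 0.95))

ci_low = percentile(boot, 0.025)
ci_high = percentile(boot, 0.975)
print(f"point estimate: {pml_point:.2f}")
print(f"95% bootstrap interval: [{ci_low:.2f}, {ci_high:.2f}]")
```

Run with different seeds, the interval routinely spans a multiple of the point estimate, which is exactly the under-appreciated error Gardner describes.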
Dr Gardner went on to suggest that blending models with actual historical data is generally a good way for the insurance industry to approach cat modelling, whether it be for ascertaining individual location risk, PML estimation for regulation or reinsurance purposes, or simply for portfolio optimisation.
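One simple way to blend model output with historical experience is a credibility weighting, sketched below. The square-root credibility rule and all the numbers are illustrative assumptions, not a prescription from the workshop.

```python
def blend_loss_estimate(model_est, experience_est, years_of_data,
                        full_credibility_years=25):
    """
    Credibility-weighted blend of a cat-model loss estimate with actual
    historical loss experience. The weight z on experience grows with the
    volume of data via a square-root rule (a common actuarial convention,
    assumed here for illustration).
    """
    z = min(1.0, (years_of_data / full_credibility_years) ** 0.5)
    return z * experience_est + (1 - z) * model_est

# Hypothetical numbers: the model indicates $12m expected annual loss,
# while 10 years of experience average $8m.
blended = blend_loss_estimate(12.0, 8.0, years_of_data=10)
print(f"blended estimate: ${blended:.2f}m")
```

With only 10 of the 25 years needed for full credibility, the blend sits between the two inputs, leaning toward the model; as the historical record lengthens, the weight shifts toward experience.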
And it should all be kept relatively simple. One reason is practical: “Models need to be easily understood by non-technical board members. Boards are usually really smart people, but they can’t get their heads around PMLs.”
But another, perhaps more fundamental, issue is the possibility that the complexity of some of the newest technologies might have a negative impact on the accuracy of models. “Nothing I have seen in DFA [Dynamic Financial Analysis] modelling adequately captures the uncertainty in models,” Dr Gardner said. “Additional features and numbers might help, but the added complexity can create more possibility for error.”
All those present in the room seemed to agree that, despite its limitations, cat modelling would only grow in influence in years to come. The question is, is the growing complexity of the models we use decreasing or increasing the potential for error?
Or, to put it another way – and with apologies to Gilbert and Sullivan – how much can we really rely on the very model of a modern major catastrophe?
Yours in risk,