
ST7 September 2015 q6 ii

Michael_JM86
Hi ActEd,
Please can you help me understand the following parts of the solution.
1) “The definition of a range of best estimates includes model and parameter error, but excludes process error”. Does “include” mean “make an allowance for”? If so, does it allow for model and parameter error by allowing the actuary to choose their own model and parameters, which could be incorrect? Why does it not include process error? I thought there was always some process error.
2) “Bootstrapping captures parameter error and process error, but not model error”. How are parameter and process error captured? Is model error not included because bootstrapping uses the same model over and over again, so it doesn’t consider alternatives and therefore doesn’t allow for the possibility that the model itself may be wrong? I understand the definitions of the errors but I am struggling to apply the concepts and determine whether they are present in a given situation.
Many thanks,
Michael
 
  1. “The definition of a range of best estimates includes model and parameter error, but excludes process error”
Imagine I were to sit down on my own in a private room and derive a best estimate. That would give an indication of the size of the reserves, but it would not give any indication of the uncertainty of the reserves. This is because a best estimate is only a single number, a point estimate.

Now imagine that 100 actuaries each come up with their own independent estimates. Each actuary might come up with a different estimate because they use a different model and a different set of parameters. So these 100 different results give an indication of model error and parameter error. In other words they “allow for model error and parameter error”. The wider the range, the greater the model error and parameter error.

However, since this range is derived from lots of individual best estimates, it gives us no indication of the amount of process error.
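To put some numbers on this, here is a minimal Python sketch. The models, parameter choices and figures below are entirely invented; the point is simply that the spread of point estimates reflects model and parameter choices, while saying nothing about process error.

```python
# All figures below are invented purely for illustration.
estimates = {
    "chain ladder, volume-weighted link ratios": 102.4,          # £m
    "chain ladder, simple average link ratios": 105.1,
    "Bornhuetter-Ferguson, prudent initial loss ratio": 110.8,
    "Bornhuetter-Ferguson, optimistic initial loss ratio": 97.6,
}

low, high = min(estimates.values()), max(estimates.values())
print(f"Range of best estimates: £{low:.1f}m to £{high:.1f}m")

# The spread reflects different model and parameter choices (model error and
# parameter error). Each figure is still a single point estimate, so the range
# tells us nothing about the randomness of the claims themselves (process error).
```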

  2. “Bootstrapping captures parameter error and process error, but not model error”. How are parameter and process error captured?
Broadly speaking, bootstrapping fits a model to the data, then fits another model to the same data, then another, and so on. We’ll call each of these fitted models a simulation.

Also very broadly speaking, every time it carries out a simulation it calculates a set of link ratios, and these are the parameters of the model. So the bootstrapping process gives you one set of parameters for each simulation.

In other words, not only do you end up with a range of reserve estimates, you also end up with a range of parameters. By analysing the variability in these parameters you can get an indication of the size of parameter uncertainty. This is what we mean when we say that “bootstrapping captures parameter error”.

Now let’s think about process error. Recall that this is the uncertainty arising from the inherent randomness of the underlying insured events. Well, the greater the process uncertainty the more volatility you’ll see coming through in your data. Therefore, the residuals of your model will be more volatile, and this feeds through into your bootstrapping so that one simulation will be very different to the next.

In other words, the bootstrapping process will give you a wide range of reserve estimates. This is what we mean when we say that “bootstrapping captures process error”.

Of course, it’s hard to separate how much of the range is due to process uncertainty and how much is due to parameter uncertainty.
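To make the mechanics concrete, here is a deliberately simplified bootstrap sketch in Python. The triangle and all figures are invented, and the code only resamples residuals and refits volume-weighted link ratios; a production bootstrap (eg an over-dispersed Poisson bootstrap) would include adjustments this sketch omits, such as an explicit process-variance draw for each future payment. It is an illustration of the idea above, not the Core Reading algorithm.

```python
import random
from math import sqrt

random.seed(42)

# Cumulative claims triangle (rows = accident years, columns = development
# periods). All figures are invented.
triangle = [
    [100.0, 150.0, 175.0, 180.0],
    [110.0, 168.0, 195.0],
    [120.0, 175.0],
    [130.0],
]
n = len(triangle)

def link_ratios(cum):
    """Volume-weighted chain-ladder development factors."""
    factors = []
    for j in range(n - 1):
        num = sum(row[j + 1] for row in cum if len(row) > j + 1)
        den = sum(row[j] for row in cum if len(row) > j + 1)
        factors.append(num / den)
    return factors

def total_reserve(cum, factors):
    """Project each row to ultimate and return the total reserve."""
    reserve = 0.0
    for row in cum:
        ultimate = row[-1]
        for j in range(len(row) - 1, n - 1):
            ultimate *= factors[j]
        reserve += ultimate - row[-1]
    return reserve

# 1. Fit the model once: back-fit fitted cumulative values from the latest
#    diagonal, then take differences to get fitted and actual incrementals.
factors = link_ratios(triangle)
fitted_inc, actual_inc = [], []
for row in triangle:
    fitted_cum = [row[-1]]
    for j in range(len(row) - 2, -1, -1):
        fitted_cum.insert(0, fitted_cum[0] / factors[j])
    fitted_inc.append([fitted_cum[0]] + [fitted_cum[j] - fitted_cum[j - 1]
                                         for j in range(1, len(row))])
    actual_inc.append([row[0]] + [row[j] - row[j - 1] for j in range(1, len(row))])

# Pearson-type residuals of actual vs fitted incremental claims
residuals = [(a - f) / sqrt(abs(f))
             for arow, frow in zip(actual_inc, fitted_inc)
             for a, f in zip(arow, frow)]

# 2. Bootstrap: resample residuals, rebuild a pseudo triangle, refit, re-project
boot_reserves, boot_first_factor = [], []
for _ in range(1000):
    pseudo = []
    for frow in fitted_inc:
        inc = [f + random.choice(residuals) * sqrt(abs(f)) for f in frow]
        pseudo.append([sum(inc[:j + 1]) for j in range(len(inc))])
    f_boot = link_ratios(pseudo)
    boot_first_factor.append(f_boot[0])              # a refitted parameter
    boot_reserves.append(total_reserve(pseudo, f_boot))

# The spread of the refitted link ratios indicates parameter error; the spread
# of simulated reserves reflects the volatility fed through the residuals.
# The chain-ladder model itself is never varied, so model error is not captured.
print(f"First link ratio: {min(boot_first_factor):.3f} to {max(boot_first_factor):.3f}")
print(f"Total reserve: {min(boot_reserves):.1f} to {max(boot_reserves):.1f}")
```

Each simulation produces its own set of link ratios (a new set of parameters) and its own reserve figure, and the more volatile the historical residuals, the wider the spread of simulated reserves, which is the sense in which parameter and process variability both feed through.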
 
Thanks Katherine, that response is very helpful.

So just to confirm my understanding at a very high level:

Model uncertainty is captured if a range of models are applied to a set of data. It is captured for a range of best estimates as each actuary can choose the method, but not captured for the bootstrapping technique as it is a single method.

Parameter uncertainty is captured by a range of best estimates as each actuary can choose the parameters when deriving their estimate, and it is also captured within the bootstrapping method as each simulation produces a new set of link ratios, ie a new set of parameters.

Process error is captured when there is variability in the data. It is not captured for a range of best estimates as each actuary uses the same ‘static’ data source, but it is captured within bootstrapping as the base triangle is different for each simulation, due to inclusion of the sampled residuals.

Have I got it?!
Many thanks,
Michael
 
Oh, and another question related to this.

Just to clarify, when we talk about the ‘Parameters’ for quantifying reserve uncertainty, do we mean the assumptions such as the development pattern, initial expected loss ratio, inflation, etc?
 
Thanks!

I originally had it in my head the other way around, eg that if only one model is used to derive a distribution of reserves then it includes model uncertainty (as only one model is being used).

I will now remember that model uncertainty is quantified if we use a range of models to calculate the reserves (if I replace the word error/uncertainty with variability the concept seems to make more sense to me).

Thanks for your help Katherine.
 