• We are pleased to announce that the winner of our Feedback Prize Draw for the Winter 2024-25 session, winning £150 of gift vouchers, is Zhao Liang Tay. Congratulations to Zhao Liang. If you fancy winning £150 worth of gift vouchers (from a major UK store) for just a few minutes of your time over the Summer 2025 session, please see our website at https://www.acted.co.uk/further-info.html?pat=feedback#feedback-prize for more information on how to make sure your name is included in the draw at the end of the session.
  • Please be advised that the SP1, SP5 and SP7 X1 deadline is 14 July, not 17 June as first stated. Please accept our apologies for any confusion caused.

Experience analysis

MLC
Hi,

When conducting experience analysis, assuming a sufficient volume of data is available to give credible results, how far down would you subdivide your data in practice? For example, for a persistency analysis, if the data were available, would you split the data into homogeneous groups based on a combination of product type, duration in force and distribution channel?

If you were doing this to set lapse rate assumptions in, say, a pricing or reserving model, would you realistically set different rates for each homogeneous subgroup, or just use a single lapse rate for the overall product? If the latter, what would be the benefit of spending the time and effort analysing the lapse rate by homogeneous subgroup?

Finally, how much data would you actually require in each subgroup to provide meaningful results? Is there a rule of thumb, for example 10,000 policies?

Thanks,

Max
 
Hi Max

Yes, the CMP suggests that in practice often just these first three categories are used to break down the data.

I have been involved with this exercise in the past, and we mostly split by just product type, duration in force and premium frequency (regular premium, single premium, paid-up). [We only had one distribution channel, otherwise we would probably have done that split too.]
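For anyone who wants to see the mechanics, here is a minimal sketch of that kind of split in Python/pandas. All the column names (product_type, duration_band, premium_frequency, lapsed, exposure) and the data are hypothetical, not from any particular admin system:

```python
import pandas as pd

# Hypothetical policy-level extract: one row per policy-year of exposure.
df = pd.DataFrame({
    "product_type": ["term", "term", "wol", "wol", "term", "wol"],
    "duration_band": ["0-1", "2-4", "0-1", "2-4", "0-1", "0-1"],
    "premium_frequency": ["regular", "regular", "single", "paid-up",
                          "regular", "single"],
    "lapsed": [1, 0, 0, 1, 0, 0],                # 1 if lapsed in the year
    "exposure": [1.0, 1.0, 0.5, 1.0, 1.0, 1.0],  # central exposure in years
})

# Crude lapse rate per homogeneous subgroup: lapses / central exposure.
rates = (
    df.groupby(["product_type", "duration_band", "premium_frequency"])
      .agg(lapses=("lapsed", "sum"), exposure=("exposure", "sum"))
      .assign(crude_rate=lambda g: g["lapses"] / g["exposure"])
)
print(rates)
```

It is worth keeping the exposure column alongside the crude rate in the output, since that is what tells you how much weight each cell can bear in the next step.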

And yes, we did then have pricing assumptions which varied in the same way. These weren't based directly on the raw analysis output; some smoothing, interpolation and rounding were applied, so that we ended up with a practical but still differentiated set of persistency assumption tables.
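As a toy illustration of that smoothing step (not the actual graduation we used), one simple approach is a centred rolling average across adjacent durations, rounded to a 0.5% grid. The crude rates below are made up:

```python
import pandas as pd

# Made-up crude lapse rates by duration in force (years).
crude = pd.Series([0.142, 0.118, 0.131, 0.095, 0.102, 0.088],
                  index=pd.Index(range(1, 7), name="duration"))

# Very simple graduation: 3-point centred rolling mean,
# then round to the nearest 0.5% to give a practical table.
smoothed = crude.rolling(window=3, center=True, min_periods=1).mean()
assumption = (smoothed / 0.005).round() * 0.005

print(pd.DataFrame({"crude": crude,
                    "smoothed": smoothed,
                    "assumption": assumption}))
```

Real graduations are usually more formal (eg fitting a parametric curve or adjusting a standard table), but the principle is the same: keep the shape of the experience while removing sampling noise.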

There is no hard and fast rule about what volume of data constitutes credibility. However, if we were considering whether to split into a further subdivision we would consider:
(a) are the results materially differentiated if we did use that extra split?
(b) if we had used that split on the past x years' data, are the results all over the place or are they relatively stable? If the former (ie all over the place from period to period), that would be a good indicator that we don't have sufficient volume or credibility to use that further split (there's a rough sketch of this check below).
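To make check (b) concrete, here is a rough sketch in Python/pandas, assuming we have crude lapse rates by subgroup and experience year (all names and numbers made up). It flags subgroups whose rates bounce around a lot relative to their mean:

```python
import pandas as pd

# Made-up crude lapse rates by subgroup and experience year.
rates = pd.DataFrame({
    "subgroup": ["A", "A", "A", "B", "B", "B"],
    "year":     [2022, 2023, 2024, 2022, 2023, 2024],
    "rate":     [0.10, 0.11, 0.10, 0.04, 0.15, 0.02],
})

# Coefficient of variation across years as a crude stability measure.
stability = (
    rates.groupby("subgroup")["rate"]
         .agg(["mean", "std"])
         .assign(cv=lambda g: g["std"] / g["mean"])
)

# Flag subgroups whose year-on-year results are "all over the place".
stability["unstable"] = stability["cv"] > 0.25  # illustrative threshold
print(stability)
```

On the rule-of-thumb question: there is no universal policy count, but classical limited-fluctuation credibility theory is sometimes quoted as a yardstick. For claim (or lapse) frequency it gives roughly 1,082 expected events in a cell for "full credibility" at the 90% confidence / 5% tolerance standard, since (1.645/0.05)^2 ≈ 1,082. In practice, checks like (a) and (b) above tend to be more informative than any fixed count.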

Hope that makes sense.
 