Hi,
When conducting experience analysis, assuming a sufficient volume of data is available to give credible results, how far down would you subdivide your data in practice? For example, for a persistency analysis if the data was available would you split data into homogeneous groups based on a combination of product type, duration in force and distribution channel?
If you were doing this to set lapse rate assumptions in a pricing or reserving model, say, would you realistically set different rates for each homogeneous subgroup, or just have a single lapse rate for the overall product? If the latter, what would be the benefit of spending the time and effort analysing the lapse rate by homogeneous subgroup?
Finally, how much data would you actually require in each subgroup to provide meaningful results? Is there a rule of thumb, for example 10,000 policies?
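For context on the last question, one commonly quoted benchmark is the classical limited-fluctuation ("full credibility") standard: the number of policies needed so that the observed lapse rate lands within a chosen relative error of the true rate with a chosen probability, treating lapses as binomial. The sketch below is illustrative only (the function name and the 5%/95% tolerances are my own choices, not a standard anyone has prescribed here):

```python
from math import ceil
from statistics import NormalDist

def full_credibility_policies(lapse_rate, rel_error=0.05, prob=0.95):
    """Policies needed so the observed lapse rate is within rel_error
    of the true rate with probability prob, under a binomial model
    (classical limited-fluctuation credibility standard)."""
    z = NormalDist().inv_cdf((1 + prob) / 2)  # two-sided normal quantile
    return ceil((z / rel_error) ** 2 * (1 - lapse_rate) / lapse_rate)

# Required exposure rises sharply as the lapse rate falls:
for q in (0.02, 0.05, 0.10):
    print(f"lapse rate {q:.0%}: {full_credibility_policies(q):,} policies")
```

At a 5% true lapse rate this comes out at roughly 29,000 policies, so a flat "10,000 policies" rule of thumb may or may not be enough depending on the rate being estimated and the tolerance you accept.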
Thanks,
Max