GLIM Modelling Approach - Capping of Claims

Discussion in 'SP8' started by Entact30, Nov 12, 2013.

  1. Entact30

    I wasn't sure where to put this post as it's more a practical pricing question than a course specific query.

    I have a couple of questions which (hopefully) people more experienced than I in this area might be able to clarify. I'm having some difficulty in the modelling stages of a GLIM.

    The first issue relates to the capping of claims as part of a household GLIM, and specifically to the reasons for doing this. In a severity model for the peril Fire, the explanation I have been given is that capping very large 'total loss' claims at some level removes distortions from your modelling results. This doesn't make sense to me, as it seems this approach throws away valuable information about the impact of factors on severity.

    If anyone knows of a good source for more info on this I would really appreciate it.

    Basically: why cap, at what level, and for which perils?

    Thanks in advance
     
  2. interested

    Hi,

    If you were modelling household fire claims, you would probably find that there were very few total losses in your data (but more smaller claims).

    Imagine, for the sake of explanation, an extreme example in which there was only one total loss in the data, and that this property happened to be a 30-year-old 2-bedroomed house in Norwich, and so on.

    Now if you were to leave this total loss in the data, it would probably have quite a significant effect on the model (depending, of course, on its size relative to the other partial losses). This could mean that the relativities for 2-bedroomed houses come out higher than they otherwise would, or that rates in Norwich come out higher than they should.

    It may be that there is a reason why 2-bedroomed houses in Norwich represent a higher risk than other properties (especially in terms of frequency), so we don't want to exclude this claim altogether. But it's also likely that there was an element of bad luck here, in that this fire turned into a total loss rather than being put out sooner. That bad luck is the random noise that occurs in any model, and we don't want the model to pick it up as a genuine feature. Remember that the purpose of the model is to predict the future, not to replicate every single aspect of the past. The next total loss might be to a 4-bedroomed house in Cambridge, for example - it probably won't be in Norwich next time. So we don't want to unduly load the Norwich rates.

    So, we don't want to exclude it altogether, but we also don't want it to have an undue effect on the results. The answer is to cap it. You would include the capped amount in the GLM and then load for the excess above the cap in some way (either averaged over everyone or targeted more at higher-risk groups).
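
    To make that concrete, here is a minimal sketch in Python of what "cap the claim, then load for the excess" might look like. The column names, the 100,000 cap and the flat per-policy loading are all illustrative assumptions, not a recommended approach:

    Code:
    import pandas as pd

    # Illustrative claims extract - one very large 'total loss' among partial losses
    claims = pd.DataFrame({
        "claim_id": [1, 2, 3, 4],
        "incurred": [2_500, 18_000, 45_000, 310_000],
    })

    CAP = 100_000          # hypothetical cap level, set by judgement
    N_POLICIES = 50_000    # hypothetical exposure base

    # The capped amounts feed the GLM severity fit
    claims["capped"] = claims["incurred"].clip(upper=CAP)

    # The excess over the cap is set aside and loaded back in separately
    claims["excess"] = claims["incurred"] - claims["capped"]

    flat_load = claims["excess"].sum() / N_POLICIES
    print(f"Flat large-claim loading per policy: {flat_load:.2f}")

    In practice the excess is often spread in proportion to the modelled risk premium of each segment rather than as a flat amount, but the split between "capped amount into the GLM" and "excess loaded separately" is the same idea.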

    The million-dollar question then is at what level to cap. There is no right answer, and this is where experience comes into play (GLMs are an art, not a science). You need to use judgement in deciding how much of an effect you want these large claims to have. You could start by looking at the distribution of claim amounts and take it from there. Also, does the reserving team have a definition of large claims that you could be consistent with?
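
    On the "at what level" question, a first look at the empirical distribution might be something like the sketch below (simulated lognormal data as a stand-in for real fire severities; the quantiles to inspect are a judgement call):

    Code:
    import numpy as np

    rng = np.random.default_rng(0)
    # Stand-in for fire severity data: lognormal body with a heavy tail
    incurred = rng.lognormal(mean=8.5, sigma=1.2, size=5_000)

    print(f"mean: {incurred.mean():,.0f}")
    for q in (0.90, 0.95, 0.99, 0.995):
        print(f"{q:.1%} quantile: {np.quantile(incurred, q):,.0f}")

    # Candidate caps are often read off around these upper quantiles and then
    # sense-checked against the reserving team's large-claim threshold.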

    Note that the method for loading/spreading the excess amount also requires judgement and experience.

    In terms of other perils, the cap is likely to vary by peril as a "large" fire claim may be very different to a "large" subsidence claim.

    Good luck.
     
  3. Entact30

    Thank you for the very clear response - it was very helpful.

    So I suppose it really depends on the tail of the severity distribution. If the 99th percentile claim amount is not too far from the mean then you mightn't want to cap at that level, but if it's significantly above it then you might be more inclined to cap.

    I also assume that, in terms of distorting results, the effect of not capping would be worse on a rating factor where the claim belonged to a level with a low amount of exposure (e.g. a remote area) compared with one where the exposure was high (e.g. number of bedrooms = 3).
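
    To see that effect numerically, here is a toy example with invented numbers (not real data): a single large claim in a sparsely-exposed area moves that area's average severity a long way, while a well-exposed level barely moves once the cap is applied.

    Code:
    import pandas as pd

    CAP = 50_000  # illustrative cap

    # Toy severity data: one large claim in an area with very little exposure
    claims = pd.DataFrame({
        "area":     ["remote"] * 3 + ["urban"] * 300,
        "incurred": [5_000, 6_000, 250_000] + [5_500] * 300,
    })

    summary = (claims.assign(capped=claims["incurred"].clip(upper=CAP))
                     .groupby("area")[["incurred", "capped"]].mean())
    print(summary)
    # The remote area's uncapped mean is dominated by the single large claim;
    # the urban mean is essentially unchanged by the cap.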
     
  4. interested

    Yes, and yes :)
     
