
A5 April 2005


Aditya jain

Member
How is this model a Markov chain?
Suppose someone is at the 40% level and makes a claim. We would need to know their previous claim history (whether or not they made a claim before) to tell whether they end up at the 0% level or the 25% level.
On the other hand, if someone at the 60% level makes a claim, they are immediately demoted to the 40% level, and a further claim then moves them to the 0% level. So the probability of going from 60% to 25% should be 0, right?

There is no need to know the previous claim history for this question. However, there are indeed questions (e.g. Question A3 of April 2006) where you do need it, in which case you split the level into two states (say Level 3a and Level 3b). In any case, for a Markov chain the transition out of each state must be independent of whatever happened in the past and can depend only on the current state.
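The state-splitting trick can be sketched as follows. This is a hypothetical illustration only: the levels, the claim probability `p` and the transition rows are all invented, not taken from Question A3 of April 2006. The idea is that if behaviour at Level 3 depends on whether the policyholder claimed last year, you split Level 3 into 3a (claim-free last year) and 3b (claimed last year), and then each row depends only on the current state.

```python
# Hypothetical state-splitting sketch: Level 3 behaves differently
# depending on last year's claim history, so we split it into two
# states, 3a (no claim last year) and 3b (claimed last year).
# All numbers below are invented for illustration.

p = 0.1  # assumed probability of making a claim in a year

rows = {
    # From 3a: a claim-free year promotes to Level 4; a claim keeps
    # the policyholder at Level 3 but records the claim, i.e. state 3b.
    "3a": {"4": 1 - p, "3b": p},
    # From 3b: a claim-free year promotes to Level 4; a second claim
    # in a row demotes to Level 2.
    "3b": {"4": 1 - p, "2": p},
}

# Each row is a valid probability distribution over next-period states.
for state, row in rows.items():
    assert abs(sum(row.values()) - 1) < 1e-12
```

With the split, "did they claim last year?" is encoded in the state label itself, so the process is Markov again.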

So, if the policyholder is currently at the 40% level, then the number of claims they make in the current period (and nothing else) determines where they will be in the next period: no claims, the 60% level (probability 0.85); one claim, the 25% level (probability 0.12); more than one claim, the 0% level (probability 1 - 0.85 - 0.12 = 0.03). It is impossible to remain at the 40% level over consecutive periods. There is no demotion within the same period because, unlike in a Markov jump process, transitions can only occur at the end of a period: count the total number of claims within that period to determine where the policyholder starts the next one.
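The transition logic above, for a policyholder currently at the 40% level, can be sketched in a few lines. The claim-count probabilities (0.85, 0.12, 0.03) are those quoted in the answer; the function and state names are illustrative.

```python
# One-step transitions out of the 40% discount level, as described
# above: next period's level depends only on the number of claims
# made in the current period.

def next_level_from_40(claims_this_period: int) -> str:
    """Map this period's claim count to next period's discount level."""
    if claims_this_period == 0:
        return "60%"   # no claims: move up one level
    if claims_this_period == 1:
        return "25%"   # exactly one claim: drop to the 25% level
    return "0%"        # two or more claims: drop to the 0% level

# The corresponding row of the one-step transition matrix for the
# 40% state; note the zero probability of staying at 40%.
row_40 = {"60%": 0.85, "25%": 0.12, "0%": 0.03, "40%": 0.0}
assert abs(sum(row_40.values()) - 1) < 1e-12
```

Because the row sums to 1 and uses only the current period's claim count, the 40% state needs no history and no splitting.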
Thanks. I got it.
 