Markov chain breakdown

Discussion in 'CT4' started by snerap@gmail.com, Jul 16, 2017.

  1. snerap@gmail.com

    snerap@gmail.com Active Member

    Can somebody please explain how we break down a chain that is not Markov to make it Markov? What are the pointers we need to look for when we face such a question?
     
  2. Aditya mohan mathur

    We add additional state(s) to ensure that the Markov property holds.

    For example, in Q6 of the September 2014 paper,
    we split the second state into two states so that, when someone makes a claim from that state, we know where they will go next.

    Q12 from the April 2011 paper was also pretty similar.
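
    To illustrate the idea, here is a minimal made-up sketch in Python (not the actual exam question; all state names, rules and probabilities are my own assumptions): a no-claims discount (NCD) style model where the middle state has to be split because its onward behaviour depends on how it was reached.

```python
# Made-up NCD sketch, not the actual exam question.
# Unsplit states: 0%, 25%, 50% discount. Suppose the (hypothetical)
# rule is that a policyholder who drops from 50% to 25% after a claim
# must spend a claim-free year at 25% before returning to 50%. Then
# the next state from "25%" depends on how you arrived there, so the
# chain on {0%, 25%, 50%} is NOT Markov.
#
# Fix: split "25%" into "25U" (reached by moving up from 0%) and
# "25D" (reached by moving down from 50%). On the split state space,
# the one-step transition probabilities depend only on the current
# state, so the chain is Markov. With P(claim) = 0.1:
P = {
    "0%":  {"25U": 0.9, "0%": 0.1},
    "25U": {"50%": 0.9, "0%": 0.1},
    "25D": {"25U": 0.9, "0%": 0.1},  # differs from "25U": no direct return to 50%
    "50%": {"50%": 0.9, "25D": 0.1},
}

# Sanity check: every row is a probability distribution.
for state, row in P.items():
    assert abs(sum(row.values()) - 1.0) < 1e-12, state
```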
     
  3. snerap@gmail.com

    snerap@gmail.com Active Member

    I understood that much! It's like sometimes, even in a four-state model, we break down one state, and sometimes two. What are the ground rules, if any?
     
  4. Aditya mohan mathur

    It's just to ensure that the Markov property is satisfied.
    There is no fixed way of splitting; it depends on the question.

    Which question are you talking about?
     
  5. Mark Mitchell

    Mark Mitchell Member

    Aditya is right - we split states to ensure the Markov property holds. How many states are split depends on the particular scenario.

    My advice is to consider each of the states in turn and ask yourself:
    Do I know the onward transition probabilities (ie probabilities of what will happen next) just by knowing the current state, or do I need extra information?
    If knowing the current state is enough, then that state is Markov and does not need to be split. If you need extra information, then you need to split the state so that the extra information is built into the state itself.

    It might help to have a look at the description in the following thread:
    https://www.acted.co.uk/forums/index.php?threads/subject-103-april-2003-ques-6.14019/
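
    To make that test concrete, here is a rough Python sketch (my own illustration, reusing the made-up NCD rules from post 2, not from the exam): simulate the process on the unsplit states and compare the next-state distribution from "25%" conditional on the previous state. If the distribution changes with the previous state, the state fails the test above and must be split.

```python
# Empirical version of the "do I need extra information?" check,
# using the made-up NCD rules from the example in post 2.
import random
from collections import Counter, defaultdict

random.seed(0)

def step(prev, cur):
    """One year of the illustrative process on the UNSPLIT states."""
    claim = random.random() < 0.1
    if cur == "0%":
        return "0%" if claim else "25%"
    if cur == "50%":
        return "25%" if claim else "50%"
    # cur == "25%": what happens next depends on the PREVIOUS state,
    # which is exactly the failure of the Markov property.
    if claim:
        return "0%"
    return "25%" if prev == "50%" else "50%"  # must re-climb after a claim

# Simulate a long path and tally next-state frequencies from "25%",
# broken down by the state occupied the year before.
path = ["0%", "25%"]
for _ in range(100_000):
    path.append(step(path[-2], path[-1]))

by_prev = defaultdict(Counter)
for prev, cur, nxt in zip(path, path[1:], path[2:]):
    if cur == "25%":
        by_prev[prev][nxt] += 1

for prev, counts in sorted(by_prev.items()):
    total = sum(counts.values())
    probs = {s: round(n / total, 3) for s, n in counts.items()}
    print(f"P(next | current=25%, previous={prev}) ~ {probs}")
# The conditional distributions differ with the previous state, so
# "25%" fails the test and needs to be split (e.g. into 25U and 25D).
```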
     
  6. snerap@gmail.com

    snerap@gmail.com Active Member

    Thank you both!! This helped!!
     
    Aditya mohan mathur likes this.
