Ch. 5 :: Markov Jump Chain

Discussion in 'CT4' started by sumondas, Aug 14, 2010.

  1. sumondas


    Dear All,

    I am having trouble understanding Section 9 of Chapter 5.

    I am unable to differentiate between a "Markov jump chain" and a "Markov jump process".

    The mathematical notation is not giving me a clear idea of this jump chain process.

    Even the sentence in the 3rd paragraph is not giving me a clear picture of this jump chain.
    The sentence is: "The only way in which the jump chain differs from a standard Markov chain is when the jump process {Xt, t>=0} encounters an absorbing state. From that time on it makes no further transitions, implying that time stops for the jump chain."

    Please help me to understand this jump chain with some examples, rather than with strict mathematical notation.

    Thanks in advance !

    Regards,
    Sumon
     
  2. DevonMatthews


    Greetings,

    A Markov jump chain (called the "jump chain" for short) differs from a Markov jump process (MJP) in that it is simply the MJP observed only at the times of its transitions, i.e. the same process converted to discrete time. The subtlety is that the MJP runs in continuous time: if it reaches an absorbing state, it makes no further transitions (the holding time in that state is infinite), which is what the paragraph means when it says "time stops" for the jump chain. All it really means is that a realisation of the jump chain is just the string of states the process occupied, in the order it visited them. Say, for a continuous-time random walk on the integers, the jump chain might look like X_n = {0, 1, 0, 1, 2, 3, 4, 3, 4, 3, 2, 1, 0, -1, ...}
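    To make this concrete, here is a minimal sketch (not from the Core Reading — the function name, rate, and states are my own choices for illustration). It simulates a continuous-time random walk with exponential holding times, then throws away the jump times to recover the jump chain. State 0 is taken as absorbing, so the simulation stops there, which is exactly the "time stops" behaviour described above.

    ```python
    import random

    def simulate_mjp(start, rate, absorbing, max_jumps=50, seed=42):
        """Simulate a continuous-time random walk on the integers:
        from each non-absorbing state the process waits an Exp(rate)
        holding time, then jumps up or down by 1 with equal probability.
        Once `absorbing` is entered, no further transitions occur
        (the holding time there is infinite)."""
        rng = random.Random(seed)
        t, x = 0.0, start
        path = [(t, x)]  # full MJP realisation: (jump time, state) pairs
        while x != absorbing and len(path) < max_jumps:
            t += rng.expovariate(rate)   # exponential holding time
            x += rng.choice([-1, 1])     # equally likely step up or down
            path.append((t, x))
        return path

    path = simulate_mjp(start=3, rate=2.0, absorbing=0)

    # The jump chain is the same realisation with the times dropped:
    # just the string of states visited, in order.
    jump_chain = [x for _, x in path]

    print("MJP realisation:", [(round(t, 2), x) for t, x in path])
    print("Jump chain     :", jump_chain)
    ```

    Notice that the jump chain on its own is an ordinary discrete-time Markov chain; the only difference shows up at the absorbing state, where the MJP sits forever and the jump chain simply has nothing further to record.
    
    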
     
