Confusion about definition

Discussion in 'CS2' started by ykai, May 20, 2023.

  1. ykai

    ykai Ton up Member

    1. Is a periodic state one that cannot return to itself in 1 step?
    Is an aperiodic state one that can return to itself in 1 step?
    I am confused by both definitions.

    2. In CMP-CS2-CH4-section9 it says "In other words, the jump chain possesses the Markov property and is itself a Markov chain.", and the summary page says "The jump chain is a Markov chain in its own right". What does this mean?
    A Markov chain's current state is not affected by the past, but doesn't "in its own right" mean not being affected by future states?
    If so, doesn't every stochastic process meet this condition?

    3. In CMP-CS2-CH4-section9 it is mentioned that the jump chain differs from a Markov chain when they encounter an absorbing state, but I don't understand where the difference lies. They seem to stop in the same way when they encounter an absorbing state.
     
  2. ykai

    ykai Ton up Member

    4. In CMP-CS2-CH4-question4.5-(i)-page61,
    the matrix I calculated is:
    0   0.1   0.5   0.4   0
    0   0     0.3   0.7   0
    0   0     0     1     0
    0   0     0     0     1
    0   0     0     0     0
    I can't understand where the other values come from.

    5. BPP-CS2-Chapter 6 of the Course Notes-Survival models - Summary (pdf)-page4-Question 1.9
    What does <arg> in plot(<x values>, <function>(<fixed arguments>, <arg> = <x values>)) mean?
     
  3. Andrew Martin

    Andrew Martin ActEd Tutor Staff Member

    Hello

    1.

    The period of a state is the highest common factor (HCF) of the possible return times to that state (ie the set of times n such that p_ii(n) > 0). If the HCF is 1, then we say that the state is aperiodic. If HCF > 1, we say the state is periodic with period d, where d = HCF. If return is not possible, we let d = infinity.


    If it is possible to transition from a state to itself in one step, then that state must be aperiodic (as the return times are 1, 2, 3, 4, 5, ..., the HCF of which is 1). So it is not possible for there to be a non-zero one-step transition probability from a periodic state to itself.


    However, it is possible for a state to be aperiodic without it being possible to return to itself in one step. See the Markov chain after the definition of periodicity in the notes for example.
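
    To make the HCF idea concrete, here is a rough R sketch (my own illustration, not from the Course Notes). It numerically computes the powers of a transition matrix P, collects the return times n for which p_ii(n) > 0 up to some cut-off, and takes their HCF:

    period.of.state = function(P, i, max.n = 50){
      hcf = function(a, b) if (b == 0) a else hcf(b, a %% b)
      Pn = diag(nrow(P))                           # P^0 = identity
      return.times = c()
      for (n in 1:max.n){
        Pn = Pn %*% P                              # n-step transition matrix P^n
        if (Pn[i, i] > 0) return.times = c(return.times, n)
      }
      if (length(return.times) == 0) return(Inf)   # return to i not possible
      Reduce(hcf, return.times)
    }

    P = matrix(c(0, 1,
                 1, 0), nrow = 2, byrow = TRUE)    # chain that alternates between two states
    period.of.state(P, 1)                          # 2, so state 1 is periodic with period d = 2

    Because it only checks return times up to max.n steps, treat this as a numerical check rather than a proof.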


    2. / 3.

    A Markov jump process (MJP) operates in continuous time with a discrete state space (and importantly has the Markov property). A Markov chain operates in discrete time with a discrete state space (and is Markov). If you take the set-up for a MJP but ignore the actual timings of the jumps and instead only consider the states visited, then this is the set-up for a Markov chain: discrete state space, discrete time set (each 'step' corresponds to a time at which a jump occurs) and Markov - except for the one point below about absorbing states.

    The probabilities for the Markov chain are given by the MJP probabilities of the process jumping to a particular state when it leaves the current state (as covered in Section 7 of Chapter 4). For example, the probability of going to state i when leaving state k is given by mu_ki / sum(all the rates leaving state k), and this is p_ki in the underlying jump chain.

    As per the notes, the one thing to watch out for is absorbing states. For an absorbing state in a MJP, the process can never leave this state. We don't have such a thing in a Markov chain (because there is ALWAYS a transition every discrete time step). The equivalent is a state for which the only possible transition is back into that state itself, so the chain effectively stays in that state indefinitely, but there is still a transition from that state to itself at each discrete time step.
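
    As a quick illustration of how the jump chain's probabilities fall out of the rates (my own sketch, not from the notes), the following R function turns a generator matrix A into the jump chain's transition matrix, using p_ki = mu_ki / sum(all the rates leaving state k) off the diagonal and the stays-put convention just described for absorbing states:

    jump.chain = function(A){
      P = matrix(0, nrow(A), ncol(A))
      for (k in 1:nrow(A)){
        out.rate = -A[k, k]             # total rate of leaving state k
        if (out.rate == 0){
          P[k, k] = 1                   # absorbing state: jump chain stays put
        } else {
          P[k, ] = A[k, ] / out.rate    # p_ki = mu_ki / (sum of rates leaving k)
          P[k, k] = 0                   # the jump chain never jumps from k back to k
        }
      }
      P
    }

    A = matrix(c(-0.3,  0.2,  0.1,
                  0.1, -0.1,  0.0,
                  0.0,  0.0,  0.0), nrow = 3, byrow = TRUE)   # state 3 is absorbing
    jump.chain(A)   # each row sums to 1; row 3 is (0, 0, 1) for the absorbing state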

    It seems that you are unsure about the term 'in its own right' in particular. This is not referring to past / future states or anything along those lines. It is purely stating that the jump chain is a Markov chain.

    4.

    This question wants the generator matrix (ie the matrix of transition rates). The question gives us some conditional probabilities of jumping to particular states, given we leave a state (again, see Section 7 of Chapter 4). For example, given we are leaving state L, there is a 30% chance we go to state I and a 70% chance we go to state S. This tells us:

    mu_LI / (mu_LI + mu_LS) = 30%
    mu_LS / (mu_LI + mu_LS) = 70%

    We're also told that the average amount of time spent in state L is 10 days. The total time spent in a state is exponentially distributed with parameter equal to the sum of the transition rates leaving the state, ie here:

    T_L ~ Exp(mu_LI + mu_LS)

    and we are told E[T_L] = 10. So 1 / (mu_LI + mu_LS) = 10

    Using this in the two probability statements above gives:

    10 mu_LI = 30% <=> mu_LI = 0.03
    10 mu_LS = 70% <=> mu_LS = 0.07

    Finally, noting that mu_ii = -sum(rates leaving i), we have mu_LL = -0.1.

    The same approach can be taken for the other entries.

    Note that the diagram in the solutions actually shows the transition probabilities for the underlying jump chain, not transition rates (again, except for the issue with the absorbing state). However, these probabilities help us work out the transition rates as per the equations above.
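
    Putting those calculations into a few lines of R as a sanity check (my own sketch; the 10-day mean holding time and the 30% / 70% split are from the question):

    mean.hold = 10                # E[T_L] = 10 days
    p.LI = 0.3                    # P(go to I | leave L)
    p.LS = 0.7                    # P(go to S | leave L)
    total.rate = 1 / mean.hold    # mu_LI + mu_LS = 0.1
    mu.LI = p.LI * total.rate     # 0.03
    mu.LS = p.LS * total.rate     # 0.07
    mu.LL = -(mu.LI + mu.LS)      # -0.1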

    5.

    Let's say we create a function as follows:

    my.func = function(a, b){
      a * b    # returns the product of a and b
    }

    Let's say I want to plot the values of the function when a is 3 but b varies from 1 to 10. I can use the following code:

    plot(1:10, my.func(a = 3, b = 1:10))

    Or, if I wanted to plot the values when a is 4 but b varies from 1 to 10, I could use:

    plot(1:10, my.func(a = 4, b = 1:10))

    <arg> is just the placeholder we've used to refer to the name of the argument that you wish to vary. In this case <arg> is b, and the <fixed arguments> are a, as we are keeping that the same.
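
    The same pattern works with R's built-in functions too. For example (my own illustration, not from the summary), dexp() gives the density of the exponential distribution; to plot it with the rate held fixed at 0.5, rate plays the role of the <fixed arguments> and x plays the role of <arg>:

    x = seq(0, 10, by = 0.1)
    plot(x, dexp(x, rate = 0.5), type = "l")    # exponential density with rate 0.5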

    Hope this helps!

    Andy
     
  4. ykai

    ykai Ton up Member

    Thank you for your detailed response.
    I understand completely now.
     
