
Poisson Process: Markov Process


sahildh

Member
Can anyone please give an example of how the Poisson process is a Markov process? I got a bit confused between the Poisson process rate, the independent increments and the state space of the process.
N(t) records the number of occurrences of an event within the time interval from 0 to t, and the events occur singly at rate lambda. So when an event occurs between 0 and t, it adds 1 to N(t). I was wondering what the Markov process is here, and what the state space is.
Moreover, having read 'Poisson Process revisited' (page 24, Chapter 5), I was wondering whether, with the process jumping from 0 to 1, 1 to 2, and so on, state 2 includes state 1 (plus one increment), state 3 includes state 2 (plus one increment), and so on?
I am really mixed up; a clear explanation would help me a lot. Thank you.
 
Because the Poisson process does not depend on its history beyond the current state (in fact, the distribution of its future increments does not even depend on the current state).

Here, the states are the number of occurrences, so you can say state 2 = state 1 plus one increment, and so on.
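Here is a minimal Python sketch (illustrative only, not part of the original reply; the rate of 2 per hour, the 3-hour window and the random seed are assumed values) that simulates one path of a Poisson process and prints each jump, which makes the state space {0, 1, 2, ...} and the +1 increments concrete.

```python
# Sketch: simulate one Poisson process path via its exponential inter-arrival times.
import numpy as np

rng = np.random.default_rng(seed=42)
lam = 2.0        # assumed event rate per hour
horizon = 3.0    # observe the process on [0, 3] hours

# Inter-arrival times of a Poisson process are independent Exp(lam) variables,
# so arrival times are built by accumulating exponential draws.
arrival_times = []
t = 0.0
while True:
    t += rng.exponential(1.0 / lam)
    if t > horizon:
        break
    arrival_times.append(t)

# N(t) jumps by exactly 1 at each arrival, so the states visited are 0, 1, 2, ...
for state, time in enumerate(arrival_times, start=1):
    print(f"t = {time:.3f} h: jump from state {state - 1} to state {state}")
```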
 
Thank you for the reply, mate, but I need a more detailed explanation of the query.
 
The Poisson process is basically a counting process, like the number of claims arriving up to time t, so its state space is {0, 1, 2, 3, ...}.

It's not only Markov but trivially Markov, because the distribution of future increments doesn't even depend on the current state of the process. The number of events occurring in an interval of length t is Poi(λt) regardless of the current state of the process.

For example, suppose I want to calculate the probability that the next claim arrives more than one hour from now, given that the process is in state

i) 3
ii) 4

Then the required probability is the same in both cases, namely exp(−λ), because the waiting time until the next claim is Exp(λ) whatever the current state (assuming claims arrive at a rate of λ per hour).
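To back up the numbers, here is a short sketch (again illustrative and not from the original posts; the claim rate of 0.5 per hour, the seed and the simulation size are assumed) that estimates the probability of no claim arriving within the next hour and shows it matches exp(−λ) whatever state the process is in.

```python
# Sketch: the waiting time to the next claim is Exp(lam) regardless of the current
# state, so P(no claim in the next hour | state i) = exp(-lam) for every i.
import numpy as np

rng = np.random.default_rng(seed=0)
lam = 0.5            # assumed claim rate per hour
n_paths = 200_000    # number of simulated waiting times

def prob_no_claim_next_hour(current_state: int) -> float:
    """Estimate P(next claim takes more than 1 hour | process in `current_state`).
    By independent increments the next waiting time is Exp(lam) whatever the state,
    so `current_state` only labels the output; the simulation itself is identical."""
    next_wait = rng.exponential(1.0 / lam, size=n_paths)
    return float(np.mean(next_wait > 1.0))

for state in (3, 4):
    print(f"state {state}: estimated probability = {prob_no_claim_next_hour(state):.4f}")
print(f"theoretical exp(-lam)        = {np.exp(-lam):.4f}")
```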
 