
Markov Chain


maryam

Member
Can anybody please explain how to arrive at the equation for the expected number of years taken to reach the 50% discount level, given that the policyholder is currently on discount level i? The question is from Revision Notes Booklet 2, page 44. I'm seriously lost.
 
Hi maryam
The first line of the solution on page 47 is:

m0 = 1 + (2/3)m0 + (1/3)m1

This is the same as:

m0 = (1 + m1) x (1/3) + (1 + m0) x (2/3)

You obtain this by looking at the first time step. You're currently in state 0, with expected time to reach state 2 equal to m0.

What can happen in the first time step? Either you move to state 1 (with probability 1/3) or you stay in state 0 (with probability 2/3).

In the first case, the expected number of time steps taken will be (1 + m1). This is because the first step has already happened, and now that you are in state 1, a further m1 steps are expected before you reach state 2 for the first time.

In the second case, the expected number of steps will be (1 + m0), because one step has already happened, and a further m0 steps are expected before you reach state 2 from here. Multiplying each outcome by its probability (so as to take an expectation) gives the result above.

You then simplify this to get the first line of the solution. The second line of the solution on page 47 follows from exactly the same process, but starting in state 1 rather than state 0. Try it!
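If you want to sanity-check the algebra, the same first-step equations can be solved numerically. The sketch below is a minimal example: the transitions out of state 0 come from the equation above, but the transitions out of state 1 (move up to state 2 with probability 1/3, fall back to state 0 with probability 2/3) are my assumption for illustration, so check them against the booklet before relying on the numbers.

```python
import numpy as np

# States 0, 1, 2, where state 2 is the 50% discount level (the target).
# Row 0 is taken from the equation in the solution:
#   m0 = 1 + (2/3)m0 + (1/3)m1.
# Row 1 is an ASSUMPTION (up with prob 1/3, back to state 0 with prob 2/3);
# replace it with the probabilities given in the booklet.
P = np.array([
    [2/3, 1/3, 0.0],   # from state 0
    [2/3, 0.0, 1/3],   # from state 1 (assumed)
    [0.0, 0.0, 1.0],   # state 2, treated as absorbing for this calculation
])

# The expected hitting times m_i of state 2 satisfy, for i != 2:
#   m_i = 1 + sum_j P[i, j] * m_j,   with m_2 = 0.
# Restricting to the transient states {0, 1} and writing Q for that
# corner of P, this is the linear system (I - Q) m = 1.
Q = P[:2, :2]
m = np.linalg.solve(np.eye(2) - Q, np.ones(2))
print(m)  # m[0] is m0, m[1] is m1
```

This is just the conditioning-on-the-first-step argument written as a linear system: moving the m terms to the left-hand side of each equation gives (I - Q)m = 1, which one solve call handles for any number of discount levels.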
Does that help enough?
Robert
 