
Q&A Bank

James123
A couple of results in the solutions from the Q&A Bank I wanted to query from chapters 7/8/9:

1. Expectation[W(t)^B, W(1)] = min(t^b,1)

Is this a standard result we should know, or is it derived in some way?


2. In the multiplication table from Taylor's formula we know (dB(t))^2 = dt. In the same way, can we say that (B(t))^2 = t?

Thanks in advance
 
Hi.

1. Maybe you could check what you've put here. Your expectation operator has two arguments and so I'm not sure I know what you mean. It looks like you've got some sort of covariance going on instead, maybe?

2. You're correct to say that \((dB(t))^2=dt\), but this only works because \(dB(t)\) is small. This cannot be extended to \(B(t)^2\). If \(B(t)^2\) did equal \(t\) then \(B(t)\) would just be the deterministic curve \(\pm\sqrt{t}\), with no randomness left in the process at all!

If you tell me which Q&A questions you're referring to that might help :)
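Point 2 can be sanity-checked by simulation: \(E[B(t)^2]=t\) holds on average across paths, but \(B(t)^2\) itself is random, so it cannot equal the constant \(t\). In this sketch the value of \(t\) and the path count are arbitrary illustrative choices, not from any exam question.

```python
import random
import statistics

random.seed(1)

t, n_paths = 4.0, 20000

# Simulate B(t) directly: for standard Brownian motion, B(t) ~ N(0, t).
b_t = [random.gauss(0.0, t ** 0.5) for _ in range(n_paths)]

# E[B(t)^2] = t holds on average across the simulated paths ...
mean_sq = statistics.fmean(x * x for x in b_t)

# ... but B(t)^2 is a random variable, not the constant t:
# its spread across paths is large.
sd_sq = statistics.stdev([x * x for x in b_t])

print(round(mean_sq, 1))  # close to t = 4.0
print(sd_sq > 1.0)        # True: B(t)^2 varies a lot from path to path
```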
 
Hi Steve

1. This is actually from the April 2014 exam paper, Q3(iii) - I am going by the Examiners' Report solutions:

We are given:
X(t) = (t^alpha)(W(t)^Beta)

Subbing this into the LHS below we get:
E[(X(t)-W(1))^2] = t^(2alpha + Beta) + 1 - 2(t)^alpha *min(t^Beta,1)

I have broken down the LHS to be
= E[(X(t))^2] + E[(W(1))^2] - 2E[X(t)W(1)]

Each term here corresponds to a term on the RHS of the above equation, leaving us with:
E[(X(t))^2] = t^(2alpha + Beta)
E[(W(1))^2] = 1
2E[X(t)W(1)] = (2(t)^alpha)*min(t^Beta,1)

I don't understand how that final term is calculated - where does the min part come from?
If we break it down further:
2E[X(t)W(1)] = 2E[(t^alpha)(W(t)^Beta)W(1)] = 2(t^alpha)E[(W(t)^Beta)W(1)]

Therefore meaning that E[(W(t)^Beta)W(1)] = min(t^Beta,1)?


Apologies if it's confusing how I've typed it out :)
 
That's clearer, thanks! Just watch out for your notation as I think you mean W(t^Beta) rather than W(t)^Beta ;)

The two results you need for this are

(1) \(cov(A,B)=E[AB]-E[A]\times E[ B]\)

(2) \(cov(W(t),W(s))=\min(t,s)\)

If we let \(A=W(t^\beta)\) and \(B=W(1)\), then we can use result (1) to give
\[E[W(t^\beta)W(1)]=cov(W(t^\beta),W(1)) + E[W(t^\beta)] \times E[W(1)].\]
The last two expectations equal zero because \(W\) is a standard Brownian motion, and result (2) deals with the covariance, leaving us with
\[E[W(t^\beta)W(1)]=\min(t^\beta,1).\] Please let me know if this doesn't make sense.
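This covariance result can also be verified numerically. The sketch below (the values of \(\beta\) and \(t\) are arbitrary illustrative choices, picked so that \(t^\beta<1\)) builds \(W(t^\beta)\) and \(W(1)\) from independent normal increments and estimates \(E[W(t^\beta)W(1)]\):

```python
import random
import statistics

random.seed(2)

beta, t = 2.0, 0.7
s = t ** beta          # s = 0.49, so min(t^beta, 1) = 0.49
n_paths = 50000

# Since s < 1, build W(1) from W(s) plus an independent increment:
# W(s) ~ N(0, s) and W(1) - W(s) ~ N(0, 1 - s), independent of W(s).
products = []
for _ in range(n_paths):
    w_s = random.gauss(0.0, s ** 0.5)
    w_1 = w_s + random.gauss(0.0, (1.0 - s) ** 0.5)
    products.append(w_s * w_1)

# Sample estimate of E[W(t^beta) W(1)]; should land close to 0.49.
estimate = statistics.fmean(products)
print(round(estimate, 2))
```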
 
Thanks Steve, that makes perfect sense!

Some further questions on the same topic:

1. I was looking at the ASET which offers an alternative answer to the same question - they instead use the conditions:

E[W(t)-W(s)]=0 and Cov[W(t),W(s)]=min(s,t)

Whereas in the examiners report they used:

Var[W(t)]=0 and Var(W(t))=E[(W(t))^2] - E[W(t)]^2

- If we get this type of question where we're asked to prove something is standard Brownian motion, can we use any combination of these conditions, or do we have to use certain conditions together?

- Also, what other conditions can we use?


2. Looking at the notation in the notes, I am confused about the difference between W(t) and B(t).
I was under the impression W(t) is used to denote general BM, whilst B(t) denotes standard BM. However, in this question W(t) denotes standard BM.
Are these terms simply used interchangeably?

3. Final question, I promise: geometric BM:
S(t) = exp[W(0) + sigma*B(t) + mu(t)]

I don't understand how from this we can derive that S(t) has a Lognormal[W(0) + mu(t), (sigma^2)t] distribution?
 
1. The definition of Standard Brownian motion is given in Section 1.2 of Chapter 8. The different approaches you've mentioned are just alternative methods of verifying these conditions.
Var[W(t)]=0 and Var(W(t))=E[(W(t))^2] - E[W(t)]^2
Your first Var needs to be an expectation.

2. Any letter could be used to denote a process, so make sure you read the question carefully to understand what it's referring to.

3. The key here is to realise that \(B(t)\sim N(0,t)\). Now if \[S(t)=\exp[W(0)+\sigma B(t)+\mu(t)]\] then \[\log S(t)=W(0)+\sigma B(t)+\mu(t),\] which means that \[ \log S(t) \sim N(W(0)+\mu(t), \sigma^2 t).\] Therefore \(S(t)\) is lognormally distributed with parameters \( W(0)+\mu(t) \) and \( \sigma^2 t\).
Hope that helps :)
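The lognormal result can be checked numerically too. In the sketch below, the values of \(W(0)\), \(\sigma\), \(t\) and \(\mu(t)\) are arbitrary illustrative choices; we simulate \(\log S(t)=W(0)+\sigma B(t)+\mu(t)\) directly and compare its sample mean and variance with \(W(0)+\mu(t)\) and \(\sigma^2 t\):

```python
import random
import statistics

random.seed(3)

# Arbitrary illustrative parameter values (not from the exam question).
w0, sigma, t = 0.5, 0.8, 2.0
mu_t = 0.3            # value of the deterministic drift mu(t) at this t
n_paths = 40000

# log S(t) = W(0) + sigma*B(t) + mu(t), with B(t) ~ N(0, t).
log_s = [w0 + sigma * random.gauss(0.0, t ** 0.5) + mu_t
         for _ in range(n_paths)]

# Theory: log S(t) ~ N(W(0) + mu(t), sigma^2 * t) = N(0.8, 1.28).
print(round(statistics.fmean(log_s), 1))     # close to 0.8
print(round(statistics.variance(log_s), 1))  # close to 1.28
```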
 
Thanks for your help Steve, massively appreciated!

However, I am still failing to see how:

From log S(t) = W(0) + σB(t) + μ(t) we can say that log S(t) ∼ N(W(0) + μ(t), σ²t).

Many thanks!
 
Take a look at the terms on the right hand side:

\(W(0)\) is just a constant so it has zero variance and an expected value of, well, \(W(0)\).
\(\sigma B(t) \) is a constant multiple of standard Brownian motion (which we know has a \(N(0,t)\) distribution). Therefore \(\sigma B(t)\) has an expected value of zero and a variance of \(\sigma^2 t\) (are you happy about this bit?).
\(\mu(t)\) is a deterministic function in \(t\) so it has zero variance and an expected value of \(\mu(t)\).

This means that \(\log S(t)\) must be normally distributed, with mean \(W(0)+\mu(t)\) and variance \(\sigma^2 t\).
 
Hi Steve, sorry, maybe I'm having a brain-dead moment here - I completely understand how we have calculated the expectation and variance of the RHS as required.

But how do we know it's a normal distribution - i.e. once we've worked out the expectation and variance of the RHS as you've shown above, how do we then know it's not, for example, a Poisson distribution with this expectation and variance?
 
But how do we know it's a normal distribution?

Because there's only one thing on the right hand side that's even got a distribution :) It's the standard Brownian motion that introduces the random behaviour into the model - everything else there is either constant or a deterministic function. Since there's only one random variable in the whole expression, and it's normally distributed, then the whole thing must have a normal distribution.

Hope that helps?
 
Take a look at the terms on the right hand side:

\(W(0)\) is just a constant so it has zero variance and an expected value of, well, \(W(0)\).
\(\sigma B(t) \) is a constant multiple of standard Brownian motion (which we know has a \(N(0,t)\) distribution). Therefore \(\sigma B(t)\) has an expected value of zero and a variance of \(\sigma^2 t\) (are you happy about this bit?).
\(\mu(t)\) is a deterministic function in \(t\) so it has zero variance and an expected value of \(\mu(t)\).

This means that \(\log S(t)\) must be normally distributed, with mean \(W(0)+\mu(t)\) and variance \(\sigma^2 t\).

Can you help me understand this bit?

Therefore \(\sigma B(t)\) has an expected value of zero and a variance of \(\sigma^2 t\)
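The rule behind that step is that for a constant \(c\), \(E[cX]=cE[X]\) and \(Var(cX)=c^2 Var(X)\); applying it with \(X=B(t)\sim N(0,t)\) gives mean \(0\) and variance \(\sigma^2 t\). A short simulation (the values of \(\sigma\) and \(t\) here are arbitrary illustrative choices) confirms this:

```python
import random
import statistics

random.seed(4)

sigma, t = 0.8, 3.0
n_paths = 30000

# B(t) ~ N(0, t); multiplying by the constant sigma leaves the mean
# at zero and scales the variance by sigma^2, since Var(cX) = c^2 Var(X).
scaled = [sigma * random.gauss(0.0, t ** 0.5) for _ in range(n_paths)]

print(round(statistics.fmean(scaled), 1))     # close to 0
print(round(statistics.variance(scaled), 1))  # close to sigma^2 * t = 1.92
```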
 