
Time series - Box-Jenkins - Estimation


Aravind Jayaraman

Member
While computing the variance of the sample white noise terms, why are we using the formula
1/n * [E(X^2) - n * xbar^2] instead of the general formula 1/(n-1) * [E(X^2) - n * xbar^2]?

Any specific reason?
 
Hello

Just to note, I think there is a bit of inconsistency in your notation: you're switching between expectations of random variables and sample quantities in the same expression.

There are various ways to estimate parameters. For example, if we take a sample from the N(mu, sig^2) distribution then two common methods of estimation for sig^2 are:

Method of moments

There are actually two approaches we could take here:

1. Set sample mean = theoretical mean and sample variance (using the usual n-1 denominator) = population variance

2. Set sample mean = theoretical mean and sum(i = 1, n)[xi^2] / n = E[X^2]

The second option is the way that the method of moments is described in CS2 in the context of Chapter 15; however, both of these approaches for the method of moments are mentioned in CS1.

Using approach 1, we get an estimate of 1/(n-1) * sum(i = 1, n)[(xi - xbar)^2] for sig^2

Using approach 2, we get an estimate of 1/n * sum(i = 1, n)[(xi - xbar)^2] for sig^2, ie in this case we are setting what we might call the n-denominator sample variance equal to the population variance.

Approach 2 leads to a biased estimator; however, this bias tends to 0 as the sample size increases.
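To make the difference between the two approaches concrete, here's a small sketch in Python (the sample size and parameter values below are purely illustrative assumptions, not from the Core Reading):

```python
import random

# Illustrative only: draw a sample from N(mu = 5, sig^2 = 4) and compare
# the two method-of-moments estimates of sig^2 described above.
random.seed(42)
mu, sigma = 5.0, 2.0
n = 50
xs = [random.gauss(mu, sigma) for _ in range(n)]
xbar = sum(xs) / n

# Approach 1: the usual (n-1)-denominator sample variance
s2_approach1 = sum((x - xbar) ** 2 for x in xs) / (n - 1)

# Approach 2: the n-denominator version, from matching sum(xi^2)/n to E[X^2]
s2_approach2 = sum((x - xbar) ** 2 for x in xs) / n

# The two estimates differ only by a factor of (n-1)/n, which tends to 1.
print(s2_approach1, s2_approach2, s2_approach2 / s2_approach1)  # ratio is (n-1)/n = 0.98 here
```

The ratio between the two estimates is exactly (n-1)/n, so with n = 50 they already differ by only 2%.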

Maximum likelihood

The maximum likelihood estimate of sig^2 is 1/n * sum(i = 1, n)[(xi - xbar)^2]. There are various reasons why we generally prefer ML estimators. For example, the asymptotic distribution of the ML estimator here is normal. Even though the estimator is biased, it is asymptotically unbiased.
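As a quick (illustrative, not examinable) check of the asymptotic unbiasedness, the Monte Carlo sketch below averages the n-denominator ML estimate over many samples; the helper `ml_estimate`, the true sig^2 = 4 and the sample sizes are all assumptions made for the demo:

```python
import random

# Illustrative only: the ML estimate of sig^2 has mean (n-1)/n * sig^2,
# so its bias, -sig^2 / n, vanishes as the sample size n grows.
random.seed(1)
sigma2 = 4.0  # assumed true value of sig^2

def ml_estimate(n):
    """n-denominator (ML) estimate of sig^2 from a N(0, sig^2) sample."""
    xs = [random.gauss(0.0, sigma2 ** 0.5) for _ in range(n)]
    xbar = sum(xs) / n
    return sum((x - xbar) ** 2 for x in xs) / n

results = {}
for n in (5, 50, 500):
    # average the estimate over 5,000 independent samples of size n
    results[n] = sum(ml_estimate(n) for _ in range(5_000)) / 5_000
    print(n, results[n])  # averages approach (n-1)/n * 4, ie the bias shrinks
```

For n = 5 the average sits near 3.2 (a 20% understatement of sig^2 = 4), while for n = 500 it is already within about 0.2% of the true value.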

Time series

Although not quite the same situation, we can apply similar ideas to the estimation of sig^2 in the case of a time series model.

Note that in the formula given in Chapter 14, the number of squared error terms being summed is n - p. So, we could consider an estimate which uses a factor of 1 / (n-p) rather than 1/n. However, again, if n is sufficiently large, then the difference between using n or n-p will be small, even if p is quite large.
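Although the Chapter 14 formula covers the general AR(p) case, the same point can be sketched with a zero-mean AR(1) fitted by conditional least squares (a simplification of the Core Reading setup; all parameter values below are illustrative assumptions):

```python
import random

# Illustrative only: simulate a zero-mean AR(1), X_t = alpha * X_{t-1} + e_t,
# fit alpha by conditional least squares, then estimate sig^2 from the
# n - p = n - 1 squared residuals using both the 1/n and 1/(n-p) factors.
random.seed(0)
n, alpha, sigma = 2000, 0.6, 1.0  # assumed sample size and AR(1) parameters
x = [random.gauss(0.0, sigma)]
for _ in range(n - 1):
    x.append(alpha * x[-1] + random.gauss(0.0, sigma))

# Least-squares estimate of alpha from the pairs (x_{t-1}, x_t)
alpha_hat = (sum(x[t - 1] * x[t] for t in range(1, n))
             / sum(x[t - 1] ** 2 for t in range(1, n)))

# Sum of the n - 1 squared residuals e_t = x_t - alpha_hat * x_{t-1}
ss = sum((x[t] - alpha_hat * x[t - 1]) ** 2 for t in range(1, n))
print(ss / n, ss / (n - 1))  # the two estimates are very close for large n
```

With n = 2000 and p = 1, the two factors differ by ss/(n(n-1)), which is negligible next to the sampling variability of the estimate itself.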

Hope this helps!

Andy
 