
Time Series - Chapter 13: "Autoregressive Model more convenient than Moving Average"

Bill SD

Hi,
Quick question: The Core Reading for Time Series (Chapter 13, page 37 in the 2019 version) says: "In many circumstances an autoregressive model is more convenient than a moving average model."

Why is this the case and in what circumstances would it apply? Is it because an autoregressive process only has one white noise/error term and will always be invertible (which is good for statistical packages)? Or is it because it is more common for a time series to have the features of an autoregressive process, i.e. an ACF which decays geometrically and a PACF which cuts off for k > p?
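
For reference, here's a quick numerical sketch of the AR features I mean (my own illustration, not from the Core Reading; assumes Python with numpy and statsmodels available):

    # Simulate a stationary AR(2) and inspect the sample ACF/PACF.
    import numpy as np
    from statsmodels.tsa.stattools import acf, pacf

    rng = np.random.default_rng(0)
    a1, a2 = 0.6, 0.3                 # stationary AR(2) coefficients (my choice)
    n = 5000
    x = np.zeros(n)
    e = rng.standard_normal(n)
    for t in range(2, n):
        x[t] = a1 * x[t-1] + a2 * x[t-2] + e[t]

    print(np.round(acf(x, nlags=6), 2))   # tails off roughly geometrically
    print(np.round(pacf(x, nlags=6), 2))  # close to zero beyond lag p = 2

On a long simulated path the sample ACF tails off roughly geometrically, while the sample PACF is close to zero beyond lag p = 2.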

Thanks in advance!
 
I believe, though I could be wrong, that it is more convenient because it depends on past observable values - whereas the MA doesn't have that explicit connection.
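
A rough sketch of what I mean (Python; the toy parameters and helper functions are just my own illustration, not from the Core Reading): with an AR(1), the one-step forecast is read straight off the last observed value, whereas with an invertible MA(1) you first have to reconstruct the unobserved error terms from the whole observed history.

    import numpy as np

    def ar1_forecast(x, alpha):
        # AR(1): the forecast alpha * X_t uses only the last observed value.
        return alpha * x[-1]

    def ma1_forecast(x, beta):
        # MA(1): X_t = e_t + beta * e_{t-1}. The errors e_t are unobserved,
        # so recover them recursively from the whole history; this needs
        # |beta| < 1 (invertibility) for the recursion to be stable.
        e = 0.0
        for xt in x:
            e = xt - beta * e
        return beta * e

    rng = np.random.default_rng(1)
    e = rng.standard_normal(200)
    x_ar = np.zeros(200)
    for t in range(1, 200):
        x_ar[t] = 0.7 * x_ar[t-1] + e[t]
    x_ma = e + 0.5 * np.concatenate(([0.0], e[:-1]))

    print(ar1_forecast(x_ar, 0.7))  # one multiplication
    print(ma1_forecast(x_ma, 0.5))  # a full pass through the series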
 
Thanks John - why does a dependence on past values make it 'convenient'? Is it because it's easier to calculate and update forecasts over time?

Appreciate you didn't write it, and I suppose it's not so relevant for exams (so less urgent than other people's questions).
 
Hi,
For the autoregressive model, can you please explain what \( \mu \) stands for?
Also, for AR(p), in the proof of Result 13.2 on page 29, the result for the autocovariance function is shown for \( k \ge p \). Is there any reason for this condition?
Thanks.
Sunil
 
Hi Sunil

If we have an AR(p) process written as:

\( X_t = \mu + \alpha_1 (X_{t-1} - \mu) + \dots + \alpha_p (X_{t-p} - \mu) + e_t \)

Then \( \mu \) is the mean of the process.
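
To see why (a quick check rather than a quote from the Core Reading): take expectations of both sides. By stationarity \( E[X_t] = m \), say, for all \( t \), so:

\( m = \mu + (\alpha_1 + \dots + \alpha_p)(m - \mu) \)

which rearranges to \( (m - \mu)(1 - \alpha_1 - \dots - \alpha_p) = 0 \). For a stationary AR(p) we have \( \alpha_1 + \dots + \alpha_p \neq 1 \), so \( m = \mu \).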

Regarding page 29: although technically the result also holds for k < p, the reason we consider k >= p is to get the structure of a p-th order difference equation, i.e. of the form:

\( y_p = a_1 y_{p-1} + a_2 y_{p-2} + \dots + a_p y_0 \)
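
For completeness, here is my sketch of the step behind Result 13.2 (paraphrasing rather than quoting the Core Reading). Multiply the AR(p) equation through by \( X_{t-k} - \mu \) and take expectations; since \( e_t \) is uncorrelated with earlier values of the process, the error term vanishes for \( k \ge 1 \), leaving:

\( \gamma_k = \alpha_1 \gamma_{k-1} + \alpha_2 \gamma_{k-2} + \dots + \alpha_p \gamma_{k-p} \)

Once \( k \ge p \), all the lags on the right-hand side are non-negative, so this reads as a standard p-th order linear difference equation in \( \gamma_k \), which can then be solved via its characteristic equation.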

Hope this helps!

Andy
 
Thanks Andy.

Sunil
 