Hi, quick question: the Core Reading for Time Series (Chapter 13, pg 37 in the 2019 version) says: "In many circumstances an autoregressive model is more convenient than a moving average model." Why is this the case, and in what circumstances would it apply? Is it because an autoregressive process only has one white noise/error term and will always be invertible (which is good for statistical packages)? Or is it because it is more common for a time series to have the features of an autoregressive process, i.e. an ACF which decays geometrically and a PACF which cuts off for k > p? Thanks in advance!
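(For anyone who wants to see the "fingerprint" described above numerically, here is a quick Python sketch. It is not from the Core Reading, and the coefficients are made up: it simulates an AR(2) and shows the sample ACF decaying roughly geometrically while the sample PACF is near zero for lags k > p = 2.)

```python
# Minimal sketch (assumed AR(2) coefficients, not from the Core Reading):
# the sample ACF of an AR process decays geometrically, while the sample
# PACF cuts off after lag p.
import numpy as np
from statsmodels.tsa.stattools import acf, pacf

rng = np.random.default_rng(0)
n, alpha1, alpha2 = 10_000, 0.6, 0.2          # stationary AR(2) coefficients
x = np.zeros(n)
e = rng.standard_normal(n)
for t in range(2, n):
    x[t] = alpha1 * x[t - 1] + alpha2 * x[t - 2] + e[t]

print("ACF :", np.round(acf(x, nlags=6), 3))   # decays roughly geometrically
print("PACF:", np.round(pacf(x, nlags=6), 3))  # spikes at lags 1-2, then ~0
```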
I believe, though I could be wrong, that it is more convenient because an AR model depends on past observable values, whereas an MA model doesn't have that explicit connection - it is written in terms of past error terms, which are not directly observed.
Thanks John - why does a dependence on past values make it 'convenient'? Because it's easier to calculate and update over time? Appreciate you didn't write it, and I suppose it's not so relevant for exams (so less urgent than other people's questions).
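(A rough Python sketch of this point, with made-up parameters rather than anything from the notes: the one-step forecast of a zero-mean AR(1) is read straight off the last observation, whereas for an MA(1) you first have to back out the unobserved errors recursively, which is only legitimate when the process is invertible.)

```python
# Sketch (assumed parameters, not from the Core Reading): one-step forecasts
# for a zero-mean AR(1) vs an MA(1).
import numpy as np

rng = np.random.default_rng(1)
n, alpha, beta = 500, 0.7, 0.7
e = rng.standard_normal(n)

# AR(1): X_t = alpha * X_{t-1} + e_t  -> forecast uses the observation directly
x_ar = np.zeros(n)
for t in range(1, n):
    x_ar[t] = alpha * x_ar[t - 1] + e[t]
forecast_ar = alpha * x_ar[-1]                 # one line; updating each period
                                               # needs only the newest X value

# MA(1): X_t = e_t + beta * e_{t-1}  -> e_t is unobserved, so reconstruct it
x_ma = e + beta * np.concatenate(([0.0], e[:-1]))
e_hat = np.zeros(n)
for t in range(1, n):                          # invert: e_t = X_t - beta*e_{t-1}
    e_hat[t] = x_ma[t] - beta * e_hat[t - 1]   # needs |beta| < 1 (invertibility)
forecast_ma = beta * e_hat[-1]

print(forecast_ar, forecast_ma)
```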
Hi, for the autoregressive model, can you please mention what mu stands for? Also, for an AR(p) process, as per the proof of Result 13.2 on page 29, the result for the autocovariance function is shown for k >= p. Is there any reason for this condition? Thanks. Sunil
Hi Sunil

If we have an AR(p) process written as:

\( X_t = \mu + \alpha_1 (X_{t-1} - \mu) + \dots + \alpha_p (X_{t-p} - \mu) + e_t \)

then \( \mu \) is the mean of the process.

Regarding page 29: although technically the result holds true for k < p as well, the reason we consider k >= p is to get the structure of a p-th order difference equation, i.e. of the form:

\( y_k = a_1 y_{k-1} + a_2 y_{k-2} + \dots + a_p y_{k-p} \)

For k >= p every index on the right-hand side is non-negative, so the recursion involves only genuine autocovariances and can be solved directly.

Hope this helps!

Andy
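(If it helps to see Result 13.2 numerically, here is a quick Python sketch - my own, with made-up coefficients - checking that the sample autocovariances of an AR(2) satisfy the difference equation \( \gamma_k = \alpha_1 \gamma_{k-1} + \alpha_2 \gamma_{k-2} \) for k >= p = 2.)

```python
# Sketch: verify numerically that the autocovariance function of an AR(2)
# satisfies gamma_k = alpha1*gamma_{k-1} + alpha2*gamma_{k-2} for k >= p = 2.
# Coefficients are assumptions for illustration, not from the Core Reading.
import numpy as np

rng = np.random.default_rng(2)
n, alpha1, alpha2, mu = 100_000, 0.5, 0.3, 10.0
x = np.full(n, mu)
e = rng.standard_normal(n)
for t in range(2, n):
    x[t] = mu + alpha1 * (x[t - 1] - mu) + alpha2 * (x[t - 2] - mu) + e[t]

def gamma(k):
    """Sample autocovariance of x at lag k."""
    d = x - x.mean()
    return np.mean(d[k:] * d[: n - k]) if k > 0 else np.mean(d * d)

for k in range(2, 7):                           # the recursion holds for k >= p
    lhs = gamma(k)
    rhs = alpha1 * gamma(k - 1) + alpha2 * gamma(k - 2)
    print(f"k={k}: gamma_k = {lhs:.4f}, recursion gives {rhs:.4f}")
```

(For k = 1 the Yule-Walker equation still holds, but only by using the symmetry \( \gamma_{-1} = \gamma_1 \), which is why the clean difference-equation structure only appears for k >= p.)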