Hi,
Quick question: The Core Reading for Time Series (Chapter 13, pg 37 in 2019 version) says: "In many circumstances an autoregressive model is more convenient than a moving average model."
Why is this the case, and in what circumstances would it apply? Is it because an autoregressive process only has one white noise / error term and will always be invertible (which is good for statistical packages)? Or is it because it is more common for a time series to have the features of an autoregressive process, i.e. an ACF which decays geometrically and a PACF which cuts off for k > p?
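To make that second possibility concrete, here is a rough simulation sketch (the AR(2) coefficients are chosen arbitrarily for illustration, and it assumes statsmodels is available) showing the geometric ACF decay and the PACF cut-off I have in mind:

```python
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess
from statsmodels.tsa.stattools import acf, pacf

# Illustrative AR(2): X_t = 0.6*X_{t-1} + 0.2*X_{t-2} + e_t
# (signs of the AR coefficients follow statsmodels' lag-polynomial convention)
ar = np.array([1, -0.6, -0.2])
ma = np.array([1])
process = ArmaProcess(ar, ma)

np.random.seed(0)
x = process.generate_sample(nsample=5000)

print("ACF :", np.round(acf(x, nlags=6), 3))    # decays roughly geometrically
print("PACF:", np.round(pacf(x, nlags=6), 3))   # approximately zero for lags k > 2
```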
Thanks in advance!