Why do we require the moving average process to be invertible and try to write the white noise process in terms of the X process?
Invertibility is important for computer-fitted models because it allows us to retrospectively calculate the residuals from the observed data values. This in turn lets us check whether the residuals behave like white noise, and thus whether the model is a good approximation to the data. This is the third step of the Box-Jenkins method given in Chapter 13.
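To make this concrete, here is a minimal sketch (names and the start-up value are my own assumptions, not from the text) of recovering residuals from an invertible MA(1), \(X_t = e_t + \beta e_{t-1}\) with \(|\beta| < 1\). Rearranging gives the recursion \(e_t = X_t - \beta e_{t-1}\), started from an assumed \(e_{-1} = 0\); because \(|\beta| < 1\), any start-up error decays geometrically.

```python
import numpy as np

# Assumed illustration: simulate an MA(1) and recover its residuals
# by inverting the defining equation, e_t = X_t - beta * e_{t-1}.
rng = np.random.default_rng(0)
beta, n = 0.5, 500

e = rng.standard_normal(n)        # true white noise (unknown in practice)
x = e.copy()
x[1:] += beta * e[:-1]            # observed MA(1) series, with e_{-1} = 0

e_hat = np.zeros(n)
e_hat[0] = x[0]                   # start-up assumption: e_{-1} = 0
for t in range(1, n):
    e_hat[t] = x[t] - beta * e_hat[t - 1]

# The recovered residuals can now be tested for white-noise behaviour.
print(np.max(np.abs(e_hat - e)))
```

In a real fit \(\beta\) would itself be estimated, and the recovered `e_hat` would be fed into diagnostic checks (e.g. a correlogram) rather than compared to the true noise, which is of course unobservable.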
Alright. And am I correct in saying that we require invertibility in an autoregressive process for it to be stationary, but in a moving average process for it to be used in goodness-of-fit tests?
No, invertibility and stationarity are two separate properties; one does not imply the other. Stationarity is necessary to fit a model, and invertibility is necessary for goodness-of-fit tests.
But for an AR(1) process, it is given that in order to express the process \(X\) in terms of historical white noise terms, we require \(|\alpha| < 1\) so that \(1 - \alpha B\) is invertible, and so this seems to imply that if the process is invertible, then it is stationary too. Where am I going wrong in this interpretation?
Ah! Now I understand the issue. Stationarity is equivalent to being able to rewrite the time series as \(X_t = \) a (possibly infinite, convergent) series of white noise terms, and a convergent sum of white noise terms is itself stationary. Invertibility, however, is being able to rewrite the white noise as \(e_t = \) a (possibly infinite, convergent) series of past observations \(X_{t-j}\). In both cases you invert a formula, but they are different rearrangements, so one does not imply the other. For example, an MA(1) process \(X_t = e_t + \beta e_{t-1}\) is stationary for any \(\beta\), but invertible only when \(|\beta| < 1\).
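The two rearrangements can be checked side by side numerically. Below is a hedged sketch (the parameter values and truncation length are assumptions for illustration): the stationarity expansion \(X_t = \sum_{j\ge 0} \alpha^j e_{t-j}\) for an AR(1), which converges when \(|\alpha| < 1\), and the invertibility expansion \(e_t = \sum_{j\ge 0} (-\beta)^j X_{t-j}\) for an MA(1), which converges when \(|\beta| < 1\).

```python
import numpy as np

rng = np.random.default_rng(1)
n, alpha, beta, J = 2000, 0.6, 0.4, 50   # J = truncation length (assumed)
e = rng.standard_normal(n)

# Stationarity: AR(1) X_t = alpha * X_{t-1} + e_t, expanded as a
# convergent series of white noise terms (truncated at J terms).
x_ar = np.zeros(n)
for t in range(1, n):
    x_ar[t] = alpha * x_ar[t - 1] + e[t]
x_series = sum(alpha**j * e[n - 1 - j] for j in range(J))
print(abs(x_ar[-1] - x_series))          # tiny: the series converges

# Invertibility: MA(1) X_t = e_t + beta * e_{t-1}, with e_t expanded
# as a convergent series of past observations (truncated at J terms).
x_ma = e.copy()
x_ma[1:] += beta * e[:-1]
e_series = sum((-beta)**j * x_ma[n - 1 - j] for j in range(J))
print(abs(e[-1] - e_series))             # tiny: the series converges
```

Note the asymmetry: the first expansion writes \(X_t\) in terms of the noise, the second writes the noise in terms of \(X\); each convergence condition constrains a different coefficient.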