Re: Time Series method similarities
Dear Yaser,
Thanks very much for your response. I did take a look at e-Chapter 6 and the distinction between parametric and nonparametric models.
However, to clarify my question: I was also wondering about the overall relationship between the key components of learning theory and the techniques used in machine learning, on the one hand, and the more traditional methods of fitting polynomial models to data on the other.
Specifically, in the domain of time series analysis we fit a polynomial model to the series (e.g. ARIMA models), using the previous values of the series (X(t-1), X(t-2), X(t-3), …) for the AR component and the previous forecast errors (e(t-1), e(t-2), e(t-3), …) for the MA component. Once fitted, we use the model to forecast X(t+1), X(t+2), etc.
Therefore, we are just fitting (i.e. learning the parameters from previous examples) a model that is linear in its parameters, under the view that the time series values are correlated with their time-lagged predecessors, with a decay built in as we move away from the most recent values.
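To make the point concrete, here is a minimal sketch of my own (not from the course or any textbook) showing that fitting the AR component is just ordinary linear regression on lagged values; the coefficient values and noise level are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(2) process: X(t) = 0.6*X(t-1) - 0.3*X(t-2) + noise
true_coeffs = np.array([0.6, -0.3])
n = 2000
x = np.zeros(n)
for t in range(2, n):
    x[t] = true_coeffs[0] * x[t - 1] + true_coeffs[1] * x[t - 2] + rng.normal(scale=0.1)

# Build a design matrix whose columns are the lagged values X(t-1), X(t-2),
# then fit by least squares, exactly as for any linear model in ML.
p = 2
X = np.column_stack([x[p - 1 - k : n - 1 - k] for k in range(p)])
y = x[p:]
w, *_ = np.linalg.lstsq(X, y, rcond=None)

print(w)  # estimated AR coefficients, close to [0.6, -0.3]
```

So the "learning" step is the same least-squares machinery used for linear regression on any feature matrix; only the construction of the features (lagged values) is specific to time series.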
There are two main questions for me:
1) Given the above explicit assumptions about the nature of time series data, do more general models such as NNs, SVMs, and high-dimensional feature regression models have better generalization properties than traditional time series models?
2) Given the procedures for properly implementing machine learning techniques, such as the use of regularization to avoid overfitting, VC-dimension analysis for understanding the number of examples needed, and cross-validation for parameter selection and out-of-sample error estimation: don't these areas theoretically overlap with the methods used in fitting polynomial models in time series analysis?
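As an illustration of the overlap I have in mind in question 2, the same ridge regularization (weight decay) used in machine learning applies directly to an AR fit on lagged values. This is my own hypothetical sketch; the series, lag order p, and regularization strength lam are arbitrary choices (lam would normally be picked by cross-validation):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=500).cumsum()  # a random-walk series, purely for illustration

p = 5     # number of lags (a model-order choice, like model complexity in ML)
lam = 1.0 # regularization strength (would be selected by cross-validation)

# Lagged design matrix: columns are X(t-1), ..., X(t-p)
X = np.column_stack([x[p - 1 - k : len(x) - 1 - k] for k in range(p)])
y = x[p:]

# Ridge solution: w = (X^T X + lam*I)^{-1} X^T y
w = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
print(w.shape)
```

The shrinkage toward zero plays the same role as regularization in any other linear model: it trades a little bias for lower variance in the estimated lag coefficients.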
I am trying to extend what we have learnt in this course and understand the areas of theoretical and fundamental overlap, as well as the true differences, between these domains and methods.
Many thanks
