  #2  
Old 04-10-2013, 03:34 PM
Elroch
Invited Guest
 
Join Date: Mar 2013
Posts: 143
Default Re: Lecture 3 Q&A independence of parameter inputs

Quote:
Originally Posted by Moobb
There is a discussion about the importance of having independent input data and how this propagates to features. Is it true that features necessarily inherit independence from the data? If they don't, how bad is that? For example, in finance there are quite a few studies applying support vector machines to a grid defined by different moving averages (1w, 1m, etc.), which overlap. In this case the features are clearly not independent. Would this be seen as a questionable procedure?
Could you be more precise about which place in the book or lectures you are referring to regarding independence?

With regard to the choice of features for representing financial data, it is not difficult to remove the more obvious dependencies, but it is not clear that this is crucial. As an analogy, suppose you use the vectors (1,0) and (1,1) as a basis for the plane. There is clearly a correlation between these two axes in your sense, but a simple linear transformation to the basis (1,0) and (0,1) removes it. If you use kernels, you are already permitting many transformations of this type, among others. The same holds for moving averages: you can replace them with carefully chosen differences between them if you wish, but this may not be crucial.
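A minimal sketch of the point about moving averages, on made-up data. The price series, window lengths, and the "spread" re-parameterisation are all illustrative assumptions, not part of any study mentioned above; the sketch only shows that a linear change of features can remove an obvious dependency without discarding information.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical price series (a random walk) -- purely illustrative.
prices = np.cumsum(rng.normal(size=1000)) + 100.0

def moving_average(x, w):
    """Trailing moving average with window length w (valid region only)."""
    return np.convolve(x, np.ones(w) / w, mode="valid")

# Two overlapping moving averages: the short window is contained in the
# long one, so the two features are strongly correlated.
n = 900
ma_short = moving_average(prices, 5)[-n:]
ma_long = moving_average(prices, 20)[-n:]
corr_raw = np.corrcoef(ma_short, ma_long)[0, 1]

# A simple linear re-parameterisation, analogous to changing the basis
# from {(1,0), (1,1)} to {(1,0), (0,1)}: keep the long average and
# replace the short one with the spread (short minus long). The pair
# (spread, ma_long) carries the same information as (ma_short, ma_long),
# since ma_short = spread + ma_long, but the obvious overlap is gone.
spread = ma_short - ma_long
corr_transformed = np.corrcoef(spread, ma_long)[0, 1]

print(f"corr(ma_short, ma_long) = {corr_raw:.3f}")
print(f"corr(spread,   ma_long) = {corr_transformed:.3f}")
```

Because the transformation is invertible, a linear model (or a kernel machine with a suitable kernel) can represent the same hypotheses in either parameterisation; the choice mainly affects how transparent the dependency is, not what can be learned.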