  #6  
Old 04-11-2013, 12:49 AM
Rahul Sinha is offline
Junior Member
 
Join Date: Apr 2013
Posts: 9
Default Re: Lecture 3 Q&A independence of parameter inputs

Quote:
Originally Posted by Elroch
Moobb, having rewatched the Q&A, my understanding is this. The independence that is important is that the input points are independently selected. Intuitively, they are a representative sample, rather than one which gives disproportionate importance to some region of the input space.

With regard to the features, these are a generalisation of co-ordinates which are used to describe the input data points (e.g. the value of a moving average is a feature which can be thought of as a co-ordinate, even though it is defined in terms of many co-ordinates). The independence that is preserved after a transformation is the independence between the data points, not the features: the set of points remains a representative sample of the (transformed) space of possible inputs.
Awesome explanation.

To add an example: consider a Gaussian distribution in 2D with a non-diagonal covariance matrix. The features (read: the axes) are clearly correlated, i.e. not independent. Now perform a change of co-ordinates so that the eigenvector directions of the covariance matrix become the new axes. No information is lost in the transformation (the space did not shrink or expand!), but we now have independent, orthonormal co-ordinates. As pointed out, what is preserved is the "independence between the data points, not the features".
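The example above can be sketched numerically. This is a minimal illustration (assuming numpy; the specific covariance matrix and sample size are arbitrary choices, not from the original post): we sample correlated 2D Gaussian data, rotate it into the eigenvector basis of its covariance, and check that the features become uncorrelated while the data points are merely rotated.

```python
import numpy as np

rng = np.random.default_rng(0)

# 2D Gaussian with a non-diagonal covariance matrix: the two
# features (axes) are correlated.
cov = np.array([[2.0, 1.2],
                [1.2, 1.0]])
X = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=10_000)

# Eigendecomposition of the covariance; columns of V are the
# (orthonormal) eigenvector directions.
eigvals, V = np.linalg.eigh(cov)

# Change of co-ordinate system: express each point in the
# eigenvector basis. This is a rotation, so no information is lost.
Y = X @ V

# In the new co-ordinates the empirical covariance is (nearly)
# diagonal: the features are now uncorrelated, while the set of
# points is the same representative sample, just rotated.
print(np.round(np.cov(Y.T), 2))
```

The rotation leaves distances between data points unchanged, which is one way to see that the "independence between the data points" is untouched by the transformation.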