#1
In Lecture 14, the Professor mentions that only the support vectors count towards w (the rest have alpha = 0), which leads to a decrease in the number of features and thus better generalization.
I'm not sure I got this point, because I thought the VC dimension for w would be equal to d, the number of dimensions of the space, regardless of the number of points being summed. Aren't we just summing the various x_n vectors, each multiplied by alpha_n * y_n? How does this decrease the number of features of w? Thank you!
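For reference, the weight vector from the dual solution is w = sum over support vectors of alpha_n * y_n * x_n, so only points with alpha_n > 0 enter the sum. Here is a minimal sketch of that sum, assuming scikit-learn's SVC with a linear kernel (the library and the toy data are my own illustration, not part of the lecture):

```python
import numpy as np
from sklearn.svm import SVC

# Toy two-class data (illustrative only)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 1.0, (20, 2)), rng.normal(2.0, 1.0, (20, 2))])
y = np.array([-1] * 20 + [+1] * 20)

# A large C approximates the hard-margin, separable case
clf = SVC(kernel="linear", C=1e6).fit(X, y)

# dual_coef_ holds alpha_n * y_n for the support vectors only,
# so w = sum_{alpha_n > 0} alpha_n * y_n * x_n
w = clf.dual_coef_ @ clf.support_vectors_

print("number of support vectors:", len(clf.support_vectors_))
print("w from the dual sum:   ", w.ravel())
print("w from sklearn's coef_:", clf.coef_.ravel())  # should match
```

The vector w itself still lives in d dimensions however many terms are summed; what the lecture counts is how many of those terms are actually nonzero, i.e. the number of support vectors.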
#2
__________________
Where everyone thinks alike, no one thinks very much
#3
I spotted a less sophisticated way of thinking about it which seems helpful to me.
If you merely assume that general points associated with a particular target (say +1) are more likely to be near sample points with that same target than near sample points with the opposite target, you already have an intuitive reason to expect the learned rule to generalize. This ties in quite intuitively with the idea of distances from support vectors (or some sort of transformed distance if kernels are used) being the basis of the hypothesis.
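For concreteness, the final hypothesis from the lecture's dual formulation can be written entirely in terms of the support vectors (this is the standard kernel form, added here as a reminder rather than something from Elroch's post):

g(\mathbf{x}) = \mathrm{sign}\!\left( \sum_{\alpha_n > 0} \alpha_n\, y_n\, K(\mathbf{x}_n, \mathbf{x}) + b \right)

so the prediction at a new point x depends only on the kernel "similarities" K(x_n, x) between x and the support vectors.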
#4
Thank You Professor and Elroch for the answers! Clears things up.
Tags
doubt, lecture 14, support vector machines