#9, 05-06-2016, 09:08 AM
ntvy95 (Member; joined Jan 2016; 37 posts)

Re: Help in understanding proof for VC-dimension of perceptron.

Quote:
Originally Posted by MaciekLeks
That's the point. This part is crucial: "the proof lets the choice of the value of y_{i} depend on the value of a_{i}". The only certain relation is y_{i} = sign(\mathbf{w}^{T}\mathbf{x}_{i}). How do we know (from what definition, theorem, lemma, ...) that sign(a_{i}) behaves the same as sign(\mathbf{w}^{T}\mathbf{x}_{i})? IMHO it cannot be drawn from \mathbf{x}_{d+2} = \sum_{i=1}^{d+1} a_{i}\mathbf{x}_{i} (where not all a_{i} coefficients are zero).
I'm not sure if this is the point you are talking about, but here is my understanding:

If the perceptron hypothesis set can shatter the data set, then for every dichotomy of the points there exists a w that realizes it. In particular, if we choose the labels y_{i} = sign(a_{i}) for every x_{i} with a_{i} \neq 0, then shattering guarantees a w satisfying sign(w^{T}x_{i}) = sign(a_{i}) for every x_{i} with a_{i} \neq 0. In other words, we first fix y_{i} = sign(a_{i}) for every x_{i} with a_{i} \neq 0, and then, if the perceptron hypothesis set can shatter the data set, we must be able to find a w satisfying y_{i} = sign(a_{i}) = sign(w^{T}x_{i}). So sign(a_{i}) = sign(w^{T}x_{i}) is not derived from any separate theorem or lemma; it is forced by the dichotomy we chose, together with the shattering assumption.
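To see why this particular choice of labels matters, note that the linear dependence then pins down the last point: for any such w, w^{T}x_{d+2} = \sum_{i=1}^{d+1} a_{i}(w^{T}x_{i}) > 0, since each nonzero term a_{i}(w^{T}x_{i}) has the sign of a_{i}^{2}. Below is a minimal numerical sketch of this step (my own illustration, not from the book or this thread): it assumes random points with d = 3, and it picks one convenient w with w^{T}x_{i} = a_{i}, which trivially satisfies sign(w^{T}x_{i}) = sign(a_{i}); the proof itself only assumes such a w exists.

Code:
import numpy as np

rng = np.random.default_rng(0)
d = 3                                      # assumed dimension for the demo
X = rng.standard_normal((d + 2, d + 1))    # rows are x_1, ..., x_{d+2} in R^{d+1}

# Linear dependence: with d+2 vectors in R^{d+1}, x_{d+2} is (generically)
# a combination of the others; solve x_{d+2} = sum_i a_i x_i for a.
a = np.linalg.solve(X[:-1].T, X[-1])

# One w consistent with the chosen labels y_i = sign(a_i): pick w with
# w^T x_i = a_i exactly, so sign(w^T x_i) = sign(a_i) wherever a_i != 0.
w = np.linalg.solve(X[:-1], a)

# The dependence forces the last point's class:
# w^T x_{d+2} = sum_i a_i (w^T x_i) = sum_i a_i^2 > 0.
print("a_i * (w^T x_i):", a * (X[:-1] @ w))   # each term equals a_i^2 >= 0
print("w^T x_{d+2}    :", float(w @ X[-1]))   # strictly positive

Running it, each product a_{i}(w^{T}x_{i}) equals a_{i}^{2}, so w^{T}x_{d+2} comes out strictly positive: no perceptron consistent with y_{i} = sign(a_{i}) can label x_{d+2} as -1, which is exactly the dichotomy the proof uses to conclude that no d+2 points can be shattered.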