#7, 05-06-2016, 02:42 AM
ntvy95 (Member; Join Date: Jan 2016; Posts: 37)

Re: Help in understanding proof for VC-dimension of perceptron.

Quote:
Originally Posted by MaciekLeks
Yes, I tried to write it in the Context part of my post.

Unfortunately it does not help. I understand the assumption, but I do not understand the source of confidence that we can correlate a_{i} with the perceptron outputs.
I'm not sure I understand your point correctly:

If the perceptron hypothesis set can shatter the data set, then for the same x_{i} there exist w' and w'' such that sign(w'x_{i}) = 1 and sign(w''x_{i}) = -1. In other words, if the data set is shattered by the perceptron hypothesis set, then for any x_{i} in the data set we may choose either y_{i} = 1 or y_{i} = -1 and be sure that at least one hypothesis in the perceptron hypothesis set implements the case we have chosen.
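To make the shattering idea concrete, here is a small sketch of my own (the specific points and weight vectors are my choices, not from the proof): a 1D perceptron h(x) = sign(w0 + w1*x) shatters the two points x = 0 and x = 1, because every one of the four dichotomies is implemented by some weight vector.

```python
# Illustration (my own example, not from the proof): the 1D perceptron
# h(x) = sign(w0 + w1*x) shatters the two points x = 0 and x = 1,
# i.e. every dichotomy (y1, y2) in {-1, +1}^2 is realized by some (w0, w1).

def sign(v):
    return 1 if v > 0 else -1  # v = 0 never occurs for the weights below

points = [0.0, 1.0]

# One witness weight vector (w0, w1) per dichotomy, found by hand.
witnesses = {
    ( 1,  1): ( 1.0,  0.0),
    ( 1, -1): ( 0.5, -1.0),
    (-1,  1): (-0.5,  1.0),
    (-1, -1): (-1.0,  0.0),
}

for dichotomy, (w0, w1) in witnesses.items():
    realized = tuple(sign(w0 + w1 * x) for x in points)
    assert realized == dichotomy

print("all 4 dichotomies realized: the two points are shattered")
```

This is exactly the "for any x_{i} we can choose y_{i} = 1 or y_{i} = -1" property: whichever label we pick for each point, some hypothesis realizes it.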

Instead of choosing y_{i} = 1 or y_{i} = -1 explicitly, the proof lets the choice of y_{i} depend on the value of a_{i}: y_{i} = sign(a_{i}). Whatever the real value of a_{i} is (the proof only considers the nonzero a_{i}), sign(a_{i}) is either 1 or -1, hence y_{i} = 1 or y_{i} = -1. In my understanding, this dependence does not make the chosen dichotomy invalid.
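Here is a sketch of how that dichotomy choice plays out in the smallest case (the numbers are my own illustration, not from the book): with d = 1, the augmented inputs live in R^2, so any d + 2 = 3 points are linearly dependent, and the dichotomy y_{i} = sign(a_{i}) (with y_{j} = -1 for the dependent point) turns out to be unrealizable by any perceptron. The grid search below is a finite sanity check, not a proof.

```python
# Illustration (my own numbers): take d = 1 and the points x = 0, 1, 2.
# The augmented inputs are z1 = (1,0), z2 = (1,1), z3 = (1,2), and
#   z3 = a1*z1 + a2*z2   with   a1 = -1, a2 = 2.
# The proof's dichotomy is y_i = sign(a_i) for the independent points and
# y_j = -1 for the dependent one:
#   (y1, y2, y3) = (-1, +1, -1).
# On a line, a + can never sit strictly between two -'s, so no perceptron
# sign(w0 + w1*x) realizes it. The grid check below is a sanity check
# over finitely many weights, not a proof.

def sign(v):
    return 1 if v > 0 else -1

points = [0.0, 1.0, 2.0]
target = (-1, 1, -1)  # y_i = sign(a_i), y_j = -1

found = False
steps = [k / 10.0 for k in range(-50, 51)]  # crude grid over (w0, w1)
for w0 in steps:
    for w1 in steps:
        if tuple(sign(w0 + w1 * x) for x in points) == target:
            found = True

print("dichotomy (-1, +1, -1) realized on the grid:", found)  # False
```

That is the role of y_{i} = sign(a_{i}) in the proof: it is one specific dichotomy, deliberately constructed from the dependence coefficients so that no perceptron can implement it, which bounds the VC dimension from above.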