#6 — MaciekLeks, 05-06-2016, 01:40 AM
Re: Help in understanding proof for VC-dimension of perceptron.

Quote:
Originally Posted by ntvy95
Here is my understanding:

For any data set of size d + 2 we must have linear dependence, and the question is: given this inevitable linear dependence, can we find at least one specific data set on which all 2^N dichotomies (with N = d + 2) can be implemented? The video lecture shows that for every data set of size d + 2 there are some dichotomies (specific to that data set) that the perceptron cannot implement, hence no data set of size d + 2 can be shattered by the perceptron hypothesis set.
I agree. I tried to express this in the Context part of my post.
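
To fix the notation (my own restatement, assuming the lecture's convention that the N = d + 2 points include the x_{0} = 1 coordinate and therefore live in R^{d+1}): any d + 2 vectors in R^{d+1} are linearly dependent, so some point x_{j} can be written in terms of the others,

x_{j} = \sum_{i \neq j} a_{i} x_{i},

with the coefficients a_{i} not all zero.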

Quote:
Originally Posted by ntvy95
The proof considers dichotomies (specific to the data set) with the following two properties:
- Each x_{i} with non-zero a_{i} gets y_{i} = sign(a_{i}).
- x_{j} gets y_{j} = -1.

Hope this helps.
Unfortunately it does not help. I understand the assumption itself, but I do not understand the source of confidence that we may correlate the a_{i} with the perceptron outputs y_{i}.
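
For reference, here is the contradiction step as I reconstruct it (a sketch of my own reading, not necessarily the lecture's exact wording): suppose a perceptron with weights w implemented such a dichotomy. Then sign(w^{T} x_{i}) = y_{i} = sign(a_{i}) for every non-zero a_{i}, so each term a_{i} (w^{T} x_{i}) is positive, and

w^{T} x_{j} = \sum_{i \neq j} a_{i} (w^{T} x_{i}) > 0,

which forces y_{j} = sign(w^{T} x_{j}) = +1, contradicting y_{j} = -1. What I still do not see is why we are allowed to set the target labels to y_{i} = sign(a_{i}) in the first place.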