LFD Book Forum d-dimensional Perceptrons and break points (related to Q4 of homework)

#1
04-23-2013, 07:07 AM
 jlaurentum Member Join Date: Apr 2013 Location: Venezuela Posts: 41
d-dimensional Perceptrons and break points (related to Q4 of homework)

Hello:

In slide 9 of lecture 5 (minute 33:03), the Professor gives an example of 3 collinear points for which one dichotomy has no possible hypothesis. Still, "it doesn't bother us because we want the maximum bound of possible dichotomies", so k=3 is not considered a break point. My question is:

In a d-dimensional perceptron, it appears we would not consider a set of points lying in a (d-1)-dimensional hyperplane as candidates for producing an "impossible" dichotomy. Why? Is it because the probability of picking such a set of points from the input space (all lying in a (d-1)-dimensional subspace) is zero? (As in the case of picking 3 collinear points in a plane.)
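The collinear example can be checked mechanically. Below is a minimal Python sketch (the point coordinates and the small candidate weight grid are my own illustrative choices, not from the lecture): for collinear points, the middle perceptron signal is always the average of the outer two, so the +1, -1, +1 labeling is impossible for any weights, while a small brute force realizes the other 6 of the 8 dichotomies.

```python
from itertools import product

# Three collinear points on the x-axis (illustrative choice).
points = [(0, 0), (1, 0), (2, 0)]

def signal(w1, w2, b, x):
    return w1 * x[0] + w2 * x[1] + b

# For equally spaced collinear points, the middle signal is the average
# of the outer two, so labels (+1, -1, +1) can never be realized: a
# negative number cannot be the average of two positive numbers.
w1, w2, b = 0.7, -1.3, 0.4  # arbitrary weights
mid = signal(w1, w2, b, points[1])
avg = 0.5 * (signal(w1, w2, b, points[0]) + signal(w1, w2, b, points[2]))
assert abs(mid - avg) < 1e-12

# Brute force over a small weight grid; for these integer points the
# grid happens to realize every dichotomy that is realizable at all.
achieved = set()
for w1, w2, b in product((-1, 0, 1), (-1, 0, 1), (-1.5, -0.5, 0.5, 1.5)):
    signals = [signal(w1, w2, b, p) for p in points]
    if all(s != 0 for s in signals):
        achieved.add(tuple(1 if s > 0 else -1 for s in signals))

print(len(achieved))           # 6 of the 8 possible dichotomies
print((1, -1, 1) in achieved)  # False
```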
#2
04-23-2013, 08:13 AM
 IsidroHidalgo Member Join Date: Apr 2013 Location: Toledo (Spain) Posts: 28
Re: d-dimensional Perceptrons and break points (related to Q4 of homework)

No, the probability isn't zero. The point is that we are interested in the maximum number of points our hypothesis set can shatter. So you must look for a set of points that has the best chance of being shattered...
#3
04-23-2013, 09:48 AM
 Elroch Invited Guest Join Date: Mar 2013 Posts: 143
Re: d-dimensional Perceptrons and break points (related to Q4 of homework)

Quote:
 Originally Posted by jlaurentum Hello: In slide 9 of lecture 5 (minute 33:03), the Professor gives an example of 3 collinear points for which there can be no possible hypothesis. Still, "it doesn't bother us because we want the maximum bound of possible dichotomies", so k=3 is not considered as a breakpoint. My question is: In a d-dimensional perceptron, it appears we would not consider a set of points lying in a (d-1)-dimensional hyperplane as candidates for giving an "impossible" dichotomy. Why? Is it because the probability of picking such a set of points from the input space that all lie in a (d-1) dimensional space is zero? (As in the case of picking 3 collinear points in a plane).
It's worth observing that the set of d-dimensional perceptrons, restricted to a (d-1)-dimensional subspace, is simply the set of (d-1)-dimensional perceptrons on that subspace. Hence, the capability of the d-dimensional perceptrons restricted to the subspace is the same as that of the (d-1)-dimensional perceptrons.

It turns out that the power of the hypothesis set comprising perceptrons increases as the dimension of their domain increases. The three points are a good example. If collinear, they cannot be shattered, regardless of what dimension space they are in. If not collinear, they can always be shattered; this requires the domain to be at least 2-dimensional.
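The restriction argument above can be seen directly in the perceptron signal. A small sketch (the point coordinates are my own choice), assuming the points are confined to the z = 0 plane inside R^3: the third weight multiplies zero everywhere, so every 3-d perceptron agrees on these points with a 2-d perceptron.

```python
import random

# Points confined to the z = 0 plane inside R^3 (illustrative choice).
pts3 = [(0.3, -1.2, 0.0), (2.0, 0.5, 0.0), (-0.7, 1.1, 0.0)]

def sign(v):
    return 1 if v > 0 else -1

def h3(w, b, x):  # 3-d perceptron
    return sign(w[0] * x[0] + w[1] * x[1] + w[2] * x[2] + b)

def h2(w, b, x):  # 2-d perceptron that simply drops the third weight
    return sign(w[0] * x[0] + w[1] * x[1] + b)

# Since x[2] = 0 for every point, w[2] never contributes: the 3-d
# perceptron restricted to this plane produces exactly the dichotomies
# of a 2-d perceptron.
random.seed(0)
for _ in range(1000):
    w = [random.uniform(-5, 5) for _ in range(3)]
    b = random.uniform(-5, 5)
    assert [h3(w, b, p) for p in pts3] == [h2(w, b, p) for p in pts3]
print("3-d perceptrons restricted to z = 0 match 2-d perceptrons")
```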
#4
04-23-2013, 09:55 AM
 jlaurentum Member Join Date: Apr 2013 Location: Venezuela Posts: 41
Re: d-dimensional Perceptrons and break points (related to Q4 of homework)

Ok. This 3-point set, labeled +1 -1 +1, cannot be shattered if the 3 points are collinear, no matter what dimension the perceptron is. Why isn't 3 the break point for a 2-d perceptron (or a 3-d perceptron, for that matter)? What is the reason that we must consider point sets that span the same dimension as the input space?
#5
04-23-2013, 09:57 AM
 Elroch Invited Guest Join Date: Mar 2013 Posts: 143
Re: d-dimensional Perceptrons and break points (related to Q4 of homework)

That's simply a matter of the definition!

The break point is the (minimum) value of k such that no set of k points can be shattered. To put it another way, it is the (minimum) value of k such that every set of k points fails to be shattered. Finding one set of k points that fails to be shattered is consistent with k being a break point, but you need to demonstrate that all other sets of k points have the same property.
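This definition can be checked mechanically on a simple hypothesis set. A sketch using the positive rays model from the lectures (the point placement is my own choice; for positive rays the dichotomy count does not depend on where the distinct points sit, so one point set loses no generality):

```python
def dichotomies_positive_rays(n):
    """Count dichotomies realizable by h(x) = sign(x - a) on n points.

    For positive rays the count is the same for any n distinct points,
    so placing them at 1..n loses no generality.
    """
    points = range(1, n + 1)
    seen = set()
    for i in range(n + 1):
        a = 0.5 + i  # one threshold per gap, plus the two outside regions
        seen.add(tuple(1 if x > a else -1 for x in points))
    return len(seen)

# A break point is the smallest k with fewer than 2**k dichotomies.
break_point = next(k for k in range(1, 10)
                   if dichotomies_positive_rays(k) < 2 ** k)
print(break_point)  # 2, matching the lecture
```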
#6
04-23-2013, 11:05 AM
 jlaurentum Member Join Date: Apr 2013 Location: Venezuela Posts: 41
Re: d-dimensional Perceptrons and break points (related to Q4 of homework)

Now I'm confused. The break point for 2-d perceptrons is 4. In lecture 5, one example of a 4-point set is given that is not shatterable. However, there are other 4-point sets that are shatterable. Likewise for positive rays and positive intervals, where the break points are 2 and 3 respectively.
#7
04-23-2013, 11:53 AM
 yaser Caltech Join Date: Aug 2009 Location: Pasadena, California, USA Posts: 1,478
Re: d-dimensional Perceptrons and break points (related to Q4 of homework)

Quote:
 Originally Posted by jlaurentum The break point for 2-d perceptrons is 4. In lecture 5, one example of a 4-point set is given that is not shatterable. However, there are other 4-point sets that are (shatterable).
Hi,

It is actually not possible to shatter any set of 4 points using the 2-dimensional perceptron. Perhaps we can discuss the set of points you have in mind and look for which dichotomies would be impossible there.
__________________
Where everyone thinks alike, no one thinks very much
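The Professor's point can be verified concretely on the corners of a square (the coordinates and the small weight grid below are my own illustrative choices). For any weights, f(1,1) + f(-1,-1) = f(1,-1) + f(-1,1) = 2b, so the two diagonals can never carry strictly opposite signs: the two "XOR" dichotomies are impossible, and a brute force realizes only the remaining 14, matching the growth function value m(4) = 14 from Lecture 5.

```python
from itertools import product

corners = [(1, 1), (1, -1), (-1, 1), (-1, -1)]

def f(w1, w2, b, x):
    return w1 * x[0] + w2 * x[1] + b

# Diagonal identity: f(1,1) + f(-1,-1) = f(1,-1) + f(-1,1) = 2b, so the
# diagonals cannot have strictly opposite signs for any (w1, w2, b).
w1, w2, b = 1.7, -0.4, 0.9  # arbitrary weights
assert abs((f(w1, w2, b, (1, 1)) + f(w1, w2, b, (-1, -1)))
           - (f(w1, w2, b, (1, -1)) + f(w1, w2, b, (-1, 1)))) < 1e-12

# Brute force over a small weight grid; on these corners it realizes
# every dichotomy that is realizable at all.
achieved = set()
for w1, w2, b in product((-1, 0, 1), (-1, 0, 1), (-1.5, -0.5, 0.5, 1.5)):
    s = [f(w1, w2, b, c) for c in corners]
    if all(v != 0 for v in s):
        achieved.add(tuple(1 if v > 0 else -1 for v in s))

print(len(achieved))               # 14 of the 16 possible dichotomies
print((1, -1, -1, 1) in achieved)  # False: an XOR dichotomy
```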
#8
04-23-2013, 02:13 PM
 kafar Junior Member Join Date: Apr 2013 Posts: 6
Re: d-dimensional Perceptrons and break points (related to Q4 of homework)

Quote:
 Originally Posted by yaser Hi, It is actually not possible to shatter any set of 4 points using the 2-dimensional perceptron. Perhaps we can discuss the set of points you have in mind and look for which dichotomies would be impossible there.
Hi Prof,

there are several homework questions related to finding the break point of a given hypothesis set.

So I'm wondering: is there a procedural method that we can follow to find the break points?

It is fine to use imagination and enumeration in one-, two-, or even three-dimensional space. But as the dimension increases (often along with the break point), it seems harder and harder to go through each possible labeling and determine whether a given set of points can be shattered.

Or did I miss something in the lecture?
#9
04-23-2013, 02:49 PM
 yaser Caltech Join Date: Aug 2009 Location: Pasadena, California, USA Posts: 1,478
Re: d-dimensional Perceptrons and break points (related to Q4 of homework)

Quote:
 Originally Posted by kafar is there a procedural method that we can follow to find the break points?
There is no systematic way that applies to all cases, but it is usually not too difficult to get some upper bound of when the growth function breaks. Fortunately, only an upper bound is needed to carry through the theory, as you will see in Lecture 7.
__________________
Where everyone thinks alike, no one thinks very much
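For reference, the bound the Professor alludes to is the one proved later in the course: if k is a break point, the growth function satisfies m(N) <= sum of C(N, i) for i = 0..k-1, a polynomial of degree k-1 in N. A quick numeric sketch of that bound:

```python
from math import comb

def sauer_bound(n, k):
    """Upper bound on the growth function when k is a break point:
    m(n) <= sum_{i=0}^{k-1} C(n, i), a polynomial of degree k-1 in n."""
    return sum(comb(n, i) for i in range(k))

# 2-d perceptron: break point k = 4, so m(4) <= 15 (true value: 14),
# and growth is at most cubic in N rather than exponential.
print(sauer_bound(4, 4))    # 15
print(sauer_bound(100, 4))  # 166751, versus 2**100 ~ 1.27e30
```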
#10
04-23-2013, 04:03 PM
 jlaurentum Member Join Date: Apr 2013 Location: Venezuela Posts: 41
Re: d-dimensional Perceptrons and break points (related to Q4 of homework)

Thanks for intervening, Professor.

The 4-point set I have in mind is the set of 4 corners of a square. The concept of "shattering" I am working with corresponds to being able to propose a hypothesis that conforms to the classification given for each of the points in the data set. So in the case of a 2-d perceptron, it is easy to "shatter" (if my definition of shattering is correct) the 4 corners of a square when the 2 corners on one side of the square are red and the other 2 corners are blue: just pass a line through the middle of the square such that the two red points are on one side of the line and the other 2 are on the opposite side. Got your colors inverted? No problem, just multiply the w vector by -1.

Now if the red points are on opposite corners (and the blue ones as well), then we couldn't shatter them, because no matter what boundary line we choose, we always get either both colors on one side or two colors on each side. That's how I understood that the break point for the 2-d perceptron is 4: because there exists a 4-point set that is not shatterable.

Obviously there is a mistake in my concepts somewhere, because if you choose a 3-point set in which all points are collinear, and you set the middle point to blue and the outer points to red, this is not shatterable by a 2-d perceptron, a 3-d perceptron, or any higher-dimensional perceptron.

I hope I made clear what my doubts are and where is my confusion.
