LFD Book Forum Q3, general question

#1
08-13-2012, 08:28 AM
 fgpancorbo Senior Member Join Date: Jul 2012 Posts: 104
Q3, general question

Is this a trick question, or just a direct application of the concepts learned in class and presented in the book? If we want to pick the smallest value that is ≥ d_vc, this is the same thing as saying pick the smallest upper bound on d_vc. The example of a polynomial transformation is discussed in detail in the book, and the generic formula for the upper bound, d_vc ≤ d̃ + 1 (where d̃ is the dimension of the transformed space), is provided. Is there anything in this example that begs for a tighter bound than the one provided by the formula that appears on page 105 of the book?
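For concreteness, here is a quick sanity check (my own illustration, not part of the homework) of that bound for a Q-th order polynomial transform of x = (x1, x2): count the non-constant monomials of degree ≤ Q to get d̃, then the bound is d̃ + 1.

```python
from itertools import combinations_with_replacement

def poly_dim(d, Q):
    """Number of non-constant monomials of degree <= Q in d variables:
    the dimension d~ of the Q-th order polynomial feature space."""
    return sum(1 for deg in range(1, Q + 1)
               for _ in combinations_with_replacement(range(d), deg))

# For x = (x1, x2) and Q = 2: monomials x1, x2, x1^2, x1*x2, x2^2,
# so d~ = 5 and the VC bound is d~ + 1 = 6.
print(poly_dim(2, 2) + 1)  # → 6
```

For d = 2 this agrees with the closed form d̃ = Q(Q + 3)/2 (e.g. Q = 3 gives d̃ = 9, bound 10).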

I have an even more general question about the VC dimension of these nonlinear transformations. I understand that in the transformed space one needs to apply the dichotomy analysis to come up with the VC dimension, and that (since we did it for the general linear case) this is d̃ + 1. Yet there is the caveat that some of the point configurations that would achieve that VC dimension might not be valid images of the transformation. But isn't it the case that the vast majority of point configurations that would achieve that VC dimension will NOT be valid images of the transformation, since we are trying to generate d̃ + 1 independent points out of only d degrees of freedom? Thus, in most cases the VC dimension is likely to be closer to d + 1 than to d̃ + 1, unless one gets really lucky.
#2
08-13-2012, 01:55 PM
 yaser Caltech Join Date: Aug 2009 Location: Pasadena, California, USA Posts: 1,478
Re: Q3, general question

Quote:
 Originally Posted by fgpancorbo If we want to pick the smallest value that is ≥ d_vc, this is the same thing as saying pick the smallest upper bound on d_vc.
Can you explain this?

Quote:
 I have an even more general question about the VC dimension of these nonlinear transformations. I understand that in the transformed space one needs to apply the dichotomy analysis to come up with the VC dimension, and that (since we did it for the general linear case) this is d̃ + 1. Yet there is the caveat that some of the point configurations that would achieve that VC dimension might not be valid images of the transformation. But isn't it the case that the vast majority of point configurations that would achieve that VC dimension will NOT be valid images of the transformation, since we are trying to generate d̃ + 1 independent points out of only d degrees of freedom? Thus, in most cases the VC dimension is likely to be closer to d + 1 than to d̃ + 1, unless one gets really lucky.
You are right about the difficulty in generating independent points in the transformed space. However, for the purpose of the VC dimension, we only need one such set of points, and the "independence" only pertains to our ability to shatter them even if they are not arbitrarily located in the transformed space.
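To illustrate the "we only need one such set" point, here is a small sketch (my own example, not from the course) that checks shattering directly: if the matrix of transformed points has full row rank, every dichotomy can be realized exactly by solving a linear system. The transform Φ(x) = (1, x1, x2, x1·x2) and the four points are hypothetical choices for illustration.

```python
import itertools
import numpy as np

def can_shatter(Z):
    """Check whether the rows of Z (transformed points, bias included)
    can realize every dichotomy with a linear model sign(Z @ w).
    The least-squares test is exact when Z has full row rank."""
    N = Z.shape[0]
    for signs in itertools.product([-1.0, 1.0], repeat=N):
        y = np.array(signs)
        w, *_ = np.linalg.lstsq(Z, y, rcond=None)
        if not np.all(np.sign(Z @ w) == y):
            return False
    return True

# Hypothetical transform Phi(x) = (1, x1, x2, x1*x2), so d~ = 3
def phi(x1, x2):
    return [1.0, x1, x2, x1 * x2]

pts = [(0, 0), (1, 0), (0, 1), (1, 1)]
Z = np.array([phi(*p) for p in pts])
print(can_shatter(Z))  # → True: one shatterable set suffices for d_vc >= 4
```

Adding a fifth point makes shattering impossible, since five points cannot be shattered by a linear model in a 4-dimensional feature space (d_vc ≤ d̃ + 1 = 4).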
__________________
Where everyone thinks alike, no one thinks very much
#3
08-13-2012, 02:23 PM
 fgpancorbo Senior Member Join Date: Jul 2012 Posts: 104
Re: Q3, general question

Thanks for the answer on the general question. The bottom line is that one has to be very careful with these transformations.

Quote:
 If we want to pick the smallest value that is ≥ d_vc, this is the same thing as saying pick the smallest upper bound on d_vc. Can you explain this?
Maybe I am asking something trivial here, but I don't get it; sorry if the question is irrelevant. I also think that I made a mistake with my labels of the VC dimensions. Let's try it again. The question asks "What is the smallest value among the following choices that is ≥ the VC dimension of a linear model in the transformed space?". If a value k satisfies k ≥ d_vc, this is the same as saying k is an upper bound on d_vc, right? As I read it, the question seems to be asking about the upper bound on d_vc, a topic that is covered, for the class of polynomial transformations, in the book. Are we supposed to apply those concepts here, or, on the other hand, is there something particular to this transformation that begs for a tighter upper bound?
#4
08-13-2012, 04:52 PM
 tzs29970 Invited Guest Join Date: Apr 2012 Posts: 52
Re: Q3, general question

I got this one wrong...not because of the mathematical challenge, but because of the typographical challenge!

Anyone else with old eyes lose track of the little semicolons down among the little subscripts, and so miscount the number of features in the transformed space? Doh!


