LFD Book Forum Exercise 1.12 - Failing to make Ein(g) small enough
#1
09-09-2014, 06:02 PM
 PhilW Junior Member Join Date: Sep 2014 Posts: 1
Exercise 1.12 - Failing to make Ein(g) small enough

Let's say I run my machine learning algorithm for my friend, taking care to ensure Ein(g) and Eout(g) are close enough, but I find that my Ein(g) = .5 or something terrible like that. What are my options for continuing to solve the machine learning problem? Is there any way for me to go back and change my hypothesis set without losing the theoretical guarantees that Ein(g) is close to Eout(g)?
#2
09-09-2014, 07:19 PM
 yaser Caltech Join Date: Aug 2009 Location: Pasadena, California, USA Posts: 1,477
Re: Exercise 1.12 - Failing to make Ein(g) small enough

Quote:
 Originally Posted by PhilW Let's say I run my machine learning algorithm for my friend, taking care to ensure Ein(g) and Eout(g) are close enough, but I find that my Ein(g) = .5 or something terrible like that. What are my options for continuing to solve the machine learning problem? Is there any way for me to go back and change my hypothesis set without losing the theoretical guarantees that Ein(g) is close to Eout(g)?
Let us say that H_1 is the hypothesis set that didn't work, and you now want to try another hypothesis set H_2. The theoretical guarantees would still hold, but for the equivalent hypothesis set H_1 ∪ H_2.

Because this uses a "hierarchy" of hypothesis sets (in this case the hierarchy being H_1 followed by H_1 ∪ H_2 upon failure of H_1, followed by possibly other expansions if H_1 ∪ H_2 failed), there is in general an additional theoretical price to pay, but it is low. Look at structural risk minimization if you are further interested.
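To make the "low price" concrete, here is a small sketch (my own illustration, not from the book) using the finite-hypothesis-set Hoeffding bound as a stand-in for the VC bound; the set sizes and sample size are assumed for illustration:

```python
import math

def error_bar(M, N, delta=0.05):
    # Hoeffding bound for a finite hypothesis set of size M:
    # with probability >= 1 - delta, |E_in - E_out| <= sqrt(ln(2M/delta) / (2N))
    return math.sqrt(math.log(2 * M / delta) / (2 * N))

N = 1000                                    # sample size (assumed)
eps_h1 = error_bar(M=100, N=N)              # guarantee if we commit to H_1 alone
eps_union = error_bar(M=100 + 1000, N=N)    # guarantee for H_1 ∪ H_2

print(round(eps_h1, 3), round(eps_union, 3))  # prints 0.064 0.073
```

Even though H_2 here is ten times larger than H_1, the error bar for H_1 ∪ H_2 only grows by about 14%, because the set size enters the bound through a logarithm.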
__________________
Where everyone thinks alike, no one thinks very much
#3
09-11-2014, 04:00 AM
 magdon RPI Join Date: Aug 2009 Location: Troy, NY, USA. Posts: 595
Re: Exercise 1.12 - Failing to make Ein(g) small enough

Just to elaborate a little on the last point in Yaser's answer.

Suppose your strategy is to use H_2 only if H_1 fails.

If H_1 fails and you use H_2, it is natural that you should pay the price for the VC bound implied by H_1 ∪ H_2.

The interesting case is if H_1 succeeds and you get low E_in. You cannot use the VC bound that applies for H_1 alone.

It is the option to use H_2 in the event that H_1 fails that complicates the matter.

This is why, to get a correct theoretical bound, you must always specify your entire strategy first. The simplest strategy is to fix a hypothesis set: if it fails, it fails and you are done. If in the back of your mind you are thinking about the possibility of changing hypothesis sets upon failure, then this has to be taken into account in the theoretical analysis from the very beginning, in particular even if the first hypothesis set succeeds.

As mentioned by Yaser, one framework that is useful in analyzing such adaptive strategies is structural risk minimization.
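A minimal sketch of how SRM-style accounting charges you for the whole strategy up front (my own illustration; splitting the confidence budget as delta_i = delta / 2^i is one common choice, not the book's exact bound):

```python
import math

def srm_error_bar(M_i, N, i, delta=0.05):
    # Error bar for level i of a nested hierarchy H_1 ⊂ H_2 ⊂ ...
    # Giving level i the confidence budget delta_i = delta / 2**i makes the
    # bounds hold simultaneously for every level (the delta_i sum to <= delta),
    # so the guarantee survives no matter which level the strategy ends up using.
    delta_i = delta / (2 ** i)
    return math.sqrt(math.log(2 * M_i / delta_i) / (2 * N))

N = 1000  # sample size (assumed)
# Price you pay at level 1 even if H_1 succeeds, because switching was an option:
eps_level1 = srm_error_bar(M_i=100, N=N, i=1)
# Bound you could have claimed had you fixed H_1 in advance with no fallback:
eps_fixed = math.sqrt(math.log(2 * 100 / 0.05) / (2 * N))
```

In this toy calculation the adaptive strategy inflates the level-1 error bar only slightly (from about 0.064 to about 0.067), which is the sense in which the extra theoretical price is low.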

__________________
Have faith in probability
