#1
Exercise 1.12  Failing to make Ein(g) small enough
Let's say I run my machine learning algorithm for my friend, taking care to ensure that Ein(g) and Eout(g) are close enough, but I find that Ein(g) = 0.5 or something terrible like that. What are my options for continuing to solve the machine learning problem? Is there any way for me to go back and change my hypothesis set without losing the theoretical guarantee that Ein(g) is close to Eout(g)?

#2
Re: Exercise 1.12  Failing to make Ein(g) small enough
Because this uses a "hierarchy" of hypothesis sets (in this case the hierarchy being H1, followed by H2 upon failure of H1, followed by possibly other expansions if H2 failed), there is in general an additional theoretical price to pay, but it is low. Look at structural risk minimization if you are further interested.
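A sketch of why the price is low (my own illustration, not from the post itself): for finite hypothesis sets, a strategy that may end up picking its final hypothesis from either set is covered by the Hoeffding union bound taken over H1 ∪ H2:

```latex
\mathbb{P}\!\left[\,\lvert E_{\text{in}}(g) - E_{\text{out}}(g)\rvert > \epsilon\,\right]
\;\le\; 2\left(\lvert \mathcal{H}_1\rvert + \lvert \mathcal{H}_2\rvert\right) e^{-2\epsilon^2 N}.
```

Solving for epsilon, the generalization gap grows only like sqrt(ln(|H1| + |H2|) / (2N)), so even a much larger fallback set H2 adds only a logarithmic penalty.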
__________________
Where everyone thinks alike, no one thinks very much 
#3
Re: Exercise 1.12  Failing to make Ein(g) small enough
Just to elaborate a little on the last point in Yaser's answer.
Suppose your strategy is to use H2 only if H1 fails. If H1 fails and you use H2, it is natural that you should pay the price for the VC bound implied by H2. The interesting case is if H1 succeeds and you get low Ein. You cannot use the VC bound that applies to H1 alone; it is the option to use H2 in the event that H1 fails that complicates the matter.

This is why, to get a correct theoretical bound, you must always specify your entire strategy first. The simplest strategy is to fix a hypothesis set. If it fails, it fails and you are done. If in the back of your mind you are thinking about the possibility of changing hypothesis sets if it fails, then this has to be taken into account in the theoretical analysis from the very beginning, in particular even if the first hypothesis set succeeds. As mentioned by Yaser, one framework that is useful in analyzing such adaptive strategies is structural risk minimization.
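The effect of having more hypotheses in play can be seen in a small simulation (a sketch with made-up numbers: hypotheses whose true error is 0.5, and set sizes 1 vs. 100 are assumptions, not from the post). Taking the worst deviation between Ein and Eout over a pool of hypotheses, which is what the union bound must cover, a large gap becomes far more likely as the pool grows:

```python
import numpy as np

rng = np.random.default_rng(0)
N, eps, trials = 100, 0.1, 20000

def worst_gap(M):
    # Each of M hypotheses has true error Eout = 0.5; Ein is its error on
    # N random points, so N*Ein ~ Binomial(N, 0.5). For each trial, take
    # the largest deviation |Ein - Eout| over the M hypotheses, mimicking
    # the worst case the union bound has to account for.
    e_in = rng.binomial(N, 0.5, size=(trials, M)) / N
    return np.abs(e_in - 0.5).max(axis=1)

p1 = (worst_gap(1) > eps).mean()    # a single, fixed hypothesis set
p100 = (worst_gap(100) > eps).mean()  # the option of many more hypotheses

print(f"P[gap > {eps}] with   1 hypothesis:  {p1:.3f}")
print(f"P[gap > {eps}] with 100 hypotheses:  {p100:.3f}")
```

With one hypothesis the gap rarely exceeds 0.1, while with 100 hypotheses in play it almost always does, which is why the strategy as a whole, not just the set that happened to succeed, determines the bound.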
__________________
Have faith in probability 