#1




calculation 4.8 on p.139
Hello,
I have a question regarding the calculation (4.8) on p.139 in the book. The final hypothesis g^- depends, albeit indirectly, on the choice of the validation set Dval. Indeed, by construction we train on the complement of Dval in D, which makes g^- dependent on the choice of Dval. The derivation (4.8) appears to rely on the assumption that g^- is independent of Dval. Does anyone have any comments on this?
#2




Re: calculation 4.8 on p.139
I will try to reformulate the construction of Dval in such a way that the independence is patent, and then try to suggest where your confusion may be coming from.

Construction 1 of Dval: Randomly generate D. Randomly partition it into Dtrain (N-K points) and Dval (K points). Learn on Dtrain to obtain g^-, and compute Eval of g^- using Dval. (This is the standard validation setting in the book.)

Construction 2 of Dval: Randomly generate N-K points to form Dtrain. Learn on Dtrain to obtain g^-. Now, randomly generate another K points to form Dval. Compute Eval of g^- using Dval.

It is patently clear in Construction 2 that we are computing Eout of g^-, essentially by definition of Eout. There is no difference between Constructions 1 and 2 in terms of the Dtrain and Dval they produce (statistically): randomly generating N points and splitting them randomly into N-K and K points is statistically equivalent to first randomly generating N-K points and then another random K points. In Construction 1 you generate both Dtrain and Dval at the beginning, process Dtrain, and then test on Dval. In Construction 2, you only generate Dval after you have processed Dtrain. But Dval still has the same statistical properties in both cases.

Now for where you may be getting subtly confused. It is true that the value of Eval will change based on which specific partition was selected, in part because g^- changes and in part because Dval also changes. This means that Eval depends on the partition. Equivalently (in the Construction 2 setting), g^- depends on the particular Dtrain generated (no surprise there). However, g^- does not depend on the contents of Dval: if you change the data points in Dval, g^- will not change. The expectation in (4.8) is an expectation over the data points in Dval, and g^- does not depend on those (the partition is fixed, and now we are looking at which points are in Dval).
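A quick numerical sketch of the equivalence of the two constructions. The toy distribution is a hypothetical stand-in, not anything from the book:

```python
import random

random.seed(0)

def gen_point():
    # Hypothetical stand-in for one i.i.d. draw from the unknown
    # joint distribution P(x, y).
    x = random.gauss(0.0, 1.0)
    y = x + random.gauss(0.0, 0.1)
    return (x, y)

N, K = 20, 5

def construction_1():
    # Generate D of N points, then randomly partition it into
    # Dtrain (N-K points) and Dval (K points).
    D = [gen_point() for _ in range(N)]
    random.shuffle(D)
    return D[:N - K], D[N - K:]

def construction_2():
    # Generate Dtrain (N-K points) first; only afterwards generate
    # an independent Dval (K points).
    Dtrain = [gen_point() for _ in range(N - K)]
    Dval = [gen_point() for _ in range(K)]
    return Dtrain, Dval

# Either way, Dtrain is N-K i.i.d. draws and Dval is K i.i.d. draws,
# independent of Dtrain -- the same joint distribution of (Dtrain, Dval).
t1, v1 = construction_1()
t2, v2 = construction_2()
print(len(t1), len(v1))  # 15 5
print(len(t2), len(v2))  # 15 5
```

Both functions return a pair with the same sizes and the same sampling distribution; only the order in which the draws happen differs.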
__________________
Have faith in probability 
#3




Re: calculation 4.8 on p.139
Thank you very much for the lucid explanation!

#4




Re: calculation 4.8 on p.139
On second thought, I am still not seeing the statistical equivalence of the two constructions. The first construction will always produce two disjoint sets Dtrain and Dval of sizes N-K and K, respectively. The second construction may easily produce two non-disjoint sets. Is this a problem?
#5




Re: calculation 4.8 on p.139
Not a problem. The two constructions generate identically distributed Dtrain and Dval.
The partitioning approach in Construction 1 generates data points with disjoint indices. That does not mean that a data point appearing in Dtrain cannot also appear in Dval. (Remember that when you generate D, the data points are i.i.d., so there can be repetitions.)
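A tiny illustration of the repetition point, using an assumed toy discrete input space so that repeats are guaranteed:

```python
import random

random.seed(0)

# Draw D i.i.d. from a small discrete space: 10 draws from only
# 5 possible values must contain repetitions (pigeonhole).
N, K = 10, 4
D = [random.randint(0, 4) for _ in range(N)]
assert len(set(D)) < len(D)  # repeated values are certain here

# Partition into Dtrain and Dval: the *indices* are disjoint, but the
# same value can land on both sides of the split.
random.shuffle(D)
Dtrain, Dval = D[:N - K], D[N - K:]
print(sorted(set(Dtrain) & set(Dval)))  # values present in both parts
```

So disjointness of indices in Construction 1 does not make the two parts disjoint as multisets of data points.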
__________________
Have faith in probability 
#6




Re: calculation 4.8 on p.139
Thanks for the explanation! I did not realize that we are doing sampling with replacement to construct the training set and the validation set. Now it is clear why the two constructions you gave are equivalent!
The first step of the calculation (4.8) on p.139 is still not sinking in somehow. It feels like in the first step we pulled a variable (namely, $\mathcal{D}_{val}$) out from under the integral sign and placed it as an indexing set for the summation. What is the correct mathematical interpretation of this step? Thank you for your clear explanations and patience!
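For reference, one way to read that first step, assuming the book's definition of Eval as an average of pointwise errors over the K validation points (a sketch, with Dval taken to be, say, the last K generated points):

```latex
% Assuming E_{val}(g^-) = \frac{1}{K}\sum_{\mathbf{x}_n \in \mathcal{D}_{val}} e(g^-(\mathbf{x}_n), y_n):
\mathbb{E}_{\mathcal{D}_{val}}\big[E_{val}(g^-)\big]
  = \mathbb{E}_{\mathcal{D}_{val}}\!\Big[\frac{1}{K}\sum_{\mathbf{x}_n \in \mathcal{D}_{val}} e\big(g^-(\mathbf{x}_n), y_n\big)\Big]
  = \frac{1}{K}\sum_{\mathbf{x}_n \in \mathcal{D}_{val}} \mathbb{E}_{\mathbf{x}_n}\big[e\big(g^-(\mathbf{x}_n), y_n\big)\big]
```

Nothing is pulled out from under the integral: the sum over Dval just enumerates the K points, linearity of expectation passes the expectation inside the sum, and because the points are independent, each term reduces to an expectation over that single point.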

