#1
Would anyone please give me a clue to part (c)? It seems rather counterintuitive to me.
Thanks a lot!
#2
I think this is a possible explanation for Exercise 4.10(c):
When K = 1, the validation error is not 'that' good an estimate of the out-of-sample error, because the penalty term in the validation bound is large for small K. The model chosen from this poor estimate may therefore not be the 'best' one, which explains E[E_out(g^-_{m*})] < E[E_out(g_{m*})]. The situation improves somewhat as K increases. Please let me know if this explanation is not correct.
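A hedged sketch of the point above: for a fixed hypothesis, the validation error is the mean of K independent pointwise errors, so its spread around the true out-of-sample error shrinks roughly like 1/sqrt(K). The target function, noise level, and hypothesis below are my own illustrative choices, not from the exercise.

```python
import numpy as np

rng = np.random.default_rng(0)

def target(x):
    return np.sin(np.pi * x)

# A fixed (arbitrary) hypothesis g: a line fit once on a separate training set.
x_tr = rng.uniform(-1, 1, 20)
y_tr = target(x_tr) + 0.3 * rng.standard_normal(20)
g = np.polyfit(x_tr, y_tr, 1)

def val_error(K):
    """Squared validation error of g on a fresh validation set of size K."""
    x = rng.uniform(-1, 1, K)
    y = target(x) + 0.3 * rng.standard_normal(K)
    return np.mean((np.polyval(g, x) - y) ** 2)

# Spread of the validation estimate over many random validation sets:
# the standard deviation should shrink as K grows.
for K in (1, 5, 25, 100):
    estimates = [val_error(K) for _ in range(2000)]
    print(f"K={K:3d}  std of E_val = {np.std(estimates):.3f}")
```

So with K = 1 the validation error is an unbiased but very noisy estimate, which is exactly why model selection based on it can go wrong.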
#3
Also, I wanted to validate my explanation of the other parts of this exercise.
For part (b), this is what I think: as K increases, the validation error becomes a better estimate of the out-of-sample error, which explains the initial decrease in E[E_out(g_{m*})]. Then, as K increases beyond the 'optimal' value, training on the remaining N - K points degrades, which explains the rise. Please let me know whether my understanding is correct or not.
For part (a), I can't figure out the initial decrease in E[E_out(g^-_{m*})]. Any clue on this would be great.
Thanks,
Sayan
#4
Well, I'm not sure about my understanding, but here is my guess (if it is not correct, please tell me, especially for (c)):
(a) Because [equations given as images; not preserved]
(b) The reason for the initial decrease is already discussed above. A note here is that initially [images not preserved]
(c) A possible case is that when [images not preserved]
Thank you.