#1
Thank you, Prof. Yaser. Your book is really easy to follow. I started it just a week ago and I am trying to finish every exercise in the book.
About Exercise 1.11, I don't know where to check the answers, so I am posting them here. Could you please tell me whether my answers are right or wrong? Is there any place where I can check my exercise answers by myself?

Ex 1.11: A dataset D of 25 training examples, X = R, Y = {-1, +1}, and H = {h1, h2}, where h1 always returns +1 and h2 always returns -1. Learning algorithms: S chooses the hypothesis that agrees the most with D; C deliberately chooses the other hypothesis. P[f(x) = +1] = p.

(a) Can S produce a hypothesis that is guaranteed to perform better than random on any point outside D?
Answer: No.

Assume for the rest that all examples in D have yn = +1.

(b) Is it possible that the hypothesis that C produces turns out to be better than the hypothesis that S produces?
Answer: Yes.

(c) If p = 0.9, what is the probability that S will produce a better hypothesis than C?
Answer: P[P(Sy = f) > P(Cy = f)], where Sy is the output hypothesis of S and Cy is the output hypothesis of C.
+ Since yn = +1, Sy = +1. Moreover, P[f(x) = +1] = 0.9 --> P(Sy = f) = 0.9.
+ We have P(Cy = +1) = 0.5, P(Cy = -1) = 0.5, P[f(x) = +1] = 0.9, P[f(x) = -1] = 0.1 --> P[Cy = f] = 0.5*0.9 + 0.5*0.1 = 0.5.
Since 0.9 > 0.5, P[P(Sy = f) > P(Cy = f)] = 1.

(d) Is there any value of p for which it is more likely than not that C will produce a better hypothesis than S?
Answer: p < 0.5.

I am not sure whether my answers to (a) and (c) are in conflict.

Thank You and Best Regards,
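A minimal Monte Carlo sketch in Python to sanity-check part (c) numerically, assuming noiseless labels (y_n = f(x_n), so each label is +1 with probability p) and using the fact that N = 25 is odd, so S never faces a tie:

```python
import numpy as np

rng = np.random.default_rng(0)      # fixed seed, just for reproducibility
p, N, trials = 0.9, 25, 200_000

# Each row is one dataset D of N labels y_n = f(x_n), with P[y_n = +1] = p.
y = np.where(rng.random((trials, N)) < p, 1, -1)

# S picks the constant hypothesis agreeing with the majority of D (no ties: N is odd);
# C deliberately picks the other one.
s_pick = np.where(y.sum(axis=1) > 0, 1, -1)
c_pick = -s_pick

# Out-of-sample error of the constant hypotheses: E_out(+1) = 1 - p, E_out(-1) = p.
e_out = {+1: 1 - p, -1: p}
s_better = np.mean([e_out[s] < e_out[c] for s, c in zip(s_pick, c_pick)])
print("estimated P[S better than C] =", s_better)
```

With p = 0.9 the estimate should come out at, or extremely close to, 1.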
#2
Your answers to (a) and (c) are both correct. They are not in conflict since (a) is asking a deterministic question while (c) is asking a probabilistic question.
__________________
Where everyone thinks alike, no one thinks very much
#3
Prof. Yaser, thank you very much for your reply. I will keep studying. Thank you!
#4
Given p = 0.9, h1 is a better hypothesis than h2. Hence, the probability that S produces a better hypothesis than C is the probability that S picks h1, since C will pick the other hypothesis, the one that S doesn't pick. In other words, P[S produces a better hypothesis than C] = P[S picks h1 based on the 25 training examples]. S will pick h1 if 13 or more of the 25 training examples give +1, so we have:

P[S picks h1] = P[13 or more out of 25 training examples give +1] = $\sum_{k=13}^{25} \binom{25}{k} (0.9)^{k} (0.1)^{25-k}$ = 0.9999998379165839813935344

This is quite different from tatung2112's explanation for (c). Could you comment further? Thanks!
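For what it's worth, that sum can be reproduced with a few lines of Python (a sketch using only the standard library; `math.comb` needs Python 3.8+):

```python
from math import comb

p, N = 0.9, 25
# P[S picks h1] = P[13 or more of the 25 training examples have y_n = +1]
prob = sum(comb(N, k) * p**k * (1 - p)**(N - k) for k in range(13, N + 1))
print(prob)   # about 0.9999998379
```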
#5
Now, I am even more confused.
#6
I think henry2015's detailed steps are the right way to go, while Yaser's old comments are just highlighting that (a) and (c) do not conflict with each other. Thanks for asking.
__________________
When one teaches, two learn.
#7
Thanks for confirming!
Appreciate it!
#9
Here is my reasoning for the (c) part: the event "S produces a better hypothesis than C" means that S picks $h_1$, which happens exactly when at least 13 of the 25 training examples have $y_n = +1$, so its probability is $\sum_{k=13}^{25} \binom{25}{k} (0.9)^{k} (0.1)^{25-k}$, which is essentially 1 for p = 0.9.
#10
Hi,
according to the first post, I can't understand why the answer to question (d) is p < 0.5. Intuitively, my answer is that there is no value of p that makes C probabilistically better than S, because S tries to minimize the error on the training data, which should reflect the true distribution. In this case, C does better than S only if (the majority of the examples are +1 GIVEN p < 0.5) OR (the majority of the examples are -1 GIVEN p > 0.5). However, both cases are less probable than the ones for which S does better. As a result, there is no value of p that reverses the situation. Am I right?
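One way to check this intuition numerically is to compute, for a given p, the probability that the sample majority goes against p, which by the reasoning in the earlier posts is exactly when C ends up with the better hypothesis. A minimal sketch, assuming as before that the 25 labels are independent and +1 with probability p:

```python
from math import comb

N = 25

def prob_majority_plus(p):
    """P[at least 13 of the 25 labels are +1] when each label is +1 with probability p."""
    return sum(comb(N, k) * p**k * (1 - p)**(N - k) for k in range(13, N + 1))

# C ends up with the better hypothesis only when the sample majority goes against p:
# majority +1 although p < 0.5, or majority -1 although p > 0.5.
for p in (0.1, 0.3, 0.45, 0.55, 0.7, 0.9):
    p_c_better = prob_majority_plus(p) if p < 0.5 else 1 - prob_majority_plus(p)
    print(f"p = {p:<4}  P[C better than S] = {p_c_better:.6f}")
```

For the values of p tried here, the computed probability stays below 0.5.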