LFD Book Forum Exercise 1.11

#1
02-07-2014, 07:07 AM
 tatung2112 Junior Member Join Date: Feb 2014 Posts: 4
Exercise 1.11

Thank you, Prof. Yaser. Your book is really easy to follow. I started it just a week ago, and I am trying to finish every exercise in the book.

About Exercise 1.11: I don't know where to check the answer, so I am posting it here. Could you please tell me whether my answers are right or wrong? Is there anywhere I can check my answers to the exercises by myself?

Ex 1.11:
Dataset D of 25 training examples.
X = R, Y = {-1, +1}
H = {h1, h2} where h1(x) = +1 and h2(x) = -1 for all x
Learning algorithms:
S - choose the hypothesis that agrees the most with D
C - deliberately choose the other hypothesis
P[f(x) = +1] = p
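For concreteness, here is how I would sketch the setup in Python (my own illustrative names, not from the book; since both hypotheses are constant, only the labels matter, so I do not sample the x's):

```python
import random

def sample_labels(n=25, p=0.9, rng=random):
    # Each training label yn = f(xn) is +1 with probability p, else -1
    return [1 if rng.random() < p else -1 for _ in range(n)]

def S(labels):
    # S: pick the constant hypothesis that agrees with the most labels in D
    # (n = 25 is odd, so there is never a tie)
    return 1 if sum(labels) >= 0 else -1

def C(labels):
    # C: deliberately pick the other hypothesis
    return -S(labels)
```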

(a) Can S produce a hypothesis that is guaranteed to perform better than random on any point outside D?

In the case where all examples in D have yn = +1:
(b) Is it possible that the hypothesis that C produces turns out to be better than the hypothesis that S produces?

(c) If p = 0.9, what is the probability that S will produce a better hypothesis than C?
Answer: We want P[P(Sy = f) > P(Cy = f)], where Sy is the hypothesis output by S and Cy is the hypothesis output by C.
+ Since every yn = +1, S outputs Sy = +1. With P[f(x) = +1] = 0.9, we get P(Sy = f) = 0.9.
+ Modeling C's choice as random: P(Cy = +1) = 0.5 and P(Cy = -1) = 0.5, while P[f(x) = +1] = 0.9 and P[f(x) = -1] = 0.1,
--> P(Cy = f) = 0.5*0.9 + 0.5*0.1 = 0.5
Since 0.9 > 0.5, P[P(Sy = f) > P(Cy = f)] = 1.
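As a sanity check on my arithmetic in (c), I ran a short Monte Carlo estimate (my own sketch, not from the book) comparing the agreement probability of the fixed +1 hypothesis with that of the 50/50 coin-flip model of C I used above:

```python
import random

def agreement_prob(choose, p=0.9, trials=100_000, seed=0):
    # Estimate P(chosen label = f(x)) on a fresh point, where f(x) = +1 w.p. p
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        f = 1 if rng.random() < p else -1
        hits += (choose(rng) == f)
    return hits / trials

fixed_plus = lambda rng: 1                               # S's output when all yn = +1
coin_flip = lambda rng: 1 if rng.random() < 0.5 else -1  # the 50/50 model of C used above

# agreement_prob(fixed_plus) comes out near 0.9, agreement_prob(coin_flip) near 0.5
```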

(d) Is there any value of p for which it is more likely than not that C will produce a better hypothesis than S?

I am not sure whether my answers to (a) and (c) conflict with each other.

Thank you and best regards,