#1




Chapter 1 - Problem 1.3
I am a bit stuck on part b. I am not sure how to start. Could anyone give a nudge in the right direction?

#2




Re: Chapter 1 - Problem 1.3
The first part follows from the weight update rule for PLA. The second part follows from the first using a standard induction argument.
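In the book's notation, the two steps can be sketched like this (assuming w(0) = 0 and that rho = min_n y_n (w*^T x_n) > 0, as established in part (a)):

```latex
% Update rule applied to the misclassified point (x(t-1), y(t-1)):
%   w(t) = w(t-1) + y(t-1) x(t-1)
% Take the inner product with w^* and use  y_n (w^{*T} x_n) \ge \rho :
w^{T}(t)\,w^{*} = w^{T}(t-1)\,w^{*} + y(t-1)\,x^{T}(t-1)\,w^{*}
                \;\ge\; w^{T}(t-1)\,w^{*} + \rho .
% Induction on t, with w(0) = 0, then gives
w^{T}(t)\,w^{*} \;\ge\; t\rho .
```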
__________________
Have faith in probability 
#3




Re: Chapter 1 - Problem 1.3
Can you please do the proof for this problem? I can answer the question conceptually, but mathematically I'm having a little trouble starting my argument for both part (a) and part (b).

#4




Re: Chapter 1 - Problem 1.3
Quote:
http://www.csie.ntu.edu.tw/~htlin/co...02_handout.pdf
__________________
When one teaches, two learn. 
#5




Re: Chapter 1 - Problem 1.3
Hi, I can solve the problem, but I cannot understand how this shows that the perceptron algorithm will converge. Can someone explain to me what the proof shows? That is, what does each step of the problem mean? Thanks.

#6




Re: Chapter 1 - Problem 1.3
Quote:
__________________
When one teaches, two learn. 
#7




Re: Chapter 1 - Problem 1.3
Despite the slides I still have difficulty reading the equations.
In my PLA program, for a misclassified example from a 2-dimensional space, the weights are updated by the difference between the "target function line" and x2. Example target function line: 2 + 3x. If x1 = 3 and x2 = 9, then y = 9 - (2 + 3*3) = -2, and if misclassified the weights would be updated like: w(t+1) = w(t) + x1 * (-2).

The method above maybe omits the advantages of vector computation (?) as seen in the slides, but I was happy the simulation worked at all. The theoretical approach of this course seems more useful in the long term than simply learning to type methods, but it is new and challenging for me. So my questions are:
- Is ρ a random symbol? I can't find it in the overview.
- Does min_{1 <= n <= N} stand for the sum of the function over the range N?
- Is y_n the positive or negative difference between the target line and coordinate x2 (staying with the 2-dimensional graphical model)?
- I understand a little simple linear algebra from linear regression. Would vector computations make this PLA equation easier to understand?

Thanks in advance!
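Regarding the last question: yes, the vector form makes the update much easier to see. Below is a minimal sketch of PLA with the standard vector update w ← w + y_n x_n. The data set is my own made-up separable example (not code from the course); its separating line x2 = 2 + 3*x1 matches the "2 + 3x" target line in the example above:

```python
import numpy as np

# Minimal PLA sketch with the vector update  w <- w + y_n * x_n .
# Toy separable data set (hypothetical); x includes a bias coordinate,
# and the true separator is sign(2 + 3*x1 - x2).

rng = np.random.default_rng(0)

w_star = np.array([2.0, 3.0, -1.0])            # encodes the line x2 = 2 + 3*x1
pts = rng.uniform(-5.0, 5.0, size=(200, 2))
X = np.column_stack([np.ones(len(pts)), pts])  # prepend the bias coordinate x0 = 1

margins = X @ w_star
keep = np.abs(margins) > 0.5                   # drop points too close to the line
X, y = X[keep], np.sign(margins[keep])

def pla(X, y, max_updates=10_000):
    """Run PLA until every point is classified correctly; return (w, n_updates)."""
    w = np.zeros(X.shape[1])
    for t in range(max_updates):
        mis = np.where(np.sign(X @ w) != y)[0]
        if mis.size == 0:
            return w, t                        # converged after t updates
        i = mis[0]                             # any misclassified point will do
        w = w + y[i] * X[i]                    # the entire update rule
    return w, max_updates

w, n_updates = pla(X, y)
print(n_updates, np.all(np.sign(X @ w) == y))
```

Note how the single line `w = w + y[i] * X[i]` replaces all the coordinate-wise bookkeeping; the proof in Problem 1.3 then bounds `n_updates` by R²·||w*||²/ρ², which is why the loop is guaranteed to terminate on separable data.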
#8




Re: Chapter 1 - Problem 1.3
Hi, I am stuck on part (e). I have two questions:
1. Is R' a typo? If R' is R, then:
2. Referring to (b), I observe that: Referring to (c), I observe that: But I think that happens only when t <= 1? Am I mistaken somewhere?
#9




Re: Chapter 1 - Problem 1.3
Quote:
2. (c) works well for t >= 0. Do not argue from (b) to (c); instead, try going from (c) back to the previously proven inequality, i.e., from (c) to (b).
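For what it's worth, the chain that combines the parts for (e) is presumably this (in the book's notation, with R = max_n ||x_n|| and rho from part (a)):

```latex
t\rho \;\le\; w^{T}(t)\,w^{*}          % part (b)
      \;\le\; \|w(t)\|\,\|w^{*}\|      % Cauchy--Schwarz
      \;\le\; \sqrt{t}\,R\,\|w^{*}\|   % part (d)
\quad\Longrightarrow\quad
t \;\le\; \frac{R^{2}\,\|w^{*}\|^{2}}{\rho^{2}} .
```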
#10




Re: Chapter 1 - Problem 1.3
Aha! I see the point now! Thank you very much!!!
