LFD Book Forum Chapter 1 - Problem 1.3

#1
09-11-2013, 03:14 PM
 meixingdg Junior Member Join Date: Sep 2013 Posts: 4
Chapter 1 - Problem 1.3

I am a bit stuck on part b. I am not sure how to start. Could anyone give a nudge in the right direction?
#2
09-11-2013, 07:22 PM
 magdon RPI Join Date: Aug 2009 Location: Troy, NY, USA. Posts: 597
Re: Chapter 1 - Problem 1.3

Quote:
 Originally Posted by meixingdg I am a bit stuck on part b. I am not sure how to start. Could anyone give a nudge in the right direction?
The first part follows from the weight update rule for PLA. The second part follows from the first part using a standard induction proof.
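For concreteness, here is a minimal sketch of the PLA update rule that the induction argues about (not from the book or this thread; the function name, toy dataset, and variable names are illustrative assumptions):

```python
import numpy as np

# Minimal PLA sketch. Assumes the data is linearly separable.
# The update on a misclassified point, w <- w + y_n * x_n,
# is exactly the step the induction in Problem 1.3 analyzes.
def pla(X, y, max_iters=1000):
    w = np.zeros(X.shape[1])  # start from the zero weight vector
    for _ in range(max_iters):
        preds = np.sign(X @ w)
        mis = np.where(preds != y)[0]  # indices of misclassified points
        if len(mis) == 0:
            return w  # converged: every point classified correctly
        n = mis[0]  # pick any misclassified point
        w = w + y[n] * X[n]  # the PLA weight update
    return w

# Toy separable data (first column is the bias coordinate).
X = np.array([[1.0, 2.0], [1.0, -1.5], [1.0, 3.0], [1.0, -0.5]])
y = np.array([1.0, -1.0, 1.0, -1.0])
w = pla(X, y)
print(np.all(np.sign(X @ w) == y))  # True once PLA has converged
```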
__________________
Have faith in probability
#3
01-14-2015, 01:23 AM
 mxcnrawker Junior Member Join Date: Jan 2015 Posts: 1
Re: Chapter 1 - Problem 1.3

Can you please do the proof for this problem? I can answer the question conceptually, but mathematically I'm having a little trouble starting my argument for both part a and part b.
#4
01-17-2015, 08:20 AM
 htlin NTU Join Date: Aug 2009 Location: Taipei, Taiwan Posts: 610
Re: Chapter 1 - Problem 1.3

Quote:
 Originally Posted by mxcnrawker Can you please do the proof for this problem? I can answer the question conceptually, but mathematically I'm having a little trouble starting my argument for both part a and part b.
Part a and the first half of part b can almost be found on p14 here:

http://www.csie.ntu.edu.tw/~htlin/co...02_handout.pdf
__________________
When one teaches, two learn.
#5
07-20-2015, 03:11 AM
 yongxien Junior Member Join Date: Jun 2015 Posts: 8
Re: Chapter 1 - Problem 1.3

Hi, I can solve the problem, but I cannot understand how it shows that the perceptron algorithm will converge. Can someone explain what the proof shows? What does each step of the problem mean? Thanks
#6
07-22-2015, 07:57 AM
 htlin NTU Join Date: Aug 2009 Location: Taipei, Taiwan Posts: 610
Re: Chapter 1 - Problem 1.3

Quote:
 Originally Posted by yongxien Hi, I can solve the problem, but I cannot understand how it shows that the perceptron algorithm will converge. Can someone explain what the proof shows? What does each step of the problem mean? Thanks
The proof essentially shows that the (normalized) inner product between w(t) and the separating weights w^* grows larger with each iteration. But the normalized inner product is upper bounded by 1 and cannot be arbitrarily large. Hence PLA will converge.
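A sketch of how the pieces fit together (using the problem's quantities, with rho the minimum margin with respect to w^* and R the largest input norm; the exact symbols below are my notation, not quoted from the book):

```latex
% Problem 1.3 argument, sketched (assuming w(0) = 0):
% (a) \rho = \min_n y_n (\mathbf{w}^{*\mathsf{T}} \mathbf{x}_n) > 0
%     since w^* separates the data.
% (b) Each update on a misclassified point adds at least \rho to
%     \mathbf{w}^{\mathsf{T}}(t)\mathbf{w}^*, so by induction it is >= t\rho.
% (c) \|\mathbf{w}(t)\|^2 \le \|\mathbf{w}(t-1)\|^2 + R^2, so
%     \|\mathbf{w}(t)\|^2 \le t R^2 with R = \max_n \|\mathbf{x}_n\|.
% Combining (b) and (c), the normalized inner product grows like \sqrt{t},
% yet it can never exceed 1, which bounds the number of iterations:
\[
  \frac{\mathbf{w}^{\mathsf{T}}(t)\,\mathbf{w}^*}{\|\mathbf{w}(t)\|\,\|\mathbf{w}^*\|}
  \;\ge\; \frac{t\rho}{\sqrt{t}\,R\,\|\mathbf{w}^*\|}
  \;=\; \sqrt{t}\,\frac{\rho}{R\,\|\mathbf{w}^*\|}
  \;\le\; 1
  \quad\Longrightarrow\quad
  t \;\le\; \frac{R^2\,\|\mathbf{w}^*\|^2}{\rho^2}.
\]
```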
__________________
When one teaches, two learn.
