LFD Book Forum  

#1   05-14-2012, 03:41 AM
ladybird2012
Member
Join Date: Apr 2012
Posts: 32

Questions on lecture 12

Hi,
I have two questions about Lecture 12.

1) Slide 21: The graph on the RHS shows that when Qf=15 we need no regularizer. However, if I understand it right, this graph is based on the experiment performed on slide 13 of Lecture 11. On that slide we had overfitting when Qf >= 10, since we were trying to fit the target with a tenth-order polynomial. So I would have assumed that for Qf >= 10 we would need a regularizer... What am I missing?

2) Weight decay versus weight elimination for neural networks: I feel like these two regularizers are doing opposite things. Weight decay reduces the weights and favors small weights, while weight elimination favors bigger weights and eliminates small weights. So I guess these two regularizers are used under different conditions in neural networks; could someone give me an example so I can pin it down? Are they ever both used in the same learning problem?

Thanks a lot in advance.
#2   05-14-2012, 03:57 AM
yaser
Caltech
Join Date: Aug 2009
Location: Pasadena, California, USA
Posts: 1,477

Re: Questions on lecture 12

Quote:
Originally Posted by ladybird2012
1) Slide 21: The graph on the RHS shows that when Qf=15 we need no regularizer. However, if I understand it right, this graph is based on the experiment performed on slide 13 of Lecture 11. On that slide we had overfitting when Qf >= 10, since we were trying to fit the target with a tenth-order polynomial. So I would have assumed that for Qf >= 10 we would need a regularizer... What am I missing?
The figure on slide 21 uses different parameters from the overfitting figures: the model being regularized is 15th order, and there is zero stochastic noise in that part.
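To see why that matters, here is a rough numerical sketch, not the lecture's exact experiment: a 15th-order Legendre target with zero stochastic noise, fit by a 15th-order model with weight decay. The target coefficients, the number of data points, and the lambda values below are arbitrary choices for illustration.

[CODE]
# A rough sketch, assuming a simplified version of the lecture experiment:
# degree-15 Legendre target, zero stochastic noise, degree-15 hypothesis fit
# with weight decay (ridge). Target coefficients, N, and lambdas are arbitrary.
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(0)
target_coeffs = rng.standard_normal(16)   # Legendre coefficients of a degree-15 target

def f(x):
    return legendre.legval(x, target_coeffs)

N = 40
x_train = rng.uniform(-1, 1, N)
y_train = f(x_train)                       # no stochastic noise added
x_test = np.linspace(-1, 1, 1001)
y_test = f(x_test)

# Degree-15 Legendre feature matrices
Z_train = legendre.legvander(x_train, 15)
Z_test = legendre.legvander(x_test, 15)

for lam in [0.0, 0.01, 1.0]:
    # Weight-decay solution: minimize ||Zw - y||^2 + lam * ||w||^2
    w = np.linalg.solve(Z_train.T @ Z_train + lam * np.eye(16), Z_train.T @ y_train)
    e_out = np.mean((Z_test @ w - y_test) ** 2)
    print(f"lambda = {lam:5.2f}   E_out = {e_out:.3e}")

# With no noise to overfit (stochastic or deterministic, since the model matches
# the target complexity), lambda = 0 already recovers the target essentially
# exactly; any regularization only adds bias.
[/CODE]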

Quote:
2) Weight decay versus weight elimination for neural networks: I feel like these two regularizers are doing opposite things. Weight decay reduces the weights and favors small weights, while weight elimination favors bigger weights and eliminates small weights.
Weight elimination does not favor bigger weights. It tries to reduce all weights, but it has a bigger incentive to reduce small weights than to reduce big weights.
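A small sketch may help pin this down. The penalty forms below are the standard ones from the lecture (weight decay sums w^2, weight elimination sums w^2/(beta^2 + w^2)); the value beta = 1.0 and the sample weights are arbitrary choices for illustration. The gradient of each penalty is the "pressure" it exerts to shrink a weight.

[CODE]
# A minimal sketch comparing the two penalties, assuming beta = 1.0 and a few
# arbitrary sample weights.
import numpy as np

beta = 1.0

def weight_decay_penalty(w):
    return np.sum(w ** 2)

def weight_elimination_penalty(w):
    return np.sum(w ** 2 / (beta ** 2 + w ** 2))

# The gradient of each penalty with respect to a single weight is the "pressure"
# that regularizer exerts to shrink that weight.
def weight_decay_grad(w):
    return 2.0 * w

def weight_elimination_grad(w):
    return 2.0 * w * beta ** 2 / (beta ** 2 + w ** 2) ** 2

for w in [0.1, 1.0, 10.0]:
    print(f"w = {w:5.1f}   decay pressure = {weight_decay_grad(w):7.3f}   "
          f"elimination pressure = {weight_elimination_grad(w):7.3f}")

# Weight decay pushes harder the larger the weight (its pressure grows with w).
# Weight elimination's pressure peaks around |w| ~ beta and vanishes for large w,
# so weights that are already small get driven toward zero while big weights are
# left nearly alone: it still shrinks everything, it just gives up on big weights.
[/CODE]

Running this, the pressure at w = 10 is 20 for decay but only about 0.002 for elimination, which is exactly the "eliminate small weights, leave big ones alone" behavior you describe, without ever favoring bigger weights.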
__________________
Where everyone thinks alike, no one thinks very much
#3   05-14-2012, 04:18 AM
ladybird2012
Member
Join Date: Apr 2012
Posts: 32

Re: Questions on lecture 12

Thanks for the quick reply


