LFD Book Forum > Course Discussions > Online LFD course > The Final
#1
09-09-2012, 04:58 PM
victe
Junior Member
Join Date: Aug 2012
Posts: 5
Question 10

From the results of running 1 vs 5, I matched one of the possible answers, but it seems that it is not correct. I cannot say anything more for now... Maybe I made a mistake, but in the other related questions it seems I have the right answers.
#2
09-10-2012, 05:46 AM
TonySuarez
Member
Join Date: Jul 2012
Location: Lisboa, Portugal
Posts: 35
Re: Question 10

My results also ended up pointing to only one of the hypotheses. But there could be some mistake in the code -- I'll only be 100% sure after submission. In fact, the results are not the kind that would let you relax.
#3
09-12-2012, 11:59 AM
MLearning
Senior Member
Join Date: Jul 2012
Posts: 56
Re: Question 10

Same here: the simulation points to one of the answers. For some reason, I am not able to take comfort in that. I also ran my simulation for different values of lambda, and they all seem to point to that same answer.
#4
03-11-2013, 01:32 PM
jain.anand@tcs.com
Member
Join Date: Feb 2013
Location: Cleveland, OH
Posts: 11
Re: Question 10

When I ran the simulation, I got 2 answers that match my result. Now I am really confused about how to proceed. Can somebody help me decide which one I should select? I am getting these answers repeatedly. Is there some rule on how many iterations gradient descent should be run? I see the gradient descent error keeps decreasing even after 2000 iterations, though it makes no difference in the outcome.
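
For reference, here is a minimal sketch of batch gradient descent on the regularized squared error; the feature matrix, labels, lambda, step size, and iteration count below are placeholders, not anyone's actual settings.

Code:
import numpy as np

def ridge_gd(X, y, lam, eta=0.01, iters=2000):
    """Batch gradient descent on E_aug(w) = (1/N)||Xw - y||^2 + (lam/N) w.w."""
    N, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        # Gradient of the augmented error: (2/N) * (X^T (Xw - y) + lam * w)
        grad = (2.0 / N) * (X.T @ (X @ w - y) + lam * w)
        w = w - eta * grad
    return w

# Placeholder data standing in for the 1-vs-5 feature matrix and +/-1 labels.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(100), rng.normal(size=(100, 2))])
y = np.sign(rng.normal(size=100))

w = ridge_gd(X, y, lam=1.0)

Because this error surface is convex with a unique minimum, once the step size is small enough and the error curve has flattened, extra iterations only shave off tiny amounts of squared error; the classification outcome should not change, which matches what you are seeing.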
#5
03-11-2013, 07:02 PM
hemphill
Member
Join Date: Jan 2013
Posts: 18
Re: Question 10

If the error keeps dropping, I would keep going. I didn't use gradient descent, myself. I solved it two ways, getting the same answer with both methods. First, I noted that it could be solved by quadratic programming. Second, I fed it to a (non-gradient) conjugate direction set routine that I've had lying around for years.
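
As an illustration only (not the routine used above), the same objective can be handed to an off-the-shelf derivative-free conjugate-direction minimizer such as SciPy's Powell method; the data and lambda below are placeholders.

Code:
import numpy as np
from scipy.optimize import minimize

def e_aug(w, X, y, lam):
    """Regularized squared error: (1/N)||Xw - y||^2 + (lam/N)||w||^2."""
    N = len(y)
    r = X @ w - y
    return (r @ r + lam * (w @ w)) / N

# Placeholder data standing in for the 1-vs-5 training set.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(100), rng.normal(size=(100, 2))])
y = np.sign(rng.normal(size=100))

# Powell's method is a derivative-free conjugate-direction-set minimizer.
res = minimize(e_aug, np.zeros(X.shape[1]), args=(X, y, 1.0), method='Powell')
w = res.x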
#6
03-11-2013, 10:50 PM
tsweetser
Junior Member
Join Date: Mar 2013
Posts: 2
Re: Question 10

Is it also possible to use the regularized normal equation? I'm looking at Lecture 12, Slide 11.

It seems funny to me to choose the parameters to minimize one error measure (mean squared error), yet evaluate E_in and E_out using another (binary classification).
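
For anyone comparing approaches, a minimal sketch of that regularized normal equation (the closed-form weight-decay solution) together with the binary classification error used for E_in is shown below; the feature matrix Z, labels, and lambda are placeholders.

Code:
import numpy as np

def ridge_weights(Z, y, lam):
    """Regularized normal equation: w_reg = (Z^T Z + lam*I)^{-1} Z^T y."""
    d = Z.shape[1]
    return np.linalg.solve(Z.T @ Z + lam * np.eye(d), Z.T @ y)

def binary_error(Z, y, w):
    """Fraction of points where sign(Z w) disagrees with the +/-1 labels."""
    return np.mean(np.sign(Z @ w) != y)

# Placeholder data standing in for the transformed 1-vs-5 data set.
rng = np.random.default_rng(1)
Z = np.column_stack([np.ones(200), rng.normal(size=(200, 2))])
y = np.sign(rng.normal(size=200))

w = ridge_weights(Z, y, lam=1.0)
print("E_in (binary):", binary_error(Z, y, w))

The weights come from the squared-error surrogate, but E_in and E_out are then read off with the 0/1 error, which is exactly the mismatch the question points out.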