LFD Book Forum

LFD Book Forum (http://book.caltech.edu/bookforum/index.php)
-   Chapter 3 - The Linear Model (http://book.caltech.edu/bookforum/forumdisplay.php?f=110)
-   -   Problem 3.6 (http://book.caltech.edu/bookforum/showthread.php?t=4441)

luwei0917 10-08-2013 07:43 PM

Problem 3.6
 
For part (a), why is it \ge 1 instead of just > 0? Thanks.

magdon 10-10-2013 09:15 AM

Re: Problem 3.6
 
The point is to show that the existence of weights giving y_n(w^T x_n) > 0 for all n implies the existence of weights giving y_n(w^T x_n) \ge 1 for all n.
Quote:

Originally Posted by luwei0917 (Post 11557)
For part (a), why is it \ge 1 instead of just > 0? Thanks.


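To make the rescaling step explicit (a sketch; \rho and w' are just notation introduced here, not from the problem statement): if y_n(w^T x_n) > 0 for all n, set

\rho = \min_n y_n(w^T x_n) > 0, \quad w' = w / \rho.

Then y_n(w'^T x_n) = y_n(w^T x_n) / \rho \ge \rho / \rho = 1 for every n, so w' satisfies the \ge 1 condition.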
squidsforbreakfast 10-09-2014 10:09 AM

Re: Problem 3.6
 
I think I have an answer for parts (b) and (c), but I'm not sure what I'm doing is correct. May I check it with you?
For (b), the quantity we're optimizing over is w, so w would be z. Then we want to multiply the data (A) by w (our z) to ensure the products are less than the signs of the data (y). I'm fairly sure that's right, but it feels a little hand-wavy, so even if it is right, I'm worried I may be missing some of the underlying logic of why it is the right answer.
For (c), we want to minimize the error, so I think c^T z is the sum of the errors, b = y_n(w^T x_n), and Az = 1 - \xi_n (writing \xi for the error terms).
But then we have z related to the sum of the errors in one equation but to just a single error term in the other, so I think what I have can't be right, even if it makes sense to me.
Can you offer a hint on what I'm doing wrong here?
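For readers who want to check a formulation of (b) numerically, here is a minimal sketch in Python, assuming scipy's linprog is available. It takes z = w, the rows of A equal to y_n x_n^T, and b = 1 (the all-ones vector), so the constraint reads Az \ge b; the function name and variable layout are illustrative, not from the book.

Code:

import numpy as np
from scipy.optimize import linprog

def fit_separable(X, y):
    # Sketch for part (b): find w with y_n (w^T x_n) >= 1 for all n.
    # X is N x d with rows x_n (bias coordinate already included);
    # y holds the +/-1 labels.  linprog minimizes c^T z subject to
    # A_ub @ z <= b_ub, so the constraint is negated on both sides.
    N, d = X.shape
    A_ub = -(y[:, None] * X)        # -(y_n x_n^T) w <= -1
    b_ub = -np.ones(N)
    c = np.zeros(d)                 # pure feasibility: nothing to minimize
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * d)
    return res.x if res.success else None

On separable data, any feasible point the solver returns is a set of separating weights; the zero objective reflects that (b) is purely a feasibility problem.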
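For (c), the apparent dimension mismatch resolves once z stacks w together with one \xi_n per data point: c^T z = \sum_n \xi_n is the sum of the errors, while each row of Az \ge b involves only its own single \xi_n. A minimal sketch under the same assumptions (again, names are illustrative):

Code:

import numpy as np
from scipy.optimize import linprog

def fit_with_errors(X, y):
    # Sketch for part (c): minimize sum_n xi_n subject to
    # y_n (w^T x_n) >= 1 - xi_n and xi_n >= 0, with z = (w, xi).
    N, d = X.shape
    c = np.concatenate([np.zeros(d), np.ones(N)])   # c^T z = sum of the xi_n
    # y_n (w^T x_n) + xi_n >= 1  ->  -(y_n x_n^T) w - xi_n <= -1
    A_ub = np.hstack([-(y[:, None] * X), -np.eye(N)])
    b_ub = -np.ones(N)
    bounds = [(None, None)] * d + [(0, None)] * N   # w free, xi_n >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return (res.x[:d], res.x[d:]) if res.success else None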

