
#1




Problem 3.6

#2




Re: Problem 3.6
The point is to show that the existence of weights giving y_n(w^T x_n) > 0 for all n implies the existence of weights giving y_n(w^T x_n) >= 1 for all n.
__________________
Have faith in probability 
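A quick numeric sketch of the scaling idea behind this hint (the data here is made up for illustration and is not the actual write-up): if every margin y_n(w^T x_n) is strictly positive, dividing w by the smallest margin yields weights whose margins are all at least 1.

```python
# Sketch: rescaling a separating w so all margins reach 1.
# X, y, w are invented example values, not from the problem.
import numpy as np

X = np.array([[1.0, 2.0], [2.0, -1.0], [-1.0, -3.0]])
y = np.array([1.0, 1.0, -1.0])
w = np.array([0.5, 0.5])          # separates: all margins > 0

margins = y * (X @ w)             # y_n * (w^T x_n)
assert np.all(margins > 0)

rho = margins.min()               # smallest positive margin
w_scaled = w / rho                # rescaled weights
assert np.all(y * (X @ w_scaled) >= 1 - 1e-12)
```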
#3




Re: Problem 3.6
I think I have an answer for parts (b) and (c), but I'm not really sure what I'm doing is correct. Can I check it with you?
For (b), the thing we're trying to optimize is w, so w would be z. Then we want to multiply the data (A) by w (i.e., z) to ensure the products are less than the signs of the data (y). I'm fairly sure that's right, but it feels a little hand-wavy, so even if it is right, I'm worried I may be missing some of the underlying logic of why it is the right answer.
For (c), we want to minimize error, so I think c^T z is the sum of the errors, b = y_n(w^T x_n), and Az = 1e (I couldn't figure out how to write \xi_n in here). But then z is related to the sum of the errors in one equation, yet related to just a single error instance in the other, so I think what I have can't be right, even if it makes sense to me. Can you offer a hint on what I'm doing wrong here?

