LFD Book Forum Exercise 3.4

#11
10-07-2013, 09:27 AM
 magdon RPI Join Date: Aug 2009 Location: Troy, NY, USA. Posts: 595
Re: Exercise 3.4

\hat{y} - y is not H\epsilon, but that is close. Recall that \hat{y} = Hy and y = Xw^* + \epsilon.

Quote:
 Originally Posted by aaoam I'm having a bit of difficulty with 3.4(b). I take \hat{y} - y and multiply by (X^T X)^{-1} X^T X, which ends up reducing the expression to just H\epsilon. However, then I can't use 3.3(c) in simplifying 3.4(c), which makes me think I did something wrong. Can somebody give me a pointer? Also, it'd be great if there were instructions somewhere about how to post in math mode. Perhaps I just missed them?
__________________
Have faith in probability
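[Editor's note: the identity discussed above can be checked numerically. This is a sketch, not code from the book; the random design matrix, w^*, and noise level are my own assumptions.]

```python
import numpy as np

# Part (b): y_hat - y equals (H - I) eps, not H eps.
rng = np.random.default_rng(0)
N, d = 50, 3
sigma = 0.5

X = np.hstack([np.ones((N, 1)), rng.standard_normal((N, d))])  # N x (d+1), bias column first
w_star = rng.standard_normal(d + 1)
eps = sigma * rng.standard_normal(N)
y = X @ w_star + eps                    # target with added noise

H = X @ np.linalg.inv(X.T @ X) @ X.T    # hat matrix
y_hat = H @ y                           # = X w_lin

lhs = y_hat - y
rhs = (H - np.eye(N)) @ eps             # the (H - I) eps form
print(np.allclose(lhs, rhs))  # True
```

The check works because (H - I)Xw^* = 0 (HX = X), so only the noise survives in \hat{y} - y.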
#12
10-07-2013, 07:48 PM
 Sweater Monkey Junior Member Join Date: Sep 2013 Posts: 6
Re: Exercise 3.4

Quote:
 Originally Posted by magdon Yes, that is right. You have to be more careful but use similar reasoning with
Ahhhh, yes, I see now why the \epsilon^T H \epsilon term doesn't have a factor of N! The trace of this matrix is just d+1.

Thanks Professor
#13
10-07-2013, 10:54 PM
 smiling_assassin Junior Member Join Date: Oct 2013 Posts: 1
Re: Exercise 3.4

Quote:
 Originally Posted by Sweater Monkey Ahhhh, yes, I see now why the \epsilon^T H \epsilon term doesn't have a factor of N! The trace of this matrix is just d+1. Thanks Professor

But isn't H an N x N matrix? So the trace would be N instead of d+1? I know H is X(X^T X)^{-1} X^T. What am I missing?
#14
10-08-2013, 07:39 AM
 magdon RPI Join Date: Aug 2009 Location: Troy, NY, USA. Posts: 595
Re: Exercise 3.4

You are right, H is an N x N matrix. But its trace is not N. You may consider looking through Exercise 3.3; in particular, part (d) should be helpful.

Quote:
 Originally Posted by smiling_assassin But isn't H an N x N matrix? So the trace would be N instead of d+1? I know H is X(X^T X)^{-1} X^T. What am I missing?
__________________
Have faith in probability
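[Editor's note: the point above is easy to see numerically. A small sketch (my own, not from the book; the random design is an assumption): H is N x N, yet trace(H) = trace((X^T X)^{-1} X^T X) = trace(I_{d+1}) = d+1, using trace(AB) = trace(BA).]

```python
import numpy as np

# H is N x N, but its trace is d+1, not N.
rng = np.random.default_rng(1)
N, d = 100, 4
X = np.hstack([np.ones((N, 1)), rng.standard_normal((N, d))])  # N x (d+1)
H = X @ np.linalg.inv(X.T @ X) @ X.T

print(H.shape)                     # (100, 100) -- an N x N matrix
print(int(round(np.trace(H))))     # 5, i.e. d+1, not N
```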
#15
10-09-2013, 02:05 PM
 meixingdg Junior Member Join Date: Sep 2013 Posts: 4
Re: Exercise 3.4

For part (c), would the result for (\hat{y} - y) from part (b) give E_in(w_lin) in terms of \epsilon, since (\hat{y} - y) is the in-sample error?
#16
10-10-2013, 09:11 AM
 magdon RPI Join Date: Aug 2009 Location: Troy, NY, USA. Posts: 595
Re: Exercise 3.4

y and \hat{y} are vectors. The norm-squared of (\hat{y} - y), divided by N, is the in-sample error: E_in(w_lin) = (1/N) ||\hat{y} - y||^2.

Quote:
 Originally Posted by meixingdg For part (c), would the result for (\hat{y} - y) from part (b) give E_in(w_lin) in terms of \epsilon, since (\hat{y} - y) is the in-sample error?
__________________
Have faith in probability
#17
11-10-2013, 04:27 PM
 jamesclyeh Junior Member Join Date: Nov 2013 Posts: 1
Re: Exercise 3.4

Hi,

For part (a), in one of the last steps I did:

Rearrange:
Since ,

Are these steps correct?
I found subbing back in a bit recursive because I previously solved for and plugged that in to get .

Also, for (b):
Is the answer \hat{y} - y = (H - I)\epsilon? <--- I'll delete this once it's confirmed.

Thanks,
James
#18
11-17-2013, 03:22 AM
 yaser Caltech Join Date: Aug 2009 Location: Pasadena, California, USA Posts: 1,477
Re: Exercise 3.4

Hi James,

I am slow in responding this term as I am attending to the edX forum, but here are my quick comments:

For part (a), why is there no noise term in your steps (what happened to the added noise \epsilon)?

For part (b), your formula is correct.
__________________
Where everyone thinks alike, no one thinks very much
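[Editor's note: part (a) can also be verified numerically — the noise does not disappear; it enters \hat{y} through H\epsilon. A sketch under the same assumptions as above (random design, my own choice of w^* and noise level):]

```python
import numpy as np

# Part (a): y_hat = X w* + H eps.
rng = np.random.default_rng(3)
N, d = 40, 2
X = np.hstack([np.ones((N, 1)), rng.standard_normal((N, d))])
w_star = rng.standard_normal(d + 1)
eps = 0.3 * rng.standard_normal(N)
y = X @ w_star + eps

H = X @ np.linalg.inv(X.T @ X) @ X.T
w_lin = np.linalg.inv(X.T @ X) @ X.T @ y  # least-squares solution
y_hat = X @ w_lin

# y_hat = H y = H(X w* + eps) = X w* + H eps, since H X = X
print(np.allclose(y_hat, X @ w_star + H @ eps))  # True
```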
