LFD Book Forum  

#11 | 08-26-2012, 02:00 AM
itooam (Senior Member; Join Date: Jul 2012; Posts: 100)
Re: Recency weighted regression
Suppose I could just transform the weight matrix A into a vector, take its transpose, and then do a cross product (not sure how to present that in algebraic form, but I think that is the solution)!?
#12 | 08-26-2012, 02:09 AM
itooam (Senior Member; Join Date: Jul 2012; Posts: 100)
Re: Recency weighted regression
* I meant "inner" product above, NOT cross product.
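
If I follow my own idea correctly, a minimal Octave sketch (variable names are illustrative, and Z, y, w, alpha are assumed to be already defined): with the weights stored as a vector, the weighted in-sample error reduces to an inner product between the weight vector and the squared residuals.

Code:
% Z: N-by-d data matrix, y: N-by-1 targets, w: d-by-1 hypothesis
% alpha: N-by-1 vector of recency weights (the diagonal of A)
r    = Z * w - y;        % residuals
E_in = alpha' * r.^2;    % inner product: sum_t alpha_t*(w.x_t - y_t)^2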
#13 | 08-27-2012, 03:41 AM
itooam (Senior Member; Join Date: Jul 2012; Posts: 100)
Re: Recency weighted regression
Scrap what I wrote above about large datasets causing havoc for the weight matrix. I found that Octave already anticipates such problems and has support for sparse matrices... very useful.
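
For anyone hitting the same issue, a minimal sketch (the sizes and decay factor are made up for illustration) of storing the diagonal weight matrix sparsely in Octave:

Code:
N = 100000;                    % number of data points
alpha = 0.99 .^ (N-1:-1:0)';   % recency weights; most recent point gets weight 1

% A = diag(alpha);             % dense N-by-N: mostly zeros, huge for large N
A = spdiags(alpha, 0, N, N);   % sparse diagonal: stores only the N nonzeros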
#14 | 08-27-2012, 12:08 PM
magdon (RPI; Join Date: Aug 2009; Location: Troy, NY, USA; Posts: 595)
Re: Recency weighted regression
Yes, there is a closed-form solution, which is obtained by taking the \alpha_t inside the square:

E_{in}=\sum_{t}\alpha_t(\mathbf{w}\cdot\mathbf{x}_t-y_t)^2=\sum_{t}(\mathbf{w}\cdot\mathbf{x}_t\sqrt{\alpha_t}-y_t\sqrt{\alpha_t})^2

This is exactly an unscaled linear regression problem where you have rescaled each data point (\mathbf{x}_t,y_t) by \sqrt{\alpha_t}. So, after you rescale your data in this way, you can just run your old regression algorithm without the weightings.
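
A minimal Octave sketch of this rescaling (names are illustrative; Z, y, and the weight vector alpha are assumed already defined): scale row t of Z and entry t of y by \sqrt{\alpha_t}, then solve the ordinary least-squares problem as before.

Code:
s  = sqrt(alpha);   % alpha: N-by-1 weight vector
Zs = Z .* s;        % rescales row t of Z by sqrt(alpha_t) (broadcasting)
ys = y .* s;
w  = Zs \ ys;       % ordinary unweighted least squares on the rescaled data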

Quote:
Originally Posted by itooam
Thank you for all your help; it has been really appreciated. I have one final question: do you know if there is a closed-form solution to

E_{in}=\sum_{t}\alpha_t(\mathbf{w}\cdot\mathbf{x}_t-y_t)^2

(assuming \alpha is a vector with the same number of rows as x)?

i.e., like the closed-form solution used for linear regression with regularization, copied from the lecture notes:

W_{reg} = (Z^{T} Z+\lambda I)^{-1}Z^Ty

I am not sure where \alpha would end up in the above; the derivation is beyond me mathematically.
__________________
Have faith in probability
#15 | 08-28-2012, 03:15 AM
itooam (Senior Member; Join Date: Jul 2012; Posts: 100)
Re: Recency weighted regression
Thanks Magdon, I always manage to make things so much more complicated than they need to be. That equation you posted would have saved me hours; it is so simple, why didn't I think of it? Instead I went the long way round. Not a total loss, though, as it has been a great learning experience for me.

I tried your approach and compared it to my workings (in one of my previous posts):
W_{reg} = (Z^{T} A Z+\lambda I)^{-1}Z^TAy

and for all my tests I am getting the same W_{reg}. So this is great news, as it confirms my formula was correct too.
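
For anyone following along, a minimal Octave check (a sketch with made-up test data, not from my actual experiments) that the weighted closed form above and the rescaling trick agree:

Code:
N = 50; d = 3; lambda = 0.1;
Z = randn(N, d);  y = randn(N, 1);
alpha = 0.95 .^ (N-1:-1:0)';
A = diag(alpha);

% weighted closed form: W_reg = (Z'AZ + lambda*I)^-1 Z'Ay
w1 = (Z' * A * Z + lambda * eye(d)) \ (Z' * A * y);

% rescaling trick: ordinary regularized regression on sqrt(alpha)-scaled data
s  = sqrt(alpha);
Zs = Z .* s;  ys = y .* s;
w2 = (Zs' * Zs + lambda * eye(d)) \ (Zs' * ys);

max(abs(w1 - w2))   % should be ~1e-15 (numerical noise)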

Many thanks, I can't say enough how much your help is appreciated.