LFD Book Forum (http://book.caltech.edu/bookforum/index.php)
-   Chapter 3 - The Linear Model (http://book.caltech.edu/bookforum/forumdisplay.php?f=110)
-   -   Recency weighted regression (http://book.caltech.edu/bookforum/showthread.php?t=1090)

 itooam 08-22-2012 01:44 AM

Recency weighted regression

Hi,

I wondered if anyone could help with the following:

(I'll make up a fictional example to explain in simple terms what I am trying to do):
If, for example, you created an extremely simple model to predict whether a share price will rise or fall (for now we'll treat it as a linear classification model), and the only inputs you had were:
X0 = 1
X1 = yesterday's share price
X2 = the share price the day before that in X1
X3 = the share price the day before that in X2
X4 = the share price the day before that in X3

it would seem sensible to apply more of a weighting to the more recent share prices so you may decide to do a transform before applying the learning i.e.,
you may create a new matrix Z = [X0 X1*0.9 X2*0.8 X3*0.7 X4*0.6]
and do the learning from Z.

Hope this makes sense so far?

My questions:

1) is this a sensible thing to do?

2) can the recency weights i.e., 0.9, 0.8, 0.7 and 0.6 be learned?

Though this is a simple example, you may have more data each day to which you want to apply the same recency weighting, e.g. you may have data for (i) the minimum and (ii) the maximum price the share traded at each day. In that case you may have a new model something like:

X0 = 1
X1 = yesterday's share price
X1_1 = the minimum price the share traded at yesterday
X1_2 = the maximum price the share traded at yesterday

X2 = the share price the day before that in X1
X2_1 = the minimum price the share traded the day before that in X1
X2_2 = the maximum price the share traded the day before that in X1

X3 = the share price the day before that in X2
X3_1 = the minimum price the share traded the day before that in X2
X3_2 = the maximum price the share traded the day before that in X2

X4 = the share price the day before that in X3
X4_1 = the minimum price the share traded the day before that in X3
X4_2 = the maximum price the share traded the day before that in X3

applying a new transform would be like this:
Z = [X0 X1*0.9 X1_1*0.9 X1_2*0.9 X2*0.8 X2_1*0.8 X2_2*0.8 X3*0.7 X3_1*0.7 X3_2*0.7 X4*0.6 X4_1*0.6 X4_2*0.6]

Hope this is still making sense?

Extra questions:
3) is this still (if it was before) a sensible thing to do?
4) can the recency weights i.e., 0.9, 0.8, 0.7 and 0.6 be learned?

Any pointers, discussion, answers much appreciated.

 magdon 08-23-2012 05:13 AM

Re: Recency weighted regression

Unfortunately, if you are using a linear model, performing this recency weighting as you suggest will have no effect, because you are going to rescale the input variables by fixed factors and this rescaling will get absorbed into the learned weights.

Suppose when you learn without rescaling you find weight $w_i$; now, when you rescale $x_i \to \alpha_i x_i$, your learned weight will just rescale in the inverse way, $w_i \to w_i/\alpha_i$; your in-sample error will be the same, as will your out-of-sample error.

You may have misunderstood the purpose of recency weighted regression; it is to differentially weight the error on different data points. In your case of stock prediction, it makes sense to weight the prediction error on the recent days more than the prediction error on earlier days, hence the term recency weighted regression. Thus, if you let the input on day $t$ be $x_t$, the thing you are trying to predict on day $t$ is $y_t$, and the weights you learn are $w$, then the recency weighted error measure that one might wish to minimize is

$E(w) = \sum_t \alpha_t (w^T x_t - y_t)^2$

The $\alpha_t$ are the recency weights; to emphasize the recent data points more, you would choose $\alpha_t$ to be increasing with $t$.
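In code, minimizing this error is a small weighted least squares problem. A minimal numpy sketch, with synthetic data and an assumed exponential schedule for $\alpha_t$ (neither comes from the thread):

```python
# Minimal weighted least squares sketch: minimize E(w) = sum_t alpha_t*(w.x_t - y_t)^2.
# Data is synthetic and alpha_t = 0.99**(T-1-t) is an assumed exponential schedule,
# increasing with t so that recent rows count more.
import numpy as np

rng = np.random.default_rng(0)
T = 200
X = np.column_stack([np.ones(T), rng.normal(size=(T, 4))])  # x_t = [1, four inputs]
true_w = np.array([0.5, 0.8, -0.3, 0.1, 0.05])
y = X @ true_w + 0.1 * rng.normal(size=T)

alpha = 0.99 ** np.arange(T - 1, -1, -1)  # alpha_t increasing with t

# Setting the gradient of E(w) to zero gives the weighted normal equations:
# X^T A X w = X^T A y, with A = diag(alpha).
A = np.diag(alpha)
w = np.linalg.solve(X.T @ A @ X, X.T @ A @ y)
print(w)
```

Any strictly increasing $\alpha_t$ will do; the exponential decay rate here is just one convenient choice.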


 itooam 08-24-2012 08:30 AM

Re: Recency weighted regression

Thanks for your response Dr Magdon; it is really appreciated. I hope you don't mind me asking more questions (I suppose you won't answer if you don't want to lol).

Quote:
 Originally Posted by magdon (Post 4318) Unfortunately, if you are using a linear model, performing this recency weighting as you suggest will have no effect because you are going to rescale the input variables by weights and so this rescaling will get absorbed into the weights. Suppose when you learn without rescaling you find weight $w_i$; now, when you rescale $x_i \to \alpha_i x_i$, your learned weight will just rescale in the inverse way $w_i \to w_i/\alpha_i$; your in-sample error will be the same, as will your out-of-sample error.
Thanks for your explanation, you confirmed my main fear. This is great because I can now avoid this route.

Quote:
 Originally Posted by magdon (Post 4318) You may have misunderstood the purpose of recency weighted regression; it is to differentially weight the error on different data points. In your case of stock prediction, it makes sense to weight the prediction error on the recent days more than the prediction error on earlier days, hence the term recency weighted regression.
I haven't read your book, just doing the online course; I see this thread has been moved from the "general homework" forum to "Chapter 3" of the book forum. If "recency weightings" are explained in your book (please could you confirm?) then I will scour the earth for your book, as this area is of much interest. Previously I looked for your book on Amazon.co.uk but couldn't find it; maybe I can order internationally through .com or some other shop.

Quote:
 Originally Posted by magdon (Post 4318) Thus, if you let the input on day $t$ be $x_t$, the thing you are trying to predict on day $t$ is $y_t$, and the weights you learn are $w$, then the recency weighted error measure that one might wish to minimize is $E(w) = \sum_t \alpha_t (w^T x_t - y_t)^2$. The $\alpha_t$ are the weights; to emphasize the recent data points more, you would choose $\alpha_t$ to be increasing with $t$.
Though this looks a good solution to my problem I am not sure it would work with what I am trying to do... to add to my example:

If I had a number of different company shares in my database and for each company I had 1000 days of their share price data, I would therefore be able to create approximately 996 training rows per company, each training row containing the previous 4 days' prices.

To make simple, also assume I have managed to normalize each company's share prices so that they can be trained together (don't ask me how, this is just a made up example lol :D )

So because of this, I think I still need something along the lines of:
[X0 X1 X1_1 X1_2 X2 X2_1 X2_2 X3 X3_1 X3_2 X4 X4_1 X4_2] per training row, and a value of y which we will compare against.

Going back to what you wrote, the recency weightings that I made up are useless here, as they would be absorbed. However, the learning algorithm would still pick up the important variables and give them the higher weights, so I would hope it would implicitly work out that the more "recent" variables get larger weights, i.e., W1... > W2... > W3... > W4, relatively speaking. Though I am sure when I test such a case it probably won't be as clean cut, due to the problems associated with the VC dimension and degrees of freedom.

Using the recency weights on the error as you suggested is a more failsafe way however I think I would then lose the structure I was hoping to use? Please can you confirm this in light of the additional model information I have presented? If so, maybe the following would work instead?

I just do a linear regression on the entire (1000 x NoOfCompanies) rows so that each day is treated independently; once I have found my optimum weights $w$ I use them to calculate $w^T x$ for each row (I'm not sure about the squared bit?). These new values will then be grouped into a single row based on the "day" structure, i.e.,
X0 = 1
X1 = $w^T x$ computed from yesterday's variables
X2 = $w^T x$ computed from the variables of the day before that
X3 = $w^T x$ computed from the variables of two days before
X4 = $w^T x$ computed from the variables of three days before

a second bout of linear regression could then be used to work out the optimum "recency weights" for this new set (996 x NoOfCompanies rows).

This second idea, or yours (if still applicable wrt desired model structure?), would certainly help in terms of reducing degrees of freedom and so would definitely be preferable imo.

 magdon 08-24-2012 11:55 PM

Re: Recency weighted regression

Quote:
 Originally Posted by itooam (Post 4259) I haven't read your book just doing the online course, I see this thread has been moved from the "general homework" forum to "Chapter 3" of the book forum. If "recency weightings" are explained in your book (please could you confirm?) then I will scour the earth for your book as this area is of much interest. Previously I looked for your book on Amazon.co.uk but couldn't find, maybe I can order internationally through .com or some other shop.
The book does not specifically cover weighted regression; but it does cover linear models in depth. And yes, you can find the book on amazon.com; unfortunately it is not available on amazon.co.uk.

With respect to your question though, you seem to be confusing two notions of recency:

Let's take a simple example of one stock, which generalizes to the multiple stocks example. Suppose the stock's price time series is

$p_1, p_2, p_3, \ldots$

At time $t$ (for $t > 4$) you construct the input

$x_t = [1, p_{t-1}, p_{t-2}, p_{t-3}, p_{t-4}]$

and the target $y_t = p_t$. You would like to understand the relationship between $x_t$ and $y_t$. If you know this relationship, you can predict the future price from previous prices. So suppose you build a linear predictor

$\hat{y}_t = w^T x_t$.

The learning task is to determine $w$. To do this you minimize

$E(w) = \sum_t (w^T x_t - y_t)^2$

You will probably find that the weights in $w$ are not uniform. For example the weight multiplying $p_{t-1}$ might be the largest; this means that the most recent price is the most useful in predicting the next price $p_t$.

The notion of recency above should not be confused with recency weighted regression, which is catering to the fact that the weights may be changing with time (that is, in the stock example, the time series is non-stationary). To accommodate this fact you re-weight the data points, giving more weight to the more recent data points. Thus you minimize the error function

$E(w) = \sum_t \alpha_t (w^T x_t - y_t)^2$

The $\alpha_t$ enforce that the more recent data points will have more contribution to $E(w)$, and so you will choose a $w$ that better predicts on the more recent data points; in this way older data points play some role, but more recent data points play the dominant role in determining how to predict tomorrow's price.

Thus in the example of time series prediction, there are these two notions of recency at play:

(i) more recent prices are more useful for predicting tomorrow's price;

(ii) the relationship between this more recent price and tomorrow's price is changing with time (for example, sometimes it is trend-following, and sometimes mean-reverting). In this case, more recent data should be used to determine the relationship between today's price and tomorrow's price.

 itooam 08-25-2012 03:24 AM

Re: Recency weighted regression

Thanks again for the thorough response Dr Magdon. I think we are talking along the same lines; just a bit is lost in translation - one of the disadvantages of written communication. I apologise for my wording though; I don't mean to confuse. I used the words "recency weighted regression" without knowing that this generally means something else in the machine learning literature.

I also think I now understand more clearly the application of $E(w) = \sum_t \alpha_t (w^T x_t - y_t)^2$, so thanks again for explaining. I think I need to read up on this more, as it makes me question: "how do I measure how well this recency weighting would have performed in the past?". I assume to answer this you would need to loop through the above formula starting from an arbitrary start date, i.e., starting with a dataset sized by the rule of thumb of 10 x DegreesOfFreedom; e.g., in the context of the simplest model ($x_t = [1, p_{t-1}, p_{t-2}, p_{t-3}, p_{t-4}]$, so 5 weights) we would start with a dataset of the first 50 days... pseudocode:

for i = 50 to 996 step 1
....... D = wholeDataSet[items 1 to i]
....... do the regression on D and find $w$ by minimising $E(w) = \sum_{t=1}^{i} \alpha_t (w^T x_t - y_t)^2$
....... error = error + $(w^T x_{i+1} - y_{i+1})^2$
endfor
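The pseudocode above might look like this in Python; the data, the decay rate, and the helper name `fit_weighted` are all made up for illustration:

```python
# Walk-forward check of how a recency weighted predictor would have performed:
# at each step i, fit w on rows 0..i with recency weights, then score the
# prediction on row i+1. Data is synthetic; decay=0.98 is an arbitrary choice.
import numpy as np

rng = np.random.default_rng(1)
N, d = 300, 5
X = np.column_stack([np.ones(N), rng.normal(size=(N, d - 1))])
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=N)

def fit_weighted(Xi, yi, decay=0.98):
    # alpha_t = decay**(n-1-t): increasing with t, most recent row gets weight 1
    alpha = decay ** np.arange(len(yi) - 1, -1, -1)
    # weighted normal equations: Xi^T A Xi w = Xi^T A yi, A = diag(alpha)
    return np.linalg.solve(Xi.T @ (alpha[:, None] * Xi), Xi.T @ (alpha * yi))

error = 0.0
start = 10 * d  # rule-of-thumb starting size: 10 x degrees of freedom
for i in range(start, N - 1):
    w = fit_weighted(X[: i + 1], y[: i + 1])
    error += (w @ X[i + 1] - y[i + 1]) ** 2
print(error / (N - 1 - start))  # average out-of-sample squared error
```

Refitting from scratch at every step is wasteful for long series, but it keeps the sketch faithful to the loop above.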

 itooam 08-25-2012 03:35 AM

Re: Recency weighted regression

If the above is correct, there are many ways in which I can trial my underlying project.
Both
1) the plain error $E(w) = \sum_t (w^T x_t - y_t)^2$

and
2) the recency weighted error $E(w) = \sum_t \alpha_t (w^T x_t - y_t)^2$

are worth a try... and also a variation of 2) without x in the form $x_t = [1, p_{t-1}, p_{t-2}, p_{t-3}, p_{t-4}]$, just input variables of x that are only applicable at that time t.

 magdon 08-25-2012 05:38 AM

Re: Recency weighted regression

Yes, that would be a way to run the process and estimate how good the predictor is.

Quote:
 Originally Posted by itooam (Post 4416) Thanks again for the thorough response Dr Magdon. [...] I assume to answer this you would need to loop through the above formula starting from an arbitrary start date [...]

 itooam 08-25-2012 08:26 AM

Re: Recency weighted regression

Thank you for all your help; it has been really appreciated. I have one final question: do you know if there is a closed form solution to

$E(w) = \sum_t \alpha_t (w^T x_t - y_t)^2$

(assuming $\alpha$ is a vector with the same number of rows as x)?

i.e., like the closed form solution used for linear regression with regularization - copied from the lecture notes it is:

$w_{reg} = (Z^T Z + \lambda I)^{-1} Z^T y$

I am not sure where $\alpha$ would end up in the above; the derivation is beyond me mathematically?

 itooam 08-26-2012 01:37 AM

Re: Recency weighted regression

Having spent some time on this (this is an area of maths where I am very weak),

I think the solution is:

$w = (Z^T A Z + \lambda I)^{-1} Z^T A y$

where $A$ is a diagonal matrix, a bit like the identity matrix but with the weight values on the diagonal, i.e.,
| $\alpha_1$, 0, 0, ... 0 |
| 0, $\alpha_2$, 0, ... 0 |
| ........................ |
| 0, 0, 0, ... $\alpha_N$ |

The bit that makes this tricky (for me) is the regularisation. I suppose I could test the above using this formula and then try the same using gradient descent (where I know it will be correct); if the values are close then the above can be considered correct (if I plug in largely varying values of lambda for testing).
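That sanity check (closed form vs gradient descent) can be sketched as below; the data, the $\alpha$ schedule, $\lambda$, and the step size are all made-up test values, not anything from the thread:

```python
# Numerical check of the proposed closed form
#   w = (Z^T A Z + lambda*I)^{-1} Z^T A y
# against plain gradient descent on sum_t alpha_t*(w.z_t - y_t)^2 + lambda*|w|^2.
import numpy as np

rng = np.random.default_rng(2)
N, d = 100, 4
Z = rng.normal(size=(N, d))
y = rng.normal(size=N)
alpha = np.linspace(0.1, 1.0, N)  # increasing recency weights (illustrative)
lam = 3.0                         # arbitrary regularisation strength

A = np.diag(alpha)
w_closed = np.linalg.solve(Z.T @ A @ Z + lam * np.eye(d), Z.T @ A @ y)

# Gradient of the regularised weighted error: 2 Z^T A (Zw - y) + 2 lambda w
w_gd = np.zeros(d)
lr = 1e-3
for _ in range(20000):
    grad = 2 * Z.T @ (alpha * (Z @ w_gd - y)) + 2 * lam * w_gd
    w_gd -= lr * grad
print(np.max(np.abs(w_closed - w_gd)))  # should be close to zero
```

Since the objective is a strictly convex quadratic, gradient descent with a small enough step converges to the unique minimizer, so the two answers should agree to high precision.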

 itooam 08-26-2012 01:41 AM

Re: Recency weighted regression

If the above is correct, it seems then there is another problem... if the dataset size is big, i.e., 10000 rows, then the matrix $A$ will contain $10000^2$ = 100,000,000 values. Ugh! How to deal with this?

 itooam 08-26-2012 02:00 AM

Re: Recency weighted regression

Suppose I could just keep the weights as a vector, do a transpose, and then do a cross product (not sure how to present that in algebraic form, but I think that is the solution)!?

 itooam 08-26-2012 02:09 AM

Re: Recency weighted regression

* I meant "inner" product above NOT cross product.

 itooam 08-27-2012 03:41 AM

Re: Recency weighted regression

Scrap what I wrote above about large datasets causing havoc for the weight matrix. I found Octave already knows about such problems and has support for sparse matrices... very useful :)
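Sparse storage works, but for a diagonal $A$ you can also avoid building the $N \times N$ matrix entirely: multiplying by $A$ is just elementwise row scaling. A numpy sketch with synthetic data:

```python
# For a diagonal A, Z^T A Z and Z^T A y can be formed by scaling the rows of Z
# elementwise, so the N x N matrix never needs to exist (dense or sparse).
import numpy as np

rng = np.random.default_rng(3)
N, d = 10_000, 5
Z = rng.normal(size=(N, d))
y = rng.normal(size=N)
alpha = rng.uniform(0.1, 1.0, size=N)  # made-up recency weights

ZtAZ = Z.T @ (alpha[:, None] * Z)  # equals Z.T @ np.diag(alpha) @ Z
ZtAy = Z.T @ (alpha * y)           # equals Z.T @ np.diag(alpha) @ y
w = np.linalg.solve(ZtAZ, ZtAy)
print(w)
```

This needs only O(N·d) memory, versus O(N²) for the explicit dense diagonal matrix.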

 magdon 08-27-2012 12:08 PM

Re: Recency weighted regression

Yes, there is a closed form solution, which is obtained by taking the $\alpha_t$ into the square:

$E(w) = \sum_t \left(\sqrt{\alpha_t}\, w^T x_t - \sqrt{\alpha_t}\, y_t\right)^2$

This is exactly an unweighted linear regression problem where you have rescaled each data point by $\sqrt{\alpha_t}$, i.e. $(x_t, y_t) \to (\sqrt{\alpha_t}\, x_t, \sqrt{\alpha_t}\, y_t)$. So, after you rescale your data in this way, you can just run your old regression algorithm without the weightings.
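This reduction is easy to check numerically: rescale each row by $\sqrt{\alpha_t}$, run plain least squares, and compare with the weighted normal equations. A sketch with synthetic data and an assumed exponential $\alpha_t$:

```python
# Check that rescaling (x_t, y_t) -> (sqrt(alpha_t) x_t, sqrt(alpha_t) y_t) and
# running plain least squares matches the weighted normal equations solution.
import numpy as np

rng = np.random.default_rng(4)
N, d = 150, 4
X = rng.normal(size=(N, d))
y = rng.normal(size=N)
alpha = 0.95 ** np.arange(N - 1, -1, -1)  # made-up recency weights

s = np.sqrt(alpha)
w_plain = np.linalg.lstsq(s[:, None] * X, s * y, rcond=None)[0]  # rescaled, unweighted
w_weighted = np.linalg.solve(X.T @ (alpha[:, None] * X), X.T @ (alpha * y))
print(np.max(np.abs(w_plain - w_weighted)))  # agrees to numerical precision
```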

Quote:
 Originally Posted by itooam (Post 4423) Thank you for all your help it has been really appreciated. I have one final question, do you know if there is a closed form solution to $E(w) = \sum_t \alpha_t (w^T x_t - y_t)^2$? [...]

 itooam 08-28-2012 03:15 AM

Re: Recency weighted regression

Thanks Magdon, I always manage to make things so much more complicated than they need to be. That equation you posted would have saved me hours - and it is so simple - why didn't I think of it? Instead I went the long way round; not a total loss though, as it has been a great learning curve for me :)

I tried your approach and compared it to my workings (in one of my previous posts):

$w = (Z^T A Z + \lambda I)^{-1} Z^T A y$

and for all my tests I am getting the same $w$. So this is great news, as it confirms my formula was correct too :D.

Many thanks, I can't say enough how much your help is appreciated.
