Thread: Recency weighted regression
#2 magdon | RPI | Join Date: Aug 2009 | Location: Troy, NY, USA | Posts: 595
Re: Recency weighted regression

Unfortunately, if you are using a linear model, performing this recency weighting as you suggest will have no effect, because you are rescaling the input variables by fixed constants, and this rescaling simply gets absorbed into the learned weights.

Suppose when you learn without rescaling you find weight $w_i$; now, when you rescale the input $x_i \to \alpha_i x_i$, your learned weight will just rescale in the inverse way, $w_i \to w_i/\alpha_i$; your in-sample error will be the same, as will your out-of-sample error.
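To see this absorption numerically, here is a small sketch (my own illustration, not from the post) using synthetic data: rescaling the columns of the input matrix by the proposed recency weights produces least-squares weights that rescale inversely, leaving every prediction unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))   # 50 examples, 4 input variables
y = rng.normal(size=50)        # targets (arbitrary for the demo)

# Ordinary least squares on the raw inputs
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Rescale each input column by a "recency weight" alpha_i
alpha = np.array([0.9, 0.8, 0.7, 0.6])
Z = X * alpha                  # broadcasts alpha across columns
w_z, *_ = np.linalg.lstsq(Z, y, rcond=None)

# The learned weights rescale inversely, so predictions are identical
print(np.allclose(w_z, w / alpha))   # weights absorb the rescaling
print(np.allclose(Z @ w_z, X @ w))   # same in-sample predictions
```

Both checks print True: the model class is unchanged, so the fit is unchanged.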

You may have misunderstood the purpose of recency weighted regression; it is to differentially weight the error on different data points. In your case of stock prediction, it makes sense to weight the prediction error on the recent days more than the prediction error on earlier days, hence the term recency weighted regression. Thus, if you let the input on day $t$ be $\mathbf{x}_t$, the thing you are trying to predict on day $t$ be $y_t$, and the weights you learn be $\mathbf{w}$, then the recency weighted error measure that one might wish to minimize is

$E(\mathbf{w}) = \sum_{t=1}^{T} \gamma_t\,(\mathbf{w}^\top\mathbf{x}_t - y_t)^2,$

where the $\gamma_t$ are the recency weights; to emphasize the recent data points more, you would choose $\gamma_t$ to be increasing with $t$.
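The minimization above can be sketched in a few lines. Minimizing the weighted squared error is equivalent to ordinary least squares after scaling each example's row and target by $\sqrt{\gamma_t}$. The exponential decay factor below is my own choice for illustration, not something from the post.

```python
import numpy as np

rng = np.random.default_rng(1)
T, d = 100, 3
X = rng.normal(size=(T, d))                      # one row per day t
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=T)        # noisy targets

# Recency weights gamma_t, increasing with t (most recent day gets 1.0).
# lam is an assumed decay factor; anything in (0, 1) gives the same shape.
lam = 0.98
gamma = lam ** np.arange(T - 1, -1, -1)

# Weighted least squares via row scaling by sqrt(gamma_t):
# minimizing sum_t gamma_t * (w.x_t - y_t)^2 is OLS on the scaled rows.
s = np.sqrt(gamma)
w, *_ = np.linalg.lstsq(X * s[:, None], y * s, rcond=None)
print(w)   # close to true_w, with recent days weighted most heavily
```

Question 2) in the quote below (can the $\gamma_t$ themselves be learned?) is more delicate: if you choose them by minimizing the same in-sample error, the optimizer will push all the weight onto the easiest points, so they are usually fixed in advance or selected by validation.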

Quote:
Originally Posted by itooam

Hi, I wondered if anyone could help with the following (I'll make up a fictional example to explain in simple terms what I am trying to do). If, for example, you created an extremely simple model to predict whether a share price will rise or fall (for now we'll consider a linear classification model), and the only inputs you had were:

X0 = 1
X1 = yesterday's share price
X2 = the share price the day before that in X1
X3 = the share price the day before that in X2
X4 = the share price the day before that in X3

it would seem sensible to apply more of a weighting to the more recent share prices, so you may decide to do a transform before applying the learning, i.e., you may create a new matrix Z = [X0 X1*0.9 X2*0.8 X3*0.7 X4*0.6] and do the learning from Z. Hope this makes sense so far?

My questions:
1) Is this a sensible thing to do?
2) Can the recency weights, i.e., 0.9, 0.8, 0.7 and 0.6, be learned?

More advanced: though this is a simple example, you may have more data each day to which you want to apply the same recency weighting, i.e., you may have data for, say, (i) the minimum and (ii) the maximum price the share was on each day. In which case you may have a new model something like:

X0 = 1
X1 = yesterday's share price
X1_1 = the minimum price the share traded at yesterday
X1_2 = the maximum price the share traded at yesterday
X2 = the share price the day before that in X1
X2_1 = the minimum price the share traded the day before that in X1
X2_2 = the maximum price the share traded the day before that in X1
X3 = the share price the day before that in X2
X3_1 = the minimum price the share traded the day before that in X2
X3_2 = the maximum price the share traded the day before that in X2
X4 = the share price the day before that in X3
X4_1 = the minimum price the share traded the day before that in X3
X4_2 = the maximum price the share traded the day before that in X3

Applying a new transform would be like this:

Z = [X0 X1*0.9 X1_1*0.9 X1_2*0.9 X2*0.8 X2_1*0.8 X2_2*0.8 X3*0.7 X3_1*0.7 X3_2*0.7 X4*0.6 X4_1*0.6 X4_2*0.6]

Hope this is still making sense?

Extra questions:
3) Is this still (if it was before) a sensible thing to do?
4) Can the recency weights, i.e., 0.9, 0.8, 0.7 and 0.6, be learned?

Any pointers, discussion, answers much appreciated.
__________________
Have faith in probability