#1
Suppose a dataset contains data from a process that changes over time.
First, we train, test, validate, and regularize on the in-sample data to create our model. Then we observe how well the model does for the 'customer' via the out-of-sample error. As time progresses, the 'customer' observes that the predictions are wrong (the out-of-sample error increases). We determine this is due to changes in the process that generates the data. If we then retrain, test, validate, and regularize our model with the new data, do we try to select the dataset to correspond with the change in the underlying process? Or do we go back in time as far as possible while adding all the new data (which incorporates data points generated after the change)? Thanks
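For concreteness, here is a minimal sketch of the two retraining options being asked about, assuming a linear process whose coefficients shift partway through the history; the data, the change point, and the window size are all illustrative assumptions, not from the thread:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 5))

# Simulate a process whose coefficients change partway through the history.
w_old, w_new = rng.normal(size=5), rng.normal(size=5)
y = np.concatenate([X[:600] @ w_old, X[600:] @ w_new])
y += rng.normal(scale=0.1, size=n)

window = 200  # assumed: roughly the data generated after the change

# Option 1: retrain only on the recent window that reflects the new process.
model_window = Ridge(alpha=1.0).fit(X[-window:], y[-window:])

# Option 2: go back as far as possible -- retrain on all history plus the new data.
model_all = Ridge(alpha=1.0).fit(X, y)
```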
#2
If the data changes systematically over time (e.g., in the long run the stock market rises), the time stamp should be part of the data to learn from.
If the world has changed in an unexpected way since the original data was collected (e.g., spammers have devised new ways to get around spam filters), new data is necessary, and the fresher the data, the better the prediction.
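A minimal sketch of both cases, assuming scikit-learn's Ridge regressor; the synthetic trend and the recency half-life are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n = 1000
t = np.arange(n, dtype=float)  # time stamps, oldest first
X = rng.normal(size=(n, 4))

# Simulate systematic drift: the target trends upward with time.
y = X @ rng.normal(size=4) + 0.01 * t + rng.normal(scale=0.1, size=n)

# Case 1: systematic change -- make the time stamp part of the data to learn from.
X_with_time = np.column_stack([X, t])
model_trend = Ridge(alpha=1.0).fit(X_with_time, y)

# Case 2: unexpected change -- favor fresh data by down-weighting old samples
# (exponential decay with an assumed half-life of 100 observations).
half_life = 100.0
weights = 0.5 ** ((t[-1] - t) / half_life)
model_fresh = Ridge(alpha=1.0).fit(X, y, sample_weight=weights)
```

Making time a feature lets the model learn a systematic trend, while recency weighting simply lets the freshest data dominate when the shift is unpredictable.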
Tags: dataset, machine learning