#1
I was recently thinking about the Facebook friend-suggestion algorithm, though I think the problem could also apply to Netflix. The assumption is that data points are independent, and so contribute equally to the solution. In the FB case, if I am friends with more than one person in a family, it has a strong tendency to suggest other friends of the family, stronger than it should. (Though FB doesn't necessarily know that they are related.) In the Netflix case, if someone likes Spiderman 1, Spiderman 2, and Spiderman 3, those really aren't three independent samples. On the other hand, Spiderman 1 and Batman 1 should be considered more independent. It seems to me that there should be enough in the data to extract some of this dependence.
#2
We should distinguish between similar inputs and non-independent inputs. If I am trying to learn a target function and two inputs drawn independently from the input distribution happen to be similar, they are still independent samples: similar does not mean non-independent.

However, in the Netflix example, there are subtle problems that you may be alluding to. Think about how a user chooses movies to rent. They have their tastes, so they have a tendency to select movies of a certain type. This is how the training data is generated. Now Netflix would like to learn to predict movie ratings for the viewer. However, if Netflix selects a movie at random and predicts its rating for the viewer, then the test point is not from the same distribution as the training data. If, on the other hand, the viewer selected a movie and asked for a rating, then this test point is from the same distribution as the training data. So one must be careful.
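The training/test mismatch described above can be seen in a toy simulation. All the numbers below are hypothetical, chosen only to illustrate the point: a viewer who mostly picks movies from a favorite genre generates training data whose average rating overestimates how they would rate a movie Netflix picks at random.

```python
import random

random.seed(0)

# Hypothetical ground truth: this viewer rates action movies 4.5 and
# everything else 2.0, on average.
def true_rating(genre):
    return 4.5 if genre == "action" else 2.0

genres = ["action", "drama", "comedy", "horror"]

# Training data: ratings come from the viewer's own choices, which
# land in the favorite genre 85% of the time.
train = [true_rating("action" if random.random() < 0.85
                     else random.choice(genres))
         for _ in range(10_000)]

# Test data: Netflix picks a movie uniformly at random from all genres.
test = [true_rating(random.choice(genres)) for _ in range(10_000)]

print(sum(train) / len(train))  # roughly 4.2: biased toward the favorite genre
print(sum(test) / len(test))    # roughly 2.6: the true average over all genres
```

Any predictor trained on the first sample will look much better on viewer-chosen movies than on randomly chosen ones, which is exactly the distribution mismatch being described.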
__________________
Have faith in probability
#3
|
|||
|
|||
![]() Quote:
Quote:
Now, one way to account for this is to realize that two people are related, or that one movie is a sequel of another, but this dependence should also show up in the data itself. Say, for example, that the data show that everyone who watched Spiderman 2 had also watched Spiderman 1 and, for the sake of this discussion, vice versa. It should then be completely obvious that there is no additional information in the fact that someone watched both: the combination should have weight 1.0 instead of 2.0 in any calculation. If not everyone watched both, but many did, then the weight should be between 1.0 and 2.0.
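One simple way to turn this co-occurrence into a weight is a sketch I'm adding here, not something from the thread: use the Jaccard overlap of the two movies' viewer sets, so that identical audiences give weight 1.0 and disjoint audiences give weight 2.0.

```python
def combined_weight(viewers_a, viewers_b):
    """Weight for counting two movies' evidence together.

    viewers_a, viewers_b: sets of user ids who watched each movie.
    Identical audiences -> 1.0 (the second movie adds no information),
    disjoint audiences  -> 2.0 (fully independent evidence).
    """
    a, b = set(viewers_a), set(viewers_b)
    if not a and not b:
        return 2.0  # no co-occurrence data; treat as independent
    jaccard = len(a & b) / len(a | b)
    return 2.0 - jaccard

# Everyone who watched Spiderman 2 also watched Spiderman 1, and vice versa:
print(combined_weight({1, 2, 3}, {1, 2, 3}))      # 1.0
# Partial overlap lands between 1.0 and 2.0:
print(combined_weight({1, 2, 3, 4}, {3, 4, 5, 6}))
```

The choice of Jaccard similarity is arbitrary; any overlap measure that is 1 for identical audiences and 0 for disjoint ones would give the same endpoints.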
#4
Yes, if you select Spiderman 2 because you first selected Spiderman 1, then this is indeed non-independent sampling, which is even worse than just having a mismatch between the training and test probability distributions. In such cases, there may even be "effectively fewer" data points whenever non-independent sampling takes place.
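The "effectively fewer data points" intuition can be made concrete with the standard design-effect formula for equicorrelated samples (an assumption I'm bringing in, not something stated in the thread): n samples with pairwise correlation rho carry as much information about the mean as roughly n / (1 + (n - 1) * rho) independent samples.

```python
def effective_sample_size(n, rho):
    """Effective number of independent samples for n equicorrelated
    samples with pairwise correlation rho (design-effect formula):
    n_eff = n / (1 + (n - 1) * rho)."""
    return n / (1 + (n - 1) * rho)

print(effective_sample_size(3, 0.0))  # 3.0: independent ratings count in full
print(effective_sample_size(3, 0.9))  # ~1.07: three sequels, barely one sample
```

With rho = 0 the formula recovers the full n, and as rho approaches 1 it collapses toward a single effective sample, matching the Spiderman 1-2-3 example above.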
__________________
Have faith in probability
Tags
independence |