#1
Hi,

When you select a kernel function for a support vector machine, do you 'peek' at the data? How do you select the kernel? And if you do peek, does that count as 'data snooping'?

Thanks
#2
This answer isn't perfect by any means, but the libsvm FAQ offers one. If you decide in advance that you'll stop with that recommendation no matter what, you haven't snooped.

If you want to try several kernels, treat your selection as a parameter chosen by validation. Then you snoop in the same way you might when choosing C, etc.

If you have knowledge of how the data are produced (as opposed to what the data are), that might help you pick a kernel. For example, if you expect, without snooping, that the response should be symmetric in two variables, pick a kernel such as RBF that takes advantage of that symmetry. Example: homework 2 problems 8-10 use an unknown f that is symmetric in x1 and x2.
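As a minimal sketch of the "kernel as a validation parameter" idea: the snippet below puts the kernel in the same cross-validated grid as C, so any selection is charged to validation and the test set is touched exactly once. It uses scikit-learn with a synthetic stand-in dataset; both are my assumptions, not anything from this thread.

```python
# Minimal sketch: select the kernel by cross-validation, just like C.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

# Synthetic stand-in data (illustrative only).
X, y = make_classification(n_samples=300, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The kernel choice sits in the parameter grid next to C, so whatever
# "snooping" the search does is confined to the validation folds.
param_grid = {
    "kernel": ["linear", "poly", "rbf"],
    "C": [0.1, 1, 10],
}
search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X_train, y_train)

print("selected:", search.best_params_)
# The held-out test set is used once, after all selection is done.
print("test accuracy:", search.score(X_test, y_test))
```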
#3
If you're sticking with the same learning algorithm, I think you can account for the amount of snooping you're doing by expanding your hypothesis set. For example, take the non-linearly-separable case from class where the target function was a circle. We could start with H1, so we have weights for x0, x1, and x2. If we see poor generalization (high Ein or Eval), we can then go to H2, so now we have:

phi = { 1, x1, x2, x1^2, x1*x2, x2^2 }

Since H2 includes H1, the extra features are counted towards the VC dimension, and we should be OK. Where we would get into trouble with snooping is realizing H1 didn't work and going to:

phi = { 1, x1^2, x1*x2, x2^2 }

and not charging for using x1 and x2, even though we already tested them in a previous run.
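A minimal sketch of the nested-hypothesis-set point above, assuming the circle target f(x) = sign(x1^2 + x2^2 - r^2) from lecture; the data, radius, and use of scikit-learn's Perceptron are my illustrative assumptions:

```python
# Minimal sketch: nested hypothesis sets H1 (linear) and H2 (2nd-order).
import numpy as np
from sklearn.linear_model import Perceptron

# Assumed circle target: f(x) = sign(x1^2 + x2^2 - 0.5).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sign(X[:, 0] ** 2 + X[:, 1] ** 2 - 0.5)

def phi_h1(X):
    # H1: features (1, x1, x2) -- cannot separate a circle.
    return np.column_stack([np.ones(len(X)), X])

def phi_h2(X):
    # H2: full second-order transform. It contains H1, so moving from
    # H1 to H2 is paid for by the larger VC dimension of H2.
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack(
        [np.ones(len(X)), x1, x2, x1**2, x1 * x2, x2**2]
    )

for name, phi in [("H1", phi_h1), ("H2", phi_h2)]:
    clf = Perceptron(fit_intercept=False, random_state=0)
    clf.fit(phi(X), y)
    print(name, "training accuracy:", clf.score(phi(X), y))

# The snooping trap: after seeing H1 fail, dropping x1 and x2 to get
# phi = {1, x1^2, x1*x2, x2^2} and charging only for the smaller set,
# even though x1 and x2 were already tried in an earlier run.
```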
Tags: data snooping, kernel methods, support vector machines