#1
In the lectures, it's mentioned that often with an SVM you get the happy news that you have just a few support vectors, and you therefore know that your VC dimension is small and you can expect good generalization.
But how many vectors do you need before you're unhappy? Let's suppose you have a dataset with 7000 observations and 550 variables, an easy supposition for me because I do. Suppose you run an SVM and you discover that with a linear kernel you have some 700 support vectors, and with a radial kernel you have some 2000. That seems like a lot; nearly a third of the points are support vectors. But if you attack the problem with another machine learning method like neural nets or multinomial regression, you will also have one or two thousand parameters, or maybe more, so you will also have a big VC dimension.

So should you be happy with the 2000 support vectors if you look like you're getting good generalization in cross-validation? Or should you be happy regardless of the number of support vectors, as long as cross-validation shows good generalization? Or does VC dimension not matter at all if the cross-validation news is good?
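For anyone who wants to reproduce this kind of count, here is a minimal sketch, assuming scikit-learn and a synthetic stand-in for the 7000-by-550 dataset (the data, C and gamma are illustrative, not the poster's actual setup). It fits a linear and an RBF SVM, counts the support vectors, and compares against the #SV/(N-1) rule of thumb quoted in the lectures alongside the cross-validation score the post asks about.

[CODE]
# Minimal sketch: synthetic data stands in for the poster's 7000 x 550 dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=7000, n_features=550, n_informative=50,
                           random_state=0)

for name, model in [("linear", SVC(kernel="linear", C=1.0)),
                    ("rbf",    SVC(kernel="rbf", C=1.0, gamma="scale"))]:
    model.fit(X, y)
    n_sv  = model.support_.size                        # total number of support vectors
    bound = n_sv / (len(y) - 1)                        # #SV/(N-1) rule of thumb from the lectures
    cv    = cross_val_score(model, X, y, cv=5).mean()  # cross-validated accuracy
    print(f"{name}: {n_sv} SVs, #SV/(N-1) = {bound:.2f}, CV accuracy = {cv:.3f}")
[/CODE]

Whether only the margin support vectors should enter that count in the soft-margin case is taken up later in this thread.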
#2
Of course there are situations where neither SVM nor other models will perform to a satisfactory level, as we would expect if the resources of data are not adequate to capture the complexity of the target.
__________________
Where everyone thinks alike, no one thinks very much
#3
Mentioning the VC dimension brings up something I considered briefly.
It's been said that when we make decisions based on seeing the data, we should account for all of the options we considered when thinking about generalization: in the extreme case, data snooping, and in the lesser case, the small amount of contamination that cross-validation adds. But what about, say, a "failed" SVM? For example, we try the SVM hypothesis, get back 500 support vectors out of 1000 points, and decide to change the model because the first one won't generalize. Realistically, if I then go to a different kernel or a neural network or something else, that model doesn't care whether it was run before or after another one; it will produce the same result either way. But I could also see the interpretation where the rejected SVM counts as a hypothesis space I explored, much like tweaking parameters based on the data. To what degree is that the case? Presumably there would be a tradeoff between accepting the weak model and accepting whatever weaker generalization results, which I guess could probably be automated, too.
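One way to make that accounting concrete (this is just the standard union-bound argument from the VC lectures, treating each model tried on the same data as one effective choice, which is admittedly a rough simplification): choosing among M tried models degrades the Hoeffding-style guarantee for the final choice g by a factor of M,

\mathbb{P}\big[\,|E_{\text{in}}(g) - E_{\text{out}}(g)| > \epsilon\,\big] \;\le\; 2M e^{-2\epsilon^{2} N},
\qquad\text{equivalently}\qquad
E_{\text{out}}(g) \;\le\; E_{\text{in}}(g) + \sqrt{\tfrac{1}{2N}\ln\tfrac{2M}{\delta}}\,.

With M = 2 (the discarded SVM plus its replacement) the penalty enters only through the \ln M inside the square root, so a single failed attempt costs little; the factor compounds if the try-and-discard cycle is repeated many times.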
#4
[formulas rendered as images are missing here] where the size of the data set is around 400. I would be tempted to choose [missing formula].
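For readers filling in the missing formulas from context, the comparison presumably leans on the support-vector bound from Lecture 14; the N = 400 figure is the poster's, while the example SV counts below are made up for illustration:

\mathbb{E}[E_{\text{out}}] \;\le\; \frac{\mathbb{E}[\#\,\text{SVs}]}{N-1}.

With N = 400, a model with 20 support vectors carries a bound of 20/399 \approx 5\%, while one with 200 support vectors only guarantees 200/399 \approx 50\%.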
#5
__________________
Where everyone thinks alike, no one thinks very much
#6
__________________
Where everyone thinks alike, no one thinks very much
#7
Thanks for your clarifications -- I have some questions in the context of this thread, problems 2, 3, etc. of the homework, and also the generalization bounds.

1. Just a reiteration for clarity: for the bound at the end of Lecture 14 as applied to soft-margin SVMs, by number of SVs we mean the number of margin SVs, right? That would be consistent with the thought process that the non-margin SVs end up at the constraints (0 or C), so they aren't getting full 'freedom of expression' and therefore aren't 'independent parameters'.

2. I have been trying problems 2, 3, etc. using cvxopt. I used a simplistic rule: if an alpha is very close (say within a range of a0 = 10^-5) to 0 or C, I round it to 0 or C respectively, and the remaining alphas become margin SVs. If the b's corresponding to these are consistent (meaning within a range of b0 = 10^-3 of each other), then I conclude that there is nothing fundamentally unsound in what I am doing. I came to a0 and b0 by some trial and error. Is this a sound approach?

3. I would imagine the ranges a0 and b0 can be derived in some principled way -- for instance, I didn't really account for the relationship between a0 and b0, or their relationship to cvxopt's numeric error tolerance. Such a principled choice of support vectors would be part of what packages like libsvm provide -- is that correct?

4. I found that in problems 2 and 3, some of the classifiers have a single-digit number of margin SVs and some ran into multiple thousands. I am somewhat uncomfortable about this huge variation, but a visual perusal of the thousands of distinct evaluations of b indicated they are all close to each other. Moreover, the same code generates numbers that are essentially consistent with the margin support vectors and Ein discussed in the classification problem in the thread http://book.caltech.edu/bookforum/showthread.php?t=4044. So I am hoping I am right in assuming that the distinct values of b being close to each other is a good indicator of soundness. Any comments?

5. Finally, in the context of the discussion in Lecture 17 on data snooping: we would have to use 45 SVMs (one versus one) or 10 SVMs (one versus rest) for our 10-way classification problem. The number of margin SVs ranges from single digits to thousands. In this context, how should we view the theoretical generalization bounds? They would seem to fail the rule of thumb of a ratio of 10. But if the Eout on actual test data is low, we can still go ahead with this?
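For what it's worth, here is a minimal sketch of the procedure described in point 2, assuming the alphas come from a QP solver such as cvxopt and that the labels y, the kernel matrix K, and the parameter C are already at hand (the tolerances a0 and b0 are the illustrative values above, and all names are placeholders, not part of any package's API):

[CODE]
import numpy as np

A0, B0 = 1e-5, 1e-3   # illustrative tolerances, chosen by trial and error as in the post

def classify_alphas(alpha, C, a0=A0):
    """Split indices into zero alphas, margin SVs (0 < alpha < C), and bound SVs (alpha ~ C)."""
    zero   = np.where(alpha <= a0)[0]
    bound  = np.where(alpha >= C - a0)[0]
    margin = np.where((alpha > a0) & (alpha < C - a0))[0]
    return zero, margin, bound

def check_b_consistency(alpha, y, K, margin, a0=A0, b0=B0):
    """Compute b from every margin SV and report whether the values agree to within b0."""
    sv = alpha > a0                                   # all support vectors enter the sum
    b_vals = np.array([y[s] - np.sum(alpha[sv] * y[sv] * K[sv, s]) for s in margin])
    spread = b_vals.max() - b_vals.min()
    return b_vals.mean(), spread, spread <= b0

# Example usage, assuming alpha, y, K and C came from the QP solution:
#   zero, margin, bound = classify_alphas(alpha, C)
#   b_mean, spread, ok  = check_b_consistency(alpha, y, K, margin)
[/CODE]

In exact arithmetic every margin SV yields the same b, so the spread is a direct readout of the numerical slack the solver's tolerance leaves, which is the connection to point 3.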
#8
Hope this helps.
__________________
When one teaches, two learn.
#9
Many thanks for your reply, Prof. Lin.
#10
I found that even in some of the HW cases (the 1-versus-5 classification with Q = 5, for instance) I didn't end up with a set of thresholds (b values) that agree with each other to within a small tolerance -- I was wondering if that's unexpected or indicative of a bug in my implementation. On the other hand, if I used the above averaging approach (and used some heuristic a0 and b0 to decide the margin SVs), I would probably not be aware of this discrepancy in b values in the first place. Is there a way to get around this?
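For reference, assuming the standard soft-margin formulation from the lectures, the threshold obtained from any single margin support vector x_s (one with 0 < \alpha_s < C) and its averaged form are

b \;=\; y_s - \sum_{\alpha_n > 0} \alpha_n y_n K(x_n, x_s),
\qquad
\bar b \;=\; \frac{1}{|M|} \sum_{s \in M} \Big( y_s - \sum_{\alpha_n > 0} \alpha_n y_n K(x_n, x_s) \Big),

where M is the set of margin support vectors. Reporting the spread (or standard deviation) of the individual b values alongside \bar b is one simple way to keep the diagnostic that plain averaging would otherwise hide.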