#1
If the input distribution has high density near the target boundary, the sample will likely contain points near the boundary, so a large-margin classifier and a small-margin classifier will end up similar to each other. If the input distribution has low density near the boundary, the sample will have few near-boundary points, giving an advantage to a large-margin classifier -- but then the probability of drawing a near-boundary point during out-of-sample use is also low, so E_out for small-margin classifiers is not much affected.
Why does this not limit the advantage of large-margin classifiers in practice?
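For what it's worth, the second effect is easy to quantify with a quick Monte Carlo. Here is a sketch in Python (assuming numpy; the band width eps and the particular low-density distribution are made up for illustration). Two classifiers that agree everywhere except within a band of half-width eps around the boundary can differ in E_out by at most the probability mass of that band:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
eps = 0.1  # the two classifiers disagree only within |x| < eps of the boundary x = 0

# High density near the boundary: x uniform on [-1, 1]
x_hi = rng.uniform(-1.0, 1.0, n)
# Low density near the boundary: push the mass away from 0 (|x| = sqrt(U))
x_lo = np.sign(rng.uniform(-1.0, 1.0, n)) * np.sqrt(rng.uniform(0.0, 1.0, n))

# Probability that a fresh point lands where the classifiers can disagree;
# this bounds the difference in E_out between them.
print("P(near boundary), uniform density:", np.mean(np.abs(x_hi) < eps))  # ~0.10
print("P(near boundary), low density:    ", np.mean(np.abs(x_lo) < eps))  # ~0.01
```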
#2
You seem to have an interesting way of looking at the situation here, but I want to clarify the setup first. The sample here is the data set that will be used to train the SVM, right? If so, can you explain why the large-margin and small-margin classifiers will be similar in the situation you describe?
__________________
Where everyone thinks alike, no one thinks very much
#3
Quote:
#4
Quote:
This observation does not affect the answers to Problems 8 and 9 one way or the other, since these problems only address which of the two methods is better, whether it is slightly better or significantly better.
__________________
Where everyone thinks alike, no one thinks very much
#5
Quote:
If few training points fall near the true boundary, this could be because (1) the data set is too small, or (2) the underlying data distribution has low density near the boundary. In case (1), the SVM has an advantage because its boundary is more likely to track the true boundary than an arbitrary linear separator such as the one PLA finds. In case (2), the SVM still does better near the boundary, but the density of points there is so small that E_out is not much improved by getting them right. I guess that in practice, (1) is more common?
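Case (1) can be tested directly with a simulation in the spirit of Problems 8 and 9. Below is a sketch in Python (assuming numpy and scikit-learn; the run count, the test-set size, and C=1e6 as a stand-in for a hard margin are all arbitrary choices, not part of the problem statement):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

def random_target():
    # Random target line through two uniform points in [-1, 1]^2
    p, q = rng.uniform(-1, 1, (2, 2))
    w = np.array([q[1] - p[1], p[0] - q[0]])   # normal vector of the line
    return w, -w @ p                           # f(x) = sign(w @ x + b)

def labeled_sample(n, w, b):
    while True:                                # redraw until both classes appear
        X = rng.uniform(-1, 1, (n, 2))
        y = np.sign(X @ w + b)
        if abs(y.sum()) < n:
            return X, y

def pla(X, y):
    Xa = np.hstack([X, np.ones((len(X), 1))])  # absorb the bias term
    wb = np.zeros(3)
    while True:
        wrong = np.flatnonzero(np.sign(Xa @ wb) != y)
        if wrong.size == 0:
            return wb
        i = rng.choice(wrong)                  # update on a random misclassified point
        wb += y[i] * Xa[i]

for n in (10, 100):
    runs, svm_wins = 200, 0
    for _ in range(runs):
        w, b = random_target()
        X, y = labeled_sample(n, w, b)
        Xt = rng.uniform(-1, 1, (5000, 2))     # fresh points to estimate E_out
        yt = np.sign(Xt @ w + b)
        wb = pla(X, y)
        e_pla = np.mean(np.sign(Xt @ wb[:2] + wb[2]) != yt)
        svm = SVC(kernel="linear", C=1e6).fit(X, y)   # large C ~ hard margin
        e_svm = np.mean(svm.predict(Xt) != yt)
        svm_wins += e_svm < e_pla
    print(f"N={n}: SVM beats PLA in {svm_wins / runs:.0%} of runs")
```

The fraction of runs in which SVM wins at each N gives a direct check on how the advantage depends on the sample size.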
#6
In the problem, the points are uniformly distributed. With a smaller number of points, the gap between the two classes is, statistically, larger. Given the N points, the line that generated the classification could be anywhere in that gap. The SVM solution should be close to the center of the gap, while my guess is that PLA can end up anywhere in the gap.
Given that, you can see that the SVM solution should be closer to the target more often, though it is not so easy to guess how often.
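One way to make the geometry concrete is to measure, on the same data sets, the smallest distance from a training point to each learned line. A sketch in Python (assuming numpy and scikit-learn; N=20, the 200 runs, and C=1e6 as a hard-margin stand-in are arbitrary choices):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)

def margin(X, w, b):
    # Smallest distance from a training point to the line w @ x + b = 0
    return np.min(np.abs(X @ w + b)) / np.linalg.norm(w)

pla_margins, svm_margins = [], []
while len(pla_margins) < 200:
    p, q = rng.uniform(-1, 1, (2, 2))          # random target line through p and q
    wt = np.array([q[1] - p[1], p[0] - q[0]])
    bt = -wt @ p
    X = rng.uniform(-1, 1, (20, 2))
    y = np.sign(X @ wt + bt)
    if abs(y.sum()) == len(y):                 # redraw if only one class present
        continue
    Xa = np.hstack([X, np.ones((len(X), 1))])  # PLA with the bias absorbed
    wb = np.zeros(3)
    while (wrong := np.flatnonzero(np.sign(Xa @ wb) != y)).size:
        i = rng.choice(wrong)                  # update on a random misclassified point
        wb += y[i] * Xa[i]
    pla_margins.append(margin(X, wb[:2], wb[2]))
    svm = SVC(kernel="linear", C=1e6).fit(X, y)
    svm_margins.append(margin(X, svm.coef_[0], svm.intercept_[0]))

print("average margin, PLA:", np.mean(pla_margins))
print("average margin, SVM:", np.mean(svm_margins))
```

The hard-margin SVM achieves the largest possible margin on separable data by construction, so how far the PLA average falls below the SVM average indicates how far from the center of the gap PLA typically lands.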