#11




Re: Q9, SVM vs PLA
@jlaurentum:
These are the parameters I fed to ipop: Code:
H = sweep(XIn[,2:3], MARGIN = 1, yIn, '*')  # decomposed form: ipop accepts Z with H = ZZ'
c = matrix(rep(-1, n))  # ipop minimises c'x + (1/2) x'Hx, so c = -1 for max sum(alpha)
A = t(yIn)
b = 0
l = matrix(rep(0, n))
u = matrix(rep(1e7, n))
r = 0
sv = ipop(c, H, A, b, l, u, r)
Last edited by catherine; 05-23-2013 at 03:32 AM. Reason: more details
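Not part of the original post, but for completeness, a sketch of one way to recover the weight vector and bias from the ipop result above. Variable names (XIn, yIn, sv) are assumed from the snippet, and the support-vector tolerance is an arbitrary choice; primal() is kernlab's accessor for the solution of an ipop object:

```r
# Hedged sketch: recover (w, b) from the dual solution.  Assumes XIn is
# n x 3 with XIn[,1] = x0 = 1, yIn holds +/-1 labels, and sv is the
# result of the ipop() call above.
alpha <- primal(sv)            # the alphas found by ipop
svi   <- which(alpha > 1e-5)   # support vectors: alpha_i > 0 (tolerance assumed)
# w = sum over support vectors of alpha_i * y_i * x_i
w <- colSums(alpha[svi] * yIn[svi] * XIn[svi, 2:3])
# b from any one support vector s, using y_s * (w . x_s + b) = 1
b <- yIn[svi[1]] - sum(w * XIn[svi[1], 2:3])
```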
#12




Re: Q9, SVM vs PLA
Quote:
The u parameter is just a vector of upper bounds for the inequalities, but our problem only has lower bounds. I wanted to use a vector of Infs, but ipop didn't like that, so I just played around to find a value for u that would work. For some reason I found extremely large values gave errors, but merely large ones (like the one you used) worked fine. I don't know why either. As you probably realised, all you need to check is that none of the alphas attains the upper bound you use; if that is the case, the upper bounds have had no effect. How did you arrive at your choice of u? 
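That check takes one line in R (hedged: assumes the sv object and the 1e7 bound from the earlier ipop call, and an arbitrary tolerance):

```r
# None of the alphas should reach the artificial upper bound 1e7;
# if any does, the bound was active and the solution is suspect.
alpha <- primal(sv)
stopifnot(all(alpha < 1e7 - 1e-3))   # tolerance chosen arbitrarily
```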
#14




Re: Q9, SVM vs PLA
I just followed slide 15 ("The solution - quadratic programming") and the documentation of the kernlab package.

#15




Re: Q9, SVM vs PLA
Quote:
More straightforward is to make the sample big enough. 1000 is a long way short of what you need, because all except 10-20 of those points are accurately classified by both algorithms, so the uncertainty in the estimates is quite apparent.

Suppose you have a method and want to estimate its accuracy. In a number of runs you find that an average of 10 of 1000 random points are misclassified. Each point is a perfectly random sample from a distribution which has about 1% of one value and 99% of the other. In a single run there is huge uncertainty in this estimate: getting 5 or 15 misclassified points is going to happen. Because this is happening with the misclassified points of each of the two methods, the uncertainty in the difference between them is even larger. The consequence is that the advantage of the better method appears a lot smaller when the sample is small, because this noise in the estimates dominates a rather delicate signal.

Hence I used 100,000 random points, so that the number of misclassified points for each method was a lot more stable. Empirically, this gave quite repeatable results. The uncertainty in the misclassification error of each of the two algorithms can be estimated separately by doing a moderate number of repeat runs (e.g. with 10,000 points each) and looking at the range of values found. You can then even combine the runs together and infer a good estimate of the uncertainty on the combined run (based on the variance of the estimate being inversely proportional to the number of samples).

[Could you give a link to the documentation you mentioned? I can't find a reference to "sweep" in the documentation I used at http://cran.r-project.org/web/packag...ab/kernlab.pdf, and I don't quite see what it is doing from the R documentation of this function.] 
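The uncertainty argument above can be made concrete in a couple of lines of R (illustrative only; the misclassified count is Binomial(n, p) with p around 1%, so its standard deviation is sqrt(n*p*(1-p))):

```r
# Spread of the misclassified count for a method with true error p = 1%
# on test sets of size n = 1,000 versus n = 100,000.
p <- 0.01
for (n in c(1000, 100000)) {
  s <- sqrt(n * p * (1 - p))
  cat(sprintf("n = %6d: expect %4.0f errors, sd %5.1f (%4.1f%% relative)\n",
              n, n * p, s, 100 * s / (n * p)))
}
# n = 1000 gives sd ~3.1 on an expected 10 errors (~31% relative error);
# n = 100000 gives sd ~31 on an expected 1000 (~3% relative error).
```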
#16




Re: Q9, SVM vs PLA
Hello Catherine:
I tried your code using the sweep function (which is totally mysterious to me, so like Elroch I'd like to ask how you arrived at this function). I got the following error message (running R in Spanish; roughly, "subscript out of bounds / execution halted"): Code:
Error en sweep(x[, 2:3], MARGIN = 1, y, "*") : subíndice fuera de los límites
Ejecución interrumpida
So I tried my version of the H matrix: Code:
H <- kernelPol(vanilladot(), x, , y)
which failed with (roughly, "system is computationally singular: reciprocal condition number = 1.92544e-16"): Code:
Error en solve.default(AP, c(c.x, c.y)) : sistema es computacionalmente singular: número de condición recíproco = 1.92544e-16
Calls: ipop -> ipop -> solve -> solve.default
Ejecución interrumpida
with these upper bounds: Code:
u <- matrix(rep(1e3, N))
Ahh... quadratic programming and its mysteries! That's why I gave up on ipop altogether and decided to use ksvm: Code:
x <- as.matrix(training_data[,2:3])  # pull out x_0
y <- as.matrix(training_data[,4])
svmmodel <- ksvm(x, y, kernel = "vanilladot", C = 100, type = "C-svc")
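For what it's worth, a hedged sketch of pulling the quantities Q9 cares about out of a ksvm fit like the one above. The accessor names are kernlab's, but the sign convention of b() is easy to get wrong, so treat this as a sketch rather than gospel:

```r
# Support-vector count and hyperplane from the ksvm model.
nSV(svmmodel)                                   # number of support vectors
idx <- alphaindex(svmmodel)[[1]]                # indices of the support vectors
w   <- colSums(coef(svmmodel)[[1]] * x[idx, ])  # coef() returns y_i * alpha_i
b0  <- -b(svmmodel)                             # ksvm's b() is the negated offset
```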
#17




Re: Q9, SVM vs PLA
Hi guys,
Sorry for the confusion:
1. The X matrix in my code excerpt above includes x0 (I used the same matrix for PLA), so leave out the index subsetting if you are using a separate matrix for SVM.
2. sweep(XIn[,2:3], MARGIN=1, yIn, '*') is the same as apply(XIn[,2:3], 2, function(x) {x * yIn}).
3. Here is the kernlab documentation I used: http://cran.r-project.org/web/packag...ab/kernlab.pdf 
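A tiny made-up example of what that sweep call does: it multiplies row i of X by yIn[i], producing the y-scaled data matrix Z; ipop accepts this decomposed form because Z %*% t(Z) equals the dual's H matrix:

```r
# Illustrative data, not from the problem set.
X <- matrix(1:6, nrow = 3)     # three points, two features
y <- c(1, -1, 1)
sweep(X, MARGIN = 1, y, '*')   # the row with y = -1 is negated
```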
#18




Re: Q9, SVM vs PLA
Thanks, Catherine. Does that make sense about the nature of the errors due to sample size?
