Quote:
Originally Posted by elkka
I currently struggle to understand how to use the built-in cross-validation capability. I don't yet know exactly what I don't understand, but I definitely don't understand something.
Specifically, when using 1-vs-1 classification on the digits set, I get an E_in that is close to E_out. But whatever my parameters, cross-validation on the problem gives me 99.8% accuracy, which is way higher than E_in or E_out would suggest. Any ideas?
Code:
% LIBSVM options need a leading dash on each flag
cva = svmtrain(Y, X, '-t 1 -d 2 -g 1 -r 1 -v 10 -c 0.01');

I had this situation too: the cv accuracy is always the same. I am also saving the models I generate (using the svm_save_model function in the Python script), but looking at them, I don't understand the values shown for the data points. Maybe scaling the data would help, as it did for kkkkk.
I have now applied data scaling (my own version), and this did produce distinct cva values for the various C values, although cva still seems high.
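For anyone curious, my "own version" of scaling is roughly a per-feature min-max rescale to [-1, 1], which is what the LIBSVM guide recommends. This is just a sketch, not my exact code, and the scale_features name is mine:

```python
def scale_features(X, lo=-1.0, hi=1.0):
    """Rescale each column of a list-of-lists feature matrix to [lo, hi]."""
    cols = list(zip(*X))
    mins = [min(c) for c in cols]
    maxs = [max(c) for c in cols]
    scaled = []
    for row in X:
        new_row = []
        for v, mn, mx in zip(row, mins, maxs):
            if mx == mn:
                new_row.append(lo)  # constant feature: map everything to lo
            else:
                new_row.append(lo + (hi - lo) * (v - mn) / (mx - mn))
        scaled.append(new_row)
    return scaled

# Two features with very different ranges end up on the same scale
X = [[0.0, 10.0], [5.0, 20.0], [10.0, 30.0]]
print(scale_features(X))  # [[-1.0, -1.0], [0.0, 0.0], [1.0, 1.0]]
```

One caveat: the mins and maxs should be computed on the training set only and then reused to scale the test set, otherwise E_out is measured on differently-scaled data.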