Should SVMs ALWAYS converge to the same solution given the same data?

I've been running a few tests on Q7, and I find that if I randomize the order of the data in both the training and test sets, I get different solutions/errors. I'm now wondering whether I should be averaging over, say, a hundred runs, and whether I need to go back to the previous questions and do the same where necessary. Argh, if so! Can somebody please confirm? Maybe my shuffling code is incorrect, but it looks correct to me:

Code:

trainingData = ReadCaltechFile('features.train');
% shuffle the rows -- randperm is the idiomatic Octave way
% (sorting a vector of random numbers gives the same permutation):
ix = randperm(rows(trainingData));
trainingData = trainingData(ix, :);
% first column is the label, the remaining columns are the features:
y = double(trainingData(:, 1));
X = double(trainingData(:, 2:end));

...and similar for test data.
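To convince myself that shuffling alone shouldn't change anything, I also tried a quick sanity check in Python with scikit-learn (a sketch on a toy separable dataset standing in for features.train, not the actual homework data): the soft-margin SVM objective is a convex QP with a unique optimal weight vector, so training on a permuted copy of the same data should land on essentially the same solution, up to solver tolerance.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy stand-in for features.train: 2-D points, linearly separable
# with a clear gap between the two classes.
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(int)
X[:, 0] += np.where(y == 1, 1.0, -1.0)  # widen the class gap

# Train once on the data as-is.
clf_a = SVC(kernel="linear", C=100.0).fit(X, y)

# Train again on a random permutation of the same rows.
perm = rng.permutation(len(y))
clf_b = SVC(kernel="linear", C=100.0).fit(X[perm], y[perm])

# The learned weight vectors should agree to within solver tolerance.
print(np.max(np.abs(clf_a.coef_ - clf_b.coef_)))
```

If the gap here were large, that would point to a bug in my shuffling (e.g. shuffling X and y with different permutations); if it's tiny, the row order genuinely shouldn't matter.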