I'm another newbie to QP solvers, using CVXOPT. I'm also new to NumPy, whose speed made running many large tests feasible.

These helped:

From a post by kkkkk from the first (Spring 2012) run of this course:

http://ascratchpad.blogspot.com/2010...erivation.html
http://www.mblondel.org/journal/2010...nes-in-python/
And:

http://courses.csail.mit.edu/6.867/w.../Qp-cvxopt.pdf
Ensure that double floats are used everywhere (i.e. cvxopt.matrix(..., tc='d')).

The docs state that these tolerances are the defaults (even though the options dictionary itself starts out empty):

cvxopt.solvers.options['abstol'] = 1e-7 # Default?

cvxopt.solvers.options['reltol'] = 1e-6 # Default?

cvxopt.solvers.options['feastol'] = 1e-7 # Default?

Tightening things to

cvxopt.solvers.options['abstol'] = 1e-9 # <<<

cvxopt.solvers.options['reltol'] = 1e-8 # <<<

cvxopt.solvers.options['feastol'] = 1e-9 # <<<

reduced the number of extra SVs and drove E_in from ~1% to 0 in the 100-sample experiment. Tightening further upset QP, producing a handful of "Terminated (singular KKT matrix)" messages. Note: this was all thud-and-blunder experimenting, with no understanding of CVXOPT's internals.

The fiddle invented by elkka (see the Spring postings) to improve Octave's QP made no significant difference to my CVXOPT runs beyond reducing the number of extra SVs a little further, though it did eliminate E_in on its own, before the tolerance changes above were made.

CVXOPT's iteration count never ran away (~7 .. ~24 for 10 .. 500 points).

HTH.