Ziad Hatahet, 05-27-2013, 12:34 AM

*ANSWER* Question 9

I wanted to confirm my answer against what others are getting.

I coded up the solution in MATLAB. I seem to have gotten somewhat lucky when I answered [d] for this question, since the ratio from one of my runs was 0.61, which is closer to 70% than to 50%.

However, re-running the code after submission, the result fluctuates around 0.6, sometimes dropping below that.

My code follows. I would appreciate it if anyone has insights into where I am going wrong. The problem is most likely in the % Quadratic programming % section, since the PLA code is lifted straight from the previous assignments.
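
For reference, this is my own summary (not taken from the assignment) of the hard-margin SVM dual that the % Quadratic programming % section is meant to hand to quadprog, written in quadprog's minimization form:

\[
\min_{\alpha}\;\tfrac{1}{2}\,\alpha^{\top} H \alpha \;-\; \mathbf{1}^{\top}\alpha,
\qquad H_{nm} = y_n y_m\, x_n^{\top} x_m,
\qquad \text{s.t.}\;\; \sum_{n=1}^{N} y_n \alpha_n = 0,\quad \alpha_n \ge 0
\]

which is what I believe maps to H = (y*y').*(X_svm*X_svm'), f = -ones(N, 1), the equality constraint A = y', b = 0, and the lower bound lb = zeros(N, 1) in the code below.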

Code:
clear all
clc

N = 100;
Ntest = 1000;
TRIALS = 1000;
better = 0;
sv_count = 0;

for ii = 1:TRIALS
  % Generate random line
  p1 = 2*rand(2, 1) - 1;
  p2 = 2*rand(2, 1) - 1;
  % Possibility of divide by 0. Better to just use w0 + x1*w1 + x2*w2 form.
  slope = (p2(2) - p1(2)) / (p2(1) - p1(1));
  intercept = p1(2) - slope*p1(1);
  line = [slope intercept];

  % Generate random points
  % points are (x, y) coords
  points = 2*rand(N, 2) - 1;
  y = sign(points(:, 2) - polyval(line, points(:, 1)));
  X = [ones(size(points, 1), 1) points]; % Add extra dimension

  % PLA
  wpla = zeros(3, 1);
  while true
    % Classify
    yfit = sign(X*wpla);
    combined = [X y];
    bad = combined(yfit ~= y, :);

    if size(bad, 1) == 0
      break;
    end

    i = randi(size(bad, 1), 1);
    wpla = wpla + bad(i, 4)*bad(i, 1:3)';
  end

  % Quadratic programming
  X_svm = X(:, 2:3); % Discard the constant term
  H = (y*y').*(X_svm*X_svm');
  f = -ones(N, 1);
  A = y';
  b = 0;
  lb = zeros(N, 1);
  ub = 1e6*ones(N, 1);
  options = optimset('Algorithm','interior-point-convex','MaxIter',1500, 'Display', 'off');

  alpha = quadprog(H, f, [], [], A, b, lb, ub, [], options);
  wquad = zeros(3, 1);
  wquad(2) = sum(X_svm(:, 1).*(alpha.*y));
  wquad(3) = sum(X_svm(:, 2).*(alpha.*y));

  % Get index of maximum alpha
  [maxval, maxind] = max(alpha);
  wquad(1) = 1/y(maxind) - X_svm(maxind, :)*wquad(2:3);
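  % (Side note, my own suggestion rather than part of the assignment:
  % solving for the bias from every support vector and averaging is
  % usually more stable numerically than using only the single largest
  % alpha. Left commented out so the run above is unchanged.)
  % sv = find(alpha > 0.001);
  % wquad(1) = mean(y(sv) - X_svm(sv, :)*wquad(2:3));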

  % Eout
  points = 2*rand(Ntest, 2) - 1;
  X = [ones(size(points, 1), 1) points];
  y = 2*(points(:, 2) > polyval(line, points(:, 1))) - 1;

  e_pla = sum(sign(X*wpla) ~= y)/size(X, 1);
  e_svm = sum(sign(X*wquad) ~= y)/size(X, 1);
  better = better + (e_svm < e_pla);

  % SVM
  sv_count = sv_count + sum(alpha > 0.001); % Floating point tolerance
end

(better/TRIALS) * 100 % Percentage of runs where the SVM beats PLA out of sample
sv_count/TRIALS       % Average number of support vectors per run
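
As a quick sanity check on the QP output (just a sketch of what I would run on a single trial, placed after wquad is computed and before X and y are overwritten with the test points; variable names match the code above), the recovered wquad should give every training point a functional margin of at least about 1, the support vectors should sit right at margin 1, and the dual equality constraint should hold:

Code:
% Sanity check for one trial
margins = y .* (X*wquad);          % functional margins y_n*(w0 + w'*x_n)
sv = alpha > 0.001;                % same support-vector tolerance as above
fprintf('min margin    = %.4f (should be >= ~1)\n', min(margins));
fprintf('margin at SVs = %.4f .. %.4f (should be ~1)\n', min(margins(sv)), max(margins(sv)));
fprintf('sum(alpha.*y) = %.2e (should be ~0)\n', sum(alpha.*y));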
