I wanted to experiment a bit with the perceptron algorithm to understand it better, so I came up with a simple matchmaking scenario. I have a 100x2 training data matrix: the first feature is height, between 165 and 185, and the second feature is weight (the physical weight of a person), between 60 and 80. My target function is a very simple one:
Code:
function [res] = target(X)
  m = size(X, 1);
  res = zeros(m, 1); % one label per training example
  for j = 1:m
    if (X(j,1) > 170 && X(j,2) > 65)
      res(j) = 1;
    else
      res(j) = -1;
    end
  end
end
It returns 1 if a person is taller than 170 cm and heavier than 65 kg, and -1 otherwise. I use this function to generate the labels for an X matrix that consists of random values within the aforementioned boundaries.
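For reference, the training data is generated roughly along these lines (a sketch; the exact call to rand may differ):
Code:
m = 100;
% heights uniformly in [165, 185], weights uniformly in [60, 80]
X = [165 + 20*rand(m, 1), 60 + 20*rand(m, 1)];
y = target(X); % labels from the target function, which I then treat as unknown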
The next logical step is to learn a hypothesis function g(X) that acts much like the target function, which I treat as unknown. So here is my implementation of the perceptron:
Code:
function [w] = perceptron(X, y)
  X = [ones(size(X, 1), 1) X]; % add the bias term
  w = zeros(size(X, 2), 1);    % init weights to zero
  m = length(y);
  iterations = 0; % pass counter, just for debugging
  while (true)
    iterations = iterations + 1;
    wrong = 0;
    for j = 1:m
      if (sign(X(j, :) * w) != y(j))
        w = w + (y(j) * X(j, :))'; % update on the first misclassified point
        wrong = 1;
        break;
      endif
    end % inner for
    if (wrong == 0)
      break; % no misclassified points left
    endif
  end % outer loop
end % function
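For completeness, this is roughly how I intended to use the result, evaluating the hypothesis g(X) = sign([1 X]*w) against the labels (sketch only):
Code:
w = perceptron(X, y);
g = sign([ones(size(X, 1), 1) X] * w); % predictions from the learned hypothesis
errors = sum(g != y) % number of misclassified training points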
Unfortunately, it doesn't converge.

Are my assumptions about the problem wrong, or is there a fault in my implementation?