Thread: *ANSWER* q8/9
View Single Post
  #4  
Old 05-07-2013, 06:03 PM
apbarraza apbarraza is offline
Junior Member
 
Join Date: Jan 2013
Posts: 4
Default Re: * answer * q8/9

I think I fixed the problem. I was confused and generated random points for each epoch and what was really required was a permutation over the original training points.
This is my fix:
Code:
function [N, theta, Eout] = trainLogisticRegression(eta, mfunc, bfunc, numPointsPerEpoch)
% Initialize some useful values
N=0;
theta = zeros(3, 1);
theta_last =  theta + 1;

%Generate training points
	[X, y]  = getRandomPoints(numPointsPerEpoch, mfunc, bfunc);
	% Add intercept term
	X = [ones(numPointsPerEpoch, 1) X];

while ((abs(theta-theta_last)>0.01)==1), %Iterate until convergence
	N = N +1;
	theta_last = theta;
	%Permutation of training points
	perm = randperm(numPointsPerEpoch);
	%Gradient Descent
	for i = 1:numPointsPerEpoch
		e = y(perm(i)).*X(perm(i), :)./(1+exp(y(perm(i))*(theta'*X(perm(i),:)')));
		%Adjusting parameters given gradient
		theta = theta + eta*e';
	end;
end

%New set of points to calculate the error
[X, y]  = getRandomPoints(numPointsPerEpoch, mfunc, bfunc);
% Add intercept term
X = [ones(numPointsPerEpoch, 1) X];

%Error measure
Eout1 = 1/numPointsPerEpoch*(sum(log( 1 + exp(-1*y.*(theta'*X')'))));
Eout2= 1/numPointsPerEpoch*sum((h(theta,X) - 1/2*(1+y)).^2);

Eout = [Eout1 Eout2];

end
I am getting an average N of 37.100.
Eout1 is given me an average of 0.3 which is not the required answer
Eout2 is given me an average of 0.09 which is close to the final answer

I´m wondering if there is something still wrong with what I´m doing...
Reply With Quote