I understand how to generate the target function $f$ and the $N$ data points in $\mathcal{X} = [-1, 1] \times [-1, 1]$. I also know how to create permutations of these data points, and that in each "epoch" of the algorithm I will want to visit each of the $N$ points of a (newly) permuted data set. From here things get a little fuzzier. Each epoch will require a set of $N$ "executions" of the SGD algorithm, running through all the points of its permuted data set. In each of these executions I start with the weight vector $\mathbf{w}$ initialized to zeros. What do I do with the weight vectors resulting from SGD executions $1, \ldots, N$ *within* an epoch? Do I average them? How do I terminate the SGD algorithm in an execution? Do I let it run for a prescribed number of iterations, or do I terminate when the gradient gets small?
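For what it's worth, here is how I currently picture one epoch in code: a minimal Python sketch under the reading that a *single* weight vector is updated point by point through the permutation (nothing averaged across the per-point updates). The name `sgd_epoch` and the learning rate `eta` are mine, not from the assignment:

```python
import numpy as np

def sgd_epoch(w, X, y, eta=0.01, rng=None):
    """One pass of logistic-regression SGD over a freshly permuted data set.

    A single weight vector w is carried through all N points; the per-point
    updates are applied sequentially, not averaged. (This is my guess at
    what an 'epoch' means here.)
    """
    if rng is None:
        rng = np.random.default_rng()
    for i in rng.permutation(len(y)):
        # Gradient of the cross-entropy error at the single point (x_i, y_i)
        grad = -y[i] * X[i] / (1.0 + np.exp(y[i] * np.dot(w, X[i])))
        w = w - eta * grad
    return w
```

Under this reading there is no separate "execution" to terminate inside an epoch: each of the $N$ updates is just one gradient step, and the termination question only arises for the outer loop over epochs.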

Most of my confusion stems from interpreting:

*Run Logistic Regression with Stochastic Gradient Descent to find g and estimate Eout (the cross entropy error) by generating a sufficiently large separate set of points to evaluate the error. Repeat the experiment for 100 runs with different targets and take the average.*

The first part of it makes sense: run the experiment (multiple epochs) to get a final weight vector $\mathbf{w}$ (which defines $g$), then generate a large set of points on which to evaluate the error. The second part, "Repeat the experiment for 100 runs with different targets and take the average", I find confusing. What "experiment"? The whole multiple-epoch experiment? And "with different targets" - does this mean different target functions? Different samples? Different permutations?

I fear I made a wrong turn in Albuquerque. Some clarification would be useful.

thanks