That's not it. Think of it this way:

A neural network **model** is a layout (how many layers, how many nodes in each layer) along with the values of the weights (e.g. w1, w2, and so on for all the weights). The model describes a function, in that it takes an input, x, and produces an output, y.
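To make the "model = layout + weights = a function" idea concrete, here is a minimal sketch. The particular layout (2 inputs, 3 tanh hidden units, 1 linear output) and the names `predict` and `weights` are illustrative choices, not anything specified in the course:

```python
import numpy as np

def predict(x, weights):
    """A tiny 2-layer network as a pure function: input x -> output y.
    `weights` is a list [(W1, b1), (W2, b2)].  Fixing the layout and
    the weight values fixes the function."""
    (W1, b1), (W2, b2) = weights
    h = np.tanh(W1 @ x + b1)   # hidden layer activations
    return W2 @ h + b2         # linear output

# One concrete setting of the weights: this IS a specific model.
rng = np.random.default_rng(0)
weights = [(rng.standard_normal((3, 2)), rng.standard_normal(3)),
           (rng.standard_normal((1, 3)), rng.standard_normal(1))]

y = predict(np.array([0.5, -1.0]), weights)  # x in, y out
```

Change any weight value and you get a different function, i.e. a different model with the same layout.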

During learning, the way the learning algorithm figures out what values to assign to the weights is by **optimizing** the error measure, in other words finding the weights that minimize Ein(w). The optimization method (the learning algorithm) described in class is backpropagation, which is pretty standard but won't always find the lowest possible Ein, because it can get stuck in local minima. You are free to use other optimization techniques in hopes of finding a lower Ein, and one such technique is genetic algorithms. The reason a GA isn't a model is that it doesn't specify how to turn x into y. It's just a technique for optimizing a function; in this case it optimizes Ein(w) with respect to w.
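To see why a GA is an optimizer rather than a model, here is a minimal sketch. The GA below never looks inside the model at all; it only evaluates Ein(w) and searches over w. For simplicity the error measure is a stand-in quadratic whose minimum sits at w = (3, -2); in the real setting `Ein` would compute the network's in-sample error for weight vector w. The population size, mutation scale, and generation count are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in error measure with known minimum at w = (3, -2).
# A real Ein(w) would run the network on the training data.
def Ein(w):
    return np.sum((w - np.array([3.0, -2.0])) ** 2)

# Minimal genetic algorithm: keep the best candidates, mutate them.
pop = rng.standard_normal((20, 2))               # candidate weight vectors
for gen in range(200):
    fitness = np.array([Ein(w) for w in pop])
    parents = pop[np.argsort(fitness)[:10]]      # select the 10 best
    children = parents + 0.1 * rng.standard_normal(parents.shape)  # mutate
    pop = np.vstack([parents, children])         # elitism: parents survive

best = pop[np.argmin([Ein(w) for w in pop])]     # the learned weights
```

Swap in backpropagation, simulated annealing, or anything else that drives Ein(w) down and the model (the function from x to y) is unchanged; only the search for good w differs.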