Does the Group Invariance Theorem hold for all linear threshold functions?
In the late 1960s, Minsky and Papert wrote a very famous book called Perceptrons, in which they proved the Group Invariance Theorem: if the set of features is closed under the action of a group of transformations of the input points, and the linear threshold function being computed is invariant under that group action, then the weights can be chosen so that features in the same orbit of the group receive equal weights.
This was historically devastating because it meant that you couldn't do things like learn to recognize whether an odd number of pixels is turned on in an image, unless at least one of your features depends on all of the points in your pointset. So this is a limitation of the perceptron learning algorithm (as opposed to, say, feature selection).
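As a concrete sketch of this (my own illustration in plain Python, not from the course): the two-pixel version of the parity problem is XOR. A perceptron over the raw pixels never stops making mistakes, because XOR is not linearly separable; but adding one feature that depends on *all* the pixels (here the product x1*x2) makes the classes separable, and the perceptron converges:

```python
# Two-pixel parity (XOR): unlearnable from raw pixels alone,
# learnable once a feature touching every pixel is added.

def perceptron_train(data, epochs=1000):
    """Classic perceptron update rule; returns (weights, bias, errors in last epoch)."""
    n = len(data[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        errors = 0
        for x, y in data:  # labels y are 0 or 1
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            if pred != y:
                errors += 1
                sign = 1 if y == 1 else -1
                w = [wi + sign * xi for wi, xi in zip(w, x)]
                b += sign
        if errors == 0:  # converged: a separating hyperplane was found
            break
    return w, b, errors

xor = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

# Raw pixel features only: XOR is not linearly separable, so errors remain.
_, _, err_raw = perceptron_train(xor)

# Augment with the global feature x1*x2: now separable, so errors reach 0.
xor_aug = [((x1, x2, x1 * x2), y) for (x1, x2), y in xor]
_, _, err_aug = perceptron_train(xor_aug)

print(err_raw, err_aug)
```

The augmented run converges because, e.g., the weight vector (1, 1, -2) with bias -0.5 separates the augmented points, and the perceptron convergence theorem then guarantees a finite number of mistakes.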
One way to get around this is to use neural networks, which are capable of doing these sorts of things.
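For instance (again my own illustration, not from the lectures), a tiny two-layer threshold network with hand-picked weights computes two-input parity, which no single linear threshold unit over the raw inputs can do:

```python
# A hand-wired two-layer threshold network computing XOR (2-pixel parity).
# Hidden unit h_or fires on "at least one pixel on", h_and on "both on";
# the output unit fires on OR-but-not-AND, which is exactly parity.

def step(z):
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    h_or = step(x1 + x2 - 0.5)    # OR of the inputs
    h_and = step(x1 + x2 - 1.5)   # AND of the inputs
    return step(h_or - 2 * h_and - 0.5)

print([xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # → [0, 1, 1, 0]
```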
My question is this: is it known whether something similar holds for SVMs or logistic regression? Is this a limitation of any possible way to learn a linear threshold function, or can we get around it in some clever way?
I apologize if this is covered later in the course; I haven't seen all the videos yet.
__________________
Every time you test someone, you change what they know.
