Quote:
Originally Posted by yaser
This is true here because the circles are concentric. In general, it may not be possible to reduce the learning model to an equivalent one-dimensional version.

There's something about this problem that seems to make a lot of us think "can this really be right?"
I think we agree that it is. One way to see it: the hypothesis set can never separate two points that lie at the same radius, so without loss of generality you only need to consider one representative point at each radius.
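A minimal sketch of this reduction in Python, assuming the concentric-circles hypothesis set has the form h(x) = +1 iff a <= ||x|| <= b (the function names and parameters here are my own illustration, not from the lecture):

```python
import math

def concentric_hypothesis(a, b):
    """A hypothesis from the (assumed) concentric-circles model:
    label +1 iff the point's radius lies in [a, b], else -1."""
    def h(x1, x2):
        r = math.hypot(x1, x2)  # radius = sqrt(x1^2 + x2^2)
        return 1 if a <= r <= b else -1
    return h

def reduce_to_radii(points):
    """Map each 2-D point to its radius: the equivalent 1-D representation,
    since every hypothesis depends on the input only through its radius."""
    return [math.hypot(x1, x2) for x1, x2 in points]

h = concentric_hypothesis(1.0, 2.0)

# Two points at the same radius can never be separated:
p, q = (1.5, 0.0), (0.0, 1.5)
print(h(*p) == h(*q))  # any such pair always gets the same label
```

Because every hypothesis factors through the radius, the 2-D learning problem collapses to learning an interval on the 1-D radius axis.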
This is analogous to an idea in topology: when two points share exactly the same neighbourhoods they are effectively indistinguishable, and a quotient space can be formed that merges all such inseparable points into one. I suspect there are further connections to be drawn between hypothesis sets and topology, although there are major differences as well as similarities.