#2
06-12-2013, 03:45 PM
htlin (NTU, Taipei, Taiwan)
Re: Regression and Classification Problems

Quote:
Originally Posted by Elroch
In the course, one interesting thing we looked at was using a tool designed for linear regression to attack classification problems. It proved of some use but less than ideal, for reasons which included the inappropriate cost function (and the inherent awkwardness of approximating a discontinuous step-like function with a linear function). In hindsight this was highly relevant to my own experiments with doing exactly this in the past, if only to see why it was not that great an idea!

But there is a very precise mapping between any regression problem and a classification problem. The mapping is made by adding the target value as an extra dimension to the input domain, and treating the graph of the function we are trying to approximate as the decision boundary of the classification problem.

With this view, every data point we have can be considered to provide an infinite number of data points for classification: if f(x) = y, then we know f(x) \le y + \delta and f(x) \ge y - \delta for every \delta \ge 0.

Looking in the other direction, there is a very interesting observation (at least, to me). Suppose we know the value of f(x) must lie in some interval [a, b]; then we require one bit of information about f(x) for each halving of this range. Explicitly, we pick the midpoint of the range and classify that point as above or below f(x), which tells us which half of the range f(x) lies in. We think of functions as infinitely precise, but in the real world information is finite and of finite accuracy, and this gives an explicit relationship between the N bits obtained from classifying N points and a precision of \frac{b-a}{2^N} in the value of the function.

Anyhow, what I am interested in here is using this mapping to attack regression problems with any classification methodology. I am not sure how this relates to, say, the use of SVMs for regression. I imagine it might be efficient to turn each data point of a regression problem into two points for classification, by adding and subtracting a small quantity and labelling them as above and below the function surface, but it would likely be useful to add further points to get a better correspondence between the error functions. With the above idea in mind, one possibility is to space the classification points in geometric progression (more widely spaced further from the function/boundary), so that classification errors relate directly to the number of bits of accuracy of the resulting regression function! Other spacings of the points can be chosen to correspond to any pointwise error function.

Can anyone clarify how these ideas relate to the current state of knowledge and technology?
Problem 3.13 of the LFD book illustrates an idea similar to yours.
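To make the quoted mapping concrete, here is a minimal sketch (not from the book; the toy data, the RBF SVM, the offset delta, and the search interval are all illustrative assumptions). Each regression example (x, y) becomes two classification examples (x, y ± delta), and the regression estimate is read back off the learned decision boundary by bisection on the extra dimension, one classifier query per bit.

Code:
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) + noise.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

# Reduction: each (x, y) becomes two classification points (x, y +/- delta),
# labeled +1 above the function surface and -1 below it.
delta = 0.3
X_clf = np.vstack([np.column_stack([X[:, 0], y + delta]),
                   np.column_stack([X[:, 0], y - delta])])
y_clf = np.concatenate([np.ones(len(X)), -np.ones(len(X))])

clf = SVC(kernel="rbf", gamma=1.0).fit(X_clf, y_clf)

def predict_regression(x, lo=-2.0, hi=2.0, n_bits=20):
    # Recover f(x) by bisecting on the extra dimension:
    # each classifier query halves the interval [lo, hi].
    for _ in range(n_bits):
        mid = 0.5 * (lo + hi)
        if clf.predict([[x, mid]])[0] > 0:   # mid lies above the curve
            hi = mid
        else:                                # mid lies below the curve
            lo = mid
    return 0.5 * (lo + hi)

print(predict_regression(1.0), np.sin(1.0))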

There are works that try to systematically connect regression to classification. The connection you see in SVM regression more or less follows the idea of "loss symmetrization" (you can google for related work from about ten years ago).
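For a sense of what "loss symmetrization" means here, a small sketch (the function names are mine, and the hinge loss is just one possible choice): applying the same one-sided classification-style loss to "prediction too high" and "prediction too low" and summing the two recovers exactly the epsilon-insensitive loss used in SVM regression.

Code:
import numpy as np

def hinge(z):
    # One-sided, classification-style loss on a signed residual.
    return np.maximum(0.0, z)

def symmetrized_loss(pred, target, eps=0.1):
    # Same loss applied to "too high" and "too low", then summed.
    return hinge(pred - target - eps) + hinge(target - pred - eps)

def eps_insensitive(pred, target, eps=0.1):
    # The loss minimized by standard SVM regression.
    return np.maximum(0.0, np.abs(pred - target) - eps)

pred, target = np.linspace(-1, 1, 9), np.zeros(9)
print(np.allclose(symmetrized_loss(pred, target), eps_insensitive(pred, target)))  # True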

For bounded-range regression, there are works like

John Langford and Bianca Zadrozny, "Estimating Class Membership Probabilities Using Classifier Learners," AISTATS 2005,

based on using classifiers to decide suitable "thresholds" within the range.
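A generic threshold-style sketch in that spirit (this is not the exact algorithm of the paper; the evenly spaced thresholds, the logistic-regression classifiers, and the way the probabilities are combined are all illustrative assumptions): one binary classifier per threshold answers "is the target above t_k?", and the answers are integrated back into a real-valued estimate.

Code:
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=300)

# Evenly spaced thresholds strictly inside the observed range of y,
# so every "above t_k?" problem has examples of both classes.
lo, hi = np.quantile(y, [0.05, 0.95])
K = 16
edges = np.linspace(lo, hi, K + 1)
thresholds = 0.5 * (edges[:-1] + edges[1:])   # bin midpoints
width = (hi - lo) / K

# One binary classifier per threshold: "is the target above t_k?"
clfs = [LogisticRegression().fit(X, (y > t).astype(int)) for t in thresholds]

def predict(x):
    # E[y | x] is roughly lo + the integral of P(y > t | x) over [lo, hi],
    # approximated here with one probability estimate per threshold.
    p_above = np.array([c.predict_proba([[x]])[0, 1] for c in clfs])
    return lo + width * p_above.sum()

print(predict(1.0), np.sin(1.0))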

In works that reduce regression to classification, another key issue is whether the reduced problems are "easy enough" to be solved well by classifiers. For instance, classifying every bit of the real-valued target separately can be challenging, because you would essentially need a high-frequency function (i.e. a complex classifier) for the low-order bits.
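To see why the low-order bits are the hard part, a tiny sketch (targets rescaled to [0, 1) purely for illustration) that counts how often the label "k-th bit of the target" flips across the range: each extra bit roughly doubles the number of decision-boundary crossings the classifier has to realize.

Code:
import numpy as np

# For targets t in [0, 1), the k-th binary digit flips every 2^-k,
# so the classifier for bit k must realize about 2^k oscillations.
t = np.linspace(0.0, 1.0, 100000, endpoint=False)
for k in range(1, 6):
    bit_k = np.floor(t * 2**k).astype(int) % 2      # k-th binary digit of t
    flips = np.count_nonzero(np.diff(bit_k))        # label changes across [0, 1)
    print(f"bit {k}: {flips} decision-boundary crossings")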

Hope this helps.
__________________
When one teaches, two learn.