#1
In the course, one interesting thing we looked at was using a tool designed for linear regression to attack classification problems. It proved of some use but less than ideal, for reasons which included the inappropriate cost function (and the inherent awkwardness of approximating a discontinuous step-like function with a linear function). In hindsight this was highly relevant to my own experiments with doing exactly this in the past, if only to see why it was not that great an idea!
But there is a very precise mapping between any regression problem and a classification problem. The mapping is made by adding an extra input dimension for the value of the function, and considering the function we are trying to approximate as the decision boundary of the classification problem. With this view, every data point we have can be considered to provide an infinite number of data points for classification: if (x_n, y_n) is one of our regression points, then every point (x_n, y) with y > y_n can be classified as lying above the function, and every point with y < y_n as lying below it.

Looking in the other direction, there is a very interesting observation (at least, to me): a single classified point (x, y) tells us only on which side of y the function value lies, but knowing the classification of every point (x, y) for a given x pins the function value down exactly as the boundary between the two classes.

Anyhow, what I am interested in here is the use of any classification methodology to attack regression problems via this mapping. I am not sure how this relates to, say, the use of SVMs for regression. I imagine it might be efficient to turn each data point for a regression problem into two points for classification, by adding and subtracting a small quantity and considering them to be classified above and below the function surface, but it would likely be useful to have other points as well, to get a better relationship between the error functions.

With that in mind, one possibility is to space the points for classification in geometric progression (more widely spaced further from the function/boundary), so that classification errors are directly related to the number of bits of accuracy of the resulting regression function! Other spacings of the points can be chosen to correspond to any pointwise error function. Can anyone clarify how these ideas relate to the current state of knowledge and technology?
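To make the construction concrete, here is a rough sketch in Python (the data set, the sine target and the particular offsets are just illustrative placeholders) of building such a classification set from a regression set, using geometrically spaced offsets so that misclassifications relate to bits of accuracy:

```python
# Rough sketch: map a regression data set into a classification data set in one
# extra dimension.  Each regression example (x_n, y_n) spawns synthetic points
# (x_n, y_n + delta) labelled +1 (above the curve) and (x_n, y_n - delta)
# labelled -1 (below the curve).  The offsets are a free choice; geometric
# offsets make the misclassification count track bits of accuracy.
import numpy as np

def regression_to_classification(X, y, offsets):
    """X: (N, d) inputs, y: (N,) targets, offsets: positive offsets to sample.

    Returns (Z, labels), where Z has d+1 columns (features plus a candidate
    target value) and labels are +1 above the curve, -1 below it.
    """
    Z, labels = [], []
    for x_n, y_n in zip(X, y):
        for delta in offsets:
            Z.append(np.append(x_n, y_n + delta))
            labels.append(+1)
            Z.append(np.append(x_n, y_n - delta))
            labels.append(-1)
    return np.array(Z), np.array(labels)

# Illustrative data and geometric offsets (1/2, 1/4, ..., 1/256).
X = np.random.rand(100, 1)
y = np.sin(2 * np.pi * X[:, 0])
Z, labels = regression_to_classification(X, y, 2.0 ** -np.arange(1, 9))
```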
#2
There are works that try to systematically connect regression to classification. The one you see in SVM regression more or less follows from the idea of "loss symmetrization" (you can Google for related work from about ten years ago). For bounded-range regression, there are works like John Langford and Bianca Zadrozny, "Estimating Class Membership Probabilities Using Classifier Learners", AISTATS 2005, which are based on using classifiers to decide suitable "thresholds" within the range.

In works that reduce regression to classification, another key issue is usually whether the reduced problems are "easy enough" to be solved well by classifiers. For instance, classifying every bit of the real-valued target separately may be challenging, because you'd essentially need a high-frequency function (i.e. a complex classifier) for the low-order bits.

Hope this helps.
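As a rough illustration of the threshold idea (this is only a sketch with made-up data and a stand-in classifier, not the construction from that paper):

```python
# Rough sketch of the threshold idea: one binary classifier per threshold t_k
# answering "is the target above t_k?", with a crude reconstruction of the
# regression value from the votes.  A small decision tree stands in for the
# classifier; nothing here forces the separate classifiers to be consistent
# across thresholds, which proper reductions handle more carefully.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def fit_threshold_classifiers(X, y, thresholds):
    return [DecisionTreeClassifier(max_depth=3).fit(X, (y > t).astype(int))
            for t in thresholds]

def predict_from_thresholds(classifiers, thresholds, X):
    votes = np.sum([clf.predict(X) for clf in classifiers], axis=0)
    step = thresholds[1] - thresholds[0]
    return thresholds[0] + votes * step   # step up once per threshold cleared

X = np.random.rand(300, 1)
y = np.sin(2 * np.pi * X[:, 0])           # bounded target, roughly in [-1, 1]
thresholds = np.linspace(-0.9, 0.9, 19)
clfs = fit_threshold_classifiers(X, y, thresholds)
y_hat = predict_from_thresholds(clfs, thresholds, X)
print("mean absolute error:", np.mean(np.abs(y_hat - y)))
```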
__________________
When one teaches, two learn.
#3
Thanks, Lin (is your last name your given name, in the Chinese style?).
Following the principle that a picture is worth a thousand words, I thought I would post a couple instead of 2000 words. Here is the classification error equivalent to mean square error regression (with a fairly crude quantisation to make it less painful to look at):

[plot: classification-point weighting equivalent to mean-square-error regression]

and here is the classification error emulation of the wacky but natural "bit error regression" (where the error function is proportional to the complement of the number of correct leading bits in the values):

[plot: classification-point weighting for "bit error" regression]

The above option may be entirely useless (although it can be dangerous to guess that), but a less crazy-looking example is L1 regression error:

[plot: classification-point weighting for L1 regression error]
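For anyone who cannot see the plots, here is a small sketch of the idea behind them (only an illustration of the spacings, not the code that produced the images):

```python
# Rough sketch of how the spacing of the synthetic classification points sets the
# effective regression loss.  If points sit at offsets d_1 < d_2 < ... from the
# true value, a prediction that is off by e misclassifies every point closer than
# |e|, so the penalty as a function of e is a staircase approximation of the
# chosen pointwise loss.  Names and numbers here are only illustrative.
import numpy as np

def offsets_for_loss(loss, n_points, d_max):
    k = np.arange(1, n_points + 1)
    if loss == 'l1':
        return d_max * k / n_points            # uniform spacing: count grows like |e|
    if loss == 'l2':
        return d_max * np.sqrt(k / n_points)   # count below |e| grows like e**2
    if loss == 'bits':
        return d_max * 2.0 ** (k - n_points)   # geometric: one point per halving
    raise ValueError(loss)

def induced_penalty(offsets, e):
    return int(np.sum(offsets < abs(e)))       # misclassified synthetic points

for loss in ('l1', 'l2', 'bits'):
    offs = offsets_for_loss(loss, 16, 1.0)
    print(loss, [induced_penalty(offs, e) for e in (0.1, 0.3, 0.9)])
```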