Thanks, Lin (is your last name your given name, in the Chinese style?).

Following the principle that a picture is worth a thousand words, I thought I would post a couple of pictures instead of 2,000 words.

Here is the classification-error equivalent of mean-squared-error regression (with a fairly crude quantisation to make it less painful to look at):
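For concreteness, here is a quick sketch of what I mean by the squared-error emulation: quantise the target range into bins and charge the squared difference of the bin centres as the misclassification cost. The bin count and value range here are made up purely for illustration.

```python
import numpy as np

# Crude quantisation of the target range [0, 1) into a few coarse bins.
n_bins = 8
centres = (np.arange(n_bins) + 0.5) / n_bins

# Cost of predicting bin j when the true bin is i: squared difference
# of the bin centres, i.e. the classification analogue of MSE regression.
cost_mse = (centres[:, None] - centres[None, :]) ** 2

print(cost_mse[0, 7])  # worst-case cost across the range
```

The diagonal is zero (correct bin costs nothing) and the cost grows quadratically as the predicted bin moves away from the true one.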

and here is the classification-error emulation of the wacky but natural "bit error regression" (where the error function is proportional to the complement of the number of correct leading bits in the values):
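My reading of that scheme, sketched below: treat each value as an n-bit binary string and charge one unit of cost for every bit position from the first disagreement downward. The bit width and the helper name are mine, not anything standard.

```python
def leading_bit_error(i, j, n_bits=8):
    """Cost of confusing two n_bits-wide values: n_bits minus the
    number of leading bits on which they agree (my reading of the
    "bit error regression" scheme)."""
    x = i ^ j
    if x == 0:
        return 0  # all bits agree
    # All bits above the highest differing bit match, so the number
    # of matching leading bits is n_bits - x.bit_length().
    return n_bits - (n_bits - x.bit_length())  # = x.bit_length()

# A disagreement in the top bit costs the full n_bits; a disagreement
# only in the bottom bit costs 1.
print(leading_bit_error(0b0000, 0b1000, n_bits=4))  # 4
print(leading_bit_error(0b0000, 0b0001, n_bits=4))  # 1
```

The resulting cost matrix has a self-similar block structure: values on opposite sides of the top-bit boundary are maximally far apart, which is presumably what makes the picture look so wacky.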

The above option may be entirely useless (although it can be dangerous to guess that), but a less crazy-looking example is L1 regression error:
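The L1 version is the same quantised construction as before, just with absolute rather than squared differences of the bin centres (again, bin count and range are illustrative only):

```python
import numpy as np

n_bins = 8
centres = (np.arange(n_bins) + 0.5) / n_bins

# Absolute difference of bin centres: the classification analogue
# of L1 (absolute-error) regression.
cost_l1 = np.abs(centres[:, None] - centres[None, :])
```

Compared with the squared-error matrix, the cost here grows only linearly with the distance between bins, so far-off misclassifications are penalised much less harshly.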