Quote:
Originally Posted by dbl001
Can you comment on using the AUC metric for assessing the quality of a classifier?
Is this the best metric for assessing classifiers?
What is the mathematical basis for AUC?
Thanks!
The AUC can roughly be described as measuring the overall trade-off between false-positive and false-negative rates in binary classification.
Mathematically, the AUC is also equivalent to the pairwise ranking accuracy induced by the classifier's decision values: the probability that a randomly chosen positive example receives a higher decision value than a randomly chosen negative one.
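To make the pairwise-ranking interpretation concrete, here is a minimal sketch (the scores and labels are made-up illustrative values, not from any real classifier): for every (positive, negative) pair, count how often the positive example gets the higher score, with ties counting as half.

```python
def pairwise_auc(scores, labels):
    """AUC computed directly as pairwise ranking accuracy:
    the fraction of (positive, negative) pairs ranked correctly
    by the decision values, counting ties as 1/2."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    correct = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                correct += 1.0   # positive ranked above negative
            elif p == n:
                correct += 0.5   # tie counts as half
    return correct / (len(pos) * len(neg))

# Hypothetical decision values and true labels, for illustration only.
scores = [0.9, 0.8, 0.4, 0.35, 0.1]
labels = [1,   1,   0,   1,    0]
print(pairwise_auc(scores, labels))  # 5 of 6 pairs correct -> 0.8333...
```

This brute-force version is O(n_pos * n_neg); in practice the same quantity is computed from the rank sum of the positives (the Mann-Whitney U statistic).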
This paper, http://www.icml-2011.org/papers/567_icmlpaper.pdf, is a fairly recent study of the connection between the AUC and other metrics (such as the usual 0/1 error).
Hope this helps.