#1
Can you comment on using the AUC metric for assessing the quality of a classifier?
Is this the best metric for assessing classifiers? What is the mathematical basis for AUC? Thanks!
#2
There is no such thing as "best"; there is a jungle of validation metrics and curves out there, and they all have their merits.
Aside from AUC and ROC, the F1 score (plus precision-recall curves) is also often used. It's problem-dependent: ROC has the advantage/disadvantage of being invariant to class skew. The AUC can be computed directly from the Mann-Whitney U statistic, as in the sketch below. hth
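A minimal sketch of that last point, assuming binary labels in {0, 1} and real-valued scores (the function and variable names here are just illustrative, not from any particular library):

```python
import numpy as np

def auc_mann_whitney(y_true, scores):
    """AUC as the Mann-Whitney U statistic normalised by the number of
    (positive, negative) pairs: the fraction of such pairs that the
    scores rank correctly, with ties counted as half correct."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    # Compare every positive score against every negative score.
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Tiny example: 3 positives, 3 negatives.
y = [0, 0, 1, 1, 1, 0]
s = [0.10, 0.40, 0.35, 0.80, 0.70, 0.20]
print(auc_mann_whitney(y, s))  # 0.888..., same value sklearn.metrics.roc_auc_score gives
```

In practice you would reach for sklearn.metrics.roc_auc_score (or scipy.stats.mannwhitneyu plus the normalisation) rather than the brute-force pairwise count above, which is only there to make the equivalence explicit.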
#3
Mathematically, the AUC is also equivalent to the pairwise ranking accuracy induced by the (decision values of the) classifier; the equation below spells this out. This paper http://www.icml-2011.org/papers/567_icmlpaper.pdf is a fairly recent study of the connection between the AUC and other metrics (such as the usual 0/1 error). Hope this helps.
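In symbols (a sketch of the standard formulation; the notation with decision value \(s(x)\), \(n_+\) positives and \(n_-\) negatives is mine, not the paper's):

$$
\mathrm{AUC} \;=\; \frac{1}{n_+ n_-} \sum_{i:\,y_i=+1}\;\sum_{j:\,y_j=-1} \Big( \mathbf{1}\{ s(x_i) > s(x_j) \} + \tfrac{1}{2}\,\mathbf{1}\{ s(x_i) = s(x_j) \} \Big),
$$

i.e. the empirical probability that a randomly chosen positive example is ranked above a randomly chosen negative one.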
__________________
When one teaches, two learn.
#4
The link to the paper is not valid. Please fix it.
Tags: auc metric, classifiers