AUC Metric
Can you comment on using the AUC metric for assessing the quality of a classifier?
Is this the best metric for assessing classifiers? What is the mathematical basis for AUC? Thanks!
Re: AUC Metric
There is no such thing as a single "best" metric; there is a whole jungle of validation metrics and curves out there, each with its own merit.
Besides AUC and ROC curves, the F1 score (together with precision-recall curves) is often used. The choice is problem-dependent; ROC has the advantage (or disadvantage) of being invariant to class skew. The AUC can be computed directly from the Mann-Whitney U statistic. Hope that helps.
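To make the Mann-Whitney connection concrete, here is a minimal sketch (my own illustration, not from the original post): rank all decision values, sum the ranks of the positive examples, subtract n_pos(n_pos+1)/2 to get the U statistic, and divide by the number of positive/negative pairs. The function name auc_via_mann_whitney and the use of NumPy/SciPy are assumptions made for the example.

Code:
import numpy as np
from scipy.stats import rankdata

def auc_via_mann_whitney(scores, labels):
    # scores: decision values from the classifier (higher means "more positive")
    # labels: 0/1 class labels
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    n_pos = int(labels.sum())
    n_neg = len(labels) - n_pos
    # Rank the pooled decision values; ties receive averaged ranks.
    ranks = rankdata(scores)
    # Mann-Whitney U for the positive class, normalized by the number of pairs.
    u = ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2.0
    return u / (n_pos * n_neg)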
Re: AUC Metric
Mathematically, the AUC is also equivalent to the pairwise ranking accuracy induced by the (decision values of the) classifier. This paper http://www.icml-2011.org/papers/567_icmlpaper.pdf is a fairly recent study of the connection between AUC and other metrics (such as the usual 0/1 error). Hope this helps.
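To see the ranking interpretation concretely, here is a small sketch (my own illustration, not taken from the paper or the post): the AUC equals the fraction of positive/negative pairs that the classifier's decision values order correctly, with ties counting as one half. The function name auc_pairwise and the 0/1 label convention are assumptions made for the example; on the same data it should agree with the Mann-Whitney computation above and with standard implementations such as scikit-learn's roc_auc_score.

Code:
import numpy as np

def auc_pairwise(scores, labels):
    # AUC as ranking accuracy: fraction of (positive, negative) pairs where the
    # positive example receives the higher decision value; ties count as 1/2.
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Compare every positive score against every negative score via broadcasting.
    diff = pos[:, None] - neg[None, :]
    correct = (diff > 0).sum() + 0.5 * (diff == 0).sum()
    return correct / (len(pos) * len(neg))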
Re: AUC Metric
The link to the paper is not valid. Please fix it.