AUC Metric
Can you comment on using the AUC metric for assessing the quality of a classifier?
Is this the best metric for assessing classifiers? What is the mathematical basis for AUC? Thanks! 
Re: AUC Metric
There is no such thing as "best"; there is a jungle of validation metrics and curves out there, and they all have their merit.
Aside from AUC and ROC, the F1 score (together with precision-recall curves) is also often used. It's problem-dependent; ROC has the advantage/disadvantage of being invariant to class skew. The AUC can be computed directly from the Mann-Whitney U statistic. hth 
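In case it helps, here is a minimal sketch (Python, assuming NumPy, SciPy and scikit-learn are available; the toy numbers are made up) of the AUC / Mann-Whitney connection: U counts, over all (positive, negative) pairs, how often the positive example gets the higher score (ties count one half), so dividing U by the number of pairs gives the AUC. Recent SciPy versions return the U statistic for the first sample when 'alternative' is specified.

import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.metrics import roc_auc_score

# Toy labels and classifier scores, just for illustration.
y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.7, 0.2, 0.9, 0.5])

pos = scores[y_true == 1]   # scores of positive examples
neg = scores[y_true == 0]   # scores of negative examples

# U counts how often a positive example outscores a negative one
# (ties count 1/2), so U / (n_pos * n_neg) is exactly the AUC.
U, _ = mannwhitneyu(pos, neg, alternative='greater')
auc_from_u = U / (len(pos) * len(neg))

print(auc_from_u)                     # 0.875 on this toy data
print(roc_auc_score(y_true, scores))  # same value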
Re: AUC Metric
Mathematically, the AUC is also equivalent to the pairwise ranking accuracy induced by the (decision value of the) classifier. This paper http://www.icml2011.org/papers/567_icmlpaper.pdf is a fairly recent study of the connection between AUC and other metrics (such as the usual 0/1 error). Hope this helps. 
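To see that equivalence concretely, here is a small sketch (plain NumPy; the function name is just mine, not from the paper): count, over every (positive, negative) pair, whether the classifier's decision value puts the positive example on top, with ties counting as half a pair. That fraction is the ROC AUC.

import numpy as np

def pairwise_ranking_accuracy(y_true, decision_values):
    # Fraction of (positive, negative) pairs where the positive example
    # gets the larger decision value; ties count as half a pair.
    pos = decision_values[y_true == 1]
    neg = decision_values[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))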
Re: AUC Metric
The link to the paper is not valid. Please fix it.
