Bootstrap methods for the cost-sensitive evaluation of classifiers

Published by Oregon State University, Dept. of Computer Science in Corvallis, OR.
Written in English


Book details:

About the Edition

Many machine learning applications require classifiers that minimize an asymmetric cost function rather than the misclassification rate, and several recent papers have addressed this problem. However, these papers have either applied no statistical testing or have applied statistical methods that are not appropriate for the cost-sensitive setting. Without good statistical methods, it is difficult to tell whether these new cost-sensitive methods are better than existing methods that ignore costs, and it is also difficult to tell whether one cost-sensitive method is better than another. To rectify this problem, this paper presents two statistical methods for the cost-sensitive setting. The first constructs a confidence interval for the expected cost of a single classifier. The second constructs a confidence interval for the expected difference in costs of two classifiers. In both cases, the basic idea is to separate the problem of estimating the probabilities of each cell in the confusion matrix (which is independent of the cost matrix) from the problem of computing the expected cost. We show experimentally that these bootstrap tests work better than applying standard Z tests based on the normal distribution.
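The following is a minimal sketch, in Python with NumPy, of the first of these two methods as described above: a percentile-bootstrap confidence interval for the expected cost of a single classifier, obtained by resampling the test set, re-estimating the confusion-matrix cell probabilities on each replicate (independently of the cost matrix), and only then applying the costs. The function names, the percentile interval, and the toy data are illustrative assumptions rather than the authors' exact algorithm; the interval for the difference in costs of two classifiers can be sketched analogously by resampling the paired predictions of both classifiers.

import numpy as np

def expected_cost(confusion_probs, cost_matrix):
    # Expected cost = sum over cells of P(true=i, pred=j) * cost[i, j].
    return float(np.sum(confusion_probs * cost_matrix))

def bootstrap_cost_ci(y_true, y_pred, cost_matrix, n_boot=1000, alpha=0.05, seed=0):
    # Percentile bootstrap CI for expected cost: resample the test examples
    # with replacement, re-estimate the confusion-matrix cell probabilities on
    # each replicate, then apply the cost matrix.
    rng = np.random.default_rng(seed)
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    n, k = len(y_true), cost_matrix.shape[0]
    costs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample examples with replacement
        probs = np.zeros((k, k))
        np.add.at(probs, (y_true[idx], y_pred[idx]), 1.0 / n)
        costs.append(expected_cost(probs, cost_matrix))
    lo, hi = np.quantile(costs, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# Example with asymmetric costs: a false negative (true=1, pred=0) costs 10,
# a false positive costs 1, correct predictions cost nothing.
cost = np.array([[0.0, 1.0],
                 [10.0, 0.0]])
y_true = np.array([0, 0, 0, 1, 1, 0, 1, 0, 1, 1])
y_pred = np.array([0, 0, 1, 1, 0, 0, 1, 0, 1, 1])
print(bootstrap_cost_ci(y_true, y_pred, cost))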

Edition Notes

Statement: Dragos D. Margineantu, Thomas G. Dietterich.
Series: Technical report (Oregon State University. Dept. of Computer Science), 00-30-02.
Contributions: Dietterich, Thomas Glen; Oregon State University, Dept. of Computer Science.
The Physical Object
Pagination: [8] leaves
ID Numbers
Open Library: OL16125951M


A BibTeX entry for the report:

@INPROCEEDINGS{Margineantu00bootstrapmethods,
  author    = {Dragos D. Margineantu and Thomas G. Dietterich},
  title     = {Bootstrap Methods for the Cost-Sensitive Evaluation of Classifiers},
  booktitle = {Proceedings of the 17th International Conference on Machine Learning},
  year      = {2000},
  publisher = {Morgan Kaufmann}
}

In citation form: Margineantu, D. D., and Dietterich, T. G. Bootstrap methods for the cost-sensitive evaluation of classifiers. In Proceedings of the Seventeenth International Conference on Machine Learning. Morgan Kaufmann, San Francisco, CA.

A related bootstrap procedure, described by Saisana et al. and Sin et al., works as follows: the parameters k_c′, k_1, k_2, and k_3 are estimated from the data set using the Levenberg-Marquardt algorithm, and synthetic data is then generated by bootstrap sampling (random sampling with replacement) to obtain fictional data sets on which the estimation can be repeated.
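Assuming the model in that snippet is an ordinary nonlinear least-squares problem, a minimal sketch of the resampling loop might look like the following. The exponential placeholder model and the parameters a and b are stand-ins for the k_c′, k_1, k_2, k_3 model, which is not reproduced here; SciPy's method="lm" option selects the Levenberg-Marquardt solver.

import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    # Placeholder two-parameter model standing in for the kinetic model.
    return a * np.exp(-b * x)

rng = np.random.default_rng(0)
x = np.linspace(0, 5, 40)
y = model(x, 2.0, 0.7) + rng.normal(scale=0.05, size=x.size)

# Original fit on the full data set (Levenberg-Marquardt).
p0, _ = curve_fit(model, x, y, p0=[1.0, 1.0], method="lm")

# Bootstrap: resample (x, y) pairs with replacement and refit each replicate.
boot = []
for _ in range(500):
    idx = rng.integers(0, x.size, size=x.size)
    p, _ = curve_fit(model, x[idx], y[idx], p0=p0, method="lm")
    boot.append(p)
boot = np.array(boot)

print("parameter estimates:", p0)
print("bootstrap standard errors:", boot.std(axis=0))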

Bootstrap methods are also used for evaluation problems outside classification; for example, Blake, D., Caulfield, T., Ioannidis, C., and Tonks, I. apply panel bootstrap methods to improve inference in the evaluation of mutual fund performance (Journal of Econometrics).

A common practical question is which sampling method is most appropriate for evaluating the performance of a classifier on a particular data set and comparing it with other classifiers. Cross-validation is standard practice, but resampling methods such as the bootstrap are sometimes argued to be a better choice. The bootstrap is a resampling technique that estimates statistics of a population by repeatedly sampling a data set with replacement.
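As a rough illustration of that comparison (not something prescribed by the report), the sketch below estimates a classifier's accuracy once with 10-fold cross-validation and once with bootstrap resampling scored on the out-of-bag examples. The data set, classifier, and replicate counts are arbitrary assumptions.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, random_state=0)
clf = DecisionTreeClassifier(random_state=0)

# 10-fold cross-validation estimate of accuracy.
cv_acc = cross_val_score(clf, X, y, cv=10).mean()

# Bootstrap estimate: train on a resample drawn with replacement and score on
# the examples left out of that resample (the "out-of-bag" examples).
rng = np.random.default_rng(0)
boot_acc = []
for _ in range(200):
    idx = rng.integers(0, len(y), size=len(y))
    oob = np.setdiff1d(np.arange(len(y)), idx)
    boot_acc.append(clf.fit(X[idx], y[idx]).score(X[oob], y[oob]))

print("cross-validation accuracy:", round(cv_acc, 3))
print("bootstrap (out-of-bag) accuracy:", round(float(np.mean(boot_acc)), 3))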