
Published **2000**
by Oregon State University, Dept. of Computer Science in Corvallis, OR.

Written in English

Many machine learning applications require classifiers that minimize an asymmetric cost function rather than the misclassification rate, and several recent papers have addressed this problem. However, these papers have either applied no statistical testing or have applied statistical methods that are not appropriate for the cost-sensitive setting. Without good statistical methods, it is difficult to tell whether these new cost-sensitive methods are better than existing methods that ignore costs, and it is also difficult to tell whether one cost-sensitive method is better than another. To rectify this problem, this paper presents two statistical methods for the cost-sensitive setting. The first constructs a confidence interval for the expected cost of a single classifier. The second constructs a confidence interval for the expected difference in costs of two classifiers. In both cases, the basic idea is to separate the problem of estimating the probabilities of each cell in the confusion matrix (which is independent of the cost matrix) from the problem of computing the expected cost. We show experimentally that these bootstrap tests work better than applying standard Z tests based on the normal distribution.
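The abstract's central idea — estimate the confusion-matrix cell probabilities by resampling the test set, then apply the fixed cost matrix to each resample — can be sketched in a few lines. This is a minimal illustration of a percentile bootstrap under that separation, not the authors' implementation; the function names and the cost matrix in the usage example are invented for illustration.

```python
import numpy as np

def expected_cost(y_true, y_pred, cost_matrix):
    """Expected cost = sum over cells of P(true=i, pred=j) * C[i, j]."""
    k = cost_matrix.shape[0]
    cm = np.zeros((k, k))
    np.add.at(cm, (y_true, y_pred), 1.0)   # count confusion-matrix cells
    return np.sum(cm / len(y_true) * cost_matrix)

def bootstrap_cost_ci(y_true, y_pred, cost_matrix,
                      n_boot=1000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the expected cost
    of a single classifier on a held-out test set."""
    rng = np.random.default_rng(seed)
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    n = len(y_true)
    costs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)   # resample examples with replacement
        costs[b] = expected_cost(y_true[idx], y_pred[idx], cost_matrix)
    return tuple(np.quantile(costs, [alpha / 2, 1 - alpha / 2]))

def bootstrap_cost_diff_ci(y_true, y_pred_a, y_pred_b, cost_matrix,
                           n_boot=1000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the difference in
    expected costs (classifier A minus classifier B).  Both classifiers
    are scored on the same resample, so the interval respects the pairing
    of their predictions on shared test examples."""
    rng = np.random.default_rng(seed)
    y_true = np.asarray(y_true)
    y_pred_a, y_pred_b = np.asarray(y_pred_a), np.asarray(y_pred_b)
    n = len(y_true)
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)
        diffs[b] = (expected_cost(y_true[idx], y_pred_a[idx], cost_matrix)
                    - expected_cost(y_true[idx], y_pred_b[idx], cost_matrix))
    return tuple(np.quantile(diffs, [alpha / 2, 1 - alpha / 2]))

# Example with an asymmetric (illustrative) cost matrix: a false negative
# on class 1 costs 5, a false positive costs 1, correct decisions cost 0.
C = np.array([[0.0, 1.0],
              [5.0, 0.0]])
y_true = [0] * 50 + [1] * 50
y_pred = [0] * 60 + [1] * 40          # misses 10 of the class-1 examples
lo, hi = bootstrap_cost_ci(y_true, y_pred, C)
```

Because the cost matrix is applied only after resampling, the same resamples can be reused to evaluate any alternative cost matrix without recomputing the bootstrap.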

**Edition Notes**

| | |
|---|---|
| Statement | Dragos D. Margineantu, Thomas G. Dietterich |
| Series | Technical report (Oregon State University. Dept. of Computer Science), 00-30-02 |
| Contributions | Dietterich, Thomas Glen; Oregon State University. Dept. of Computer Science |

**The Physical Object**

| | |
|---|---|
| Pagination | [8] leaves |

**ID Numbers**

| | |
|---|---|
| Open Library | OL16125951M |

BibTeX:

    @INPROCEEDINGS{Margineantu00bootstrapmethods,
      author    = {Dragos D. Margineantu and Thomas G. Dietterich},
      title     = {Bootstrap Methods for the Cost-Sensitive Evaluation of Classifiers},
      booktitle = {Proc. 17th International Conference on Machine Learning},
      year      = {2000},
      pages     = {},
      publisher = {Morgan Kaufmann}
    }


