From MFKP_wiki



The lack of a priori distinctions between learning algorithms

David H. Wolpert

This is the first of two papers that use off-training set (OTS) error to investigate the assumption-free relationship between learning algorithms. This first paper discusses the senses in which there are no a priori distinctions between learning algorithms. (The second paper discusses the senses in which there are such distinctions.) In this first paper it is shown, loosely speaking, that for any two algorithms A and B, there are “as many” targets (or priors over targets) for which A has lower expected OTS error than B as vice versa, for loss functions like zero-one loss. In particular, this is true if A is cross-validation and B is “anti-cross-validation” (choose the learning algorithm with largest cross-validation error). This paper ends with a discussion of the implications of these results for computational learning theory. It is shown that one cannot say: if empirical misclassification rate is low, the Vapnik-Chervonenkis dimension of your generalizer is small, and the training set is large, then with high probability your OTS error is small. Other implications for “membership queries” algorithms and “punting” algorithms are also discussed.
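The symmetry claim above can be illustrated with a toy enumeration. The following is a minimal sketch, not the paper's formal proof: it assumes a four-point input space, a fixed two-point training set, Boolean targets averaged uniformly, and two deliberately opposed learners (a hypothetical "majority" rule and its inversion, standing in for cross-validation vs. anti-cross-validation). Averaged over every possible target, both learners incur the same expected zero-one OTS error.

```python
from itertools import product

X = range(4)          # toy input space {0, 1, 2, 3}
TRAIN_X = [0, 1]      # fixed training inputs
OTS_X = [2, 3]        # off-training-set (OTS) inputs

def majority(train_labels):
    """Predict the most common training label (ties broken toward 1)."""
    return 1 if 2 * sum(train_labels) >= len(train_labels) else 0

def anti_majority(train_labels):
    """Deliberately predict the opposite of the majority learner."""
    return 1 - majority(train_labels)

def avg_ots_error(learner):
    """Zero-one OTS error, averaged uniformly over all Boolean targets."""
    total = 0.0
    targets = list(product([0, 1], repeat=len(X)))  # all 16 target functions
    for target in targets:
        prediction = learner([target[x] for x in TRAIN_X])
        total += sum(prediction != target[x] for x in OTS_X) / len(OTS_X)
    return total / len(targets)

print(avg_ots_error(majority))       # 0.5
print(avg_ots_error(anti_majority))  # 0.5
```

Because each learner's OTS prediction depends only on the training labels, while the target's OTS values range uniformly over all possibilities, the uniform average over targets is 0.5 for both learners, however sensible or perverse the learning rule.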


Neural Computation, Vol. 8, No. 7. (1 October 1996), pp. 1341-1390, https://doi.org/10.1162/neco.1996.8.7.1341 
Key: INRMM:4301665


Available versions (may include free-access full text)

ACM, DOI, MIT Press Journals, Pubget, PubMed (Search)

Versions of the publication are also available in Google Scholar.
Google Scholar code: GScluster:10024661736513971495

Works citing this publication (including grey literature)

An updated list of who cited this publication is available in Google Scholar.
Google Scholar code: GScites:10024661736513971495


Publication metadata

BibTeX, RIS, RSS/XML feed, JSON, Dublin Core
Metadata search: CrossRef DOI, DataCite DOI

Digital preservation of this INRMM-MiD record

Internet Archive

Meta-information Database (INRMM-MiD).
This database integrates a dedicated meta-information database in CiteULike (the CiteULike INRMM Group) with the meta-information available in Google Scholar, CrossRef, and DataCite. The Altmetric database of Article-Level Metrics is also harvested. Part of the machine-readable semantic content is additionally rendered human-readable by the DCMI Dublin Core viewer. Digital preservation of the meta-information indexed within the INRMM-MiD publication records is provided by the Internet Archive.
Full texts and abstracts of the publications indexed by the INRMM meta-information database are copyrighted by the respective publishers/authors and are subject to all applicable copyright protection. The conditions of use of each indexed publication are defined by its copyright owner. Please be aware that the indexed meta-information relies entirely on voluntary work and constitutes an incomplete and heterogeneous work in progress.
INRMM-MiD was experimentally established by the Maieutike Research Initiative in 2008 and subsequently improved with the help of several volunteers (with a major technical upgrade in 2011). The current integrated interface has been operational since 2014.