## Has artificial intelligence become alchemy?

Science, Vol. 360, No. 6388. (04 May 2018), p. 478, https://doi.org/10.1126/science.360.6388.478

### Abstract

Ali Rahimi, a researcher in artificial intelligence (AI) at Google in San Francisco, California, has charged that machine learning algorithms, in which computers learn through trial and error, have become a form of "alchemy." Researchers, he says, do not know why some algorithms work and others don't, nor do they have rigorous criteria for choosing one AI architecture over another. Now, in a paper presented on 30 April at the International Conference on Learning Representations in Vancouver, Canada, Rahimi and his ...

## The lack of a priori distinctions between learning algorithms

Neural Computation, Vol. 8, No. 7. (1 October 1996), pp. 1341-1390, https://doi.org/10.1162/neco.1996.8.7.1341

### Abstract

This is the first of two papers that use off-training set (OTS) error to investigate the assumption-free relationship between learning algorithms. This first paper discusses the senses in which there are no a priori distinctions between learning algorithms. (The second paper discusses the senses in which there are such distinctions.) In this first paper it is shown, loosely speaking, that for any two algorithms A and B, there are “as many” targets (or priors over targets) for which A has lower ...
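The claim that no learner outperforms another once performance is averaged over all possible targets can be checked by brute force on a tiny input space. The sketch below is illustrative, not the paper's OTS formalism: the two toy learners (`learner_zero`, `learner_majority`) are hypothetical stand-ins for arbitrary algorithms, and the average off-training-set error comes out identical for both.

```python
import itertools

X = [0, 1, 2, 3]
train_x = [0, 1]                     # training inputs
ots_x   = [2, 3]                     # off-training-set (OTS) inputs

def learner_zero(train):             # always predicts 0, ignoring the data
    return lambda x: 0

def learner_majority(train):         # predicts the majority training label (tie -> 1)
    ones = sum(y for _, y in train)
    guess = 1 if ones * 2 >= len(train) else 0
    return lambda x: guess

def avg_ots_error(make_learner):
    """Average OTS error of a learner over every possible binary target on X."""
    errs = []
    for ys in itertools.product([0, 1], repeat=len(X)):
        target = dict(zip(X, ys))
        h = make_learner([(x, target[x]) for x in train_x])
        errs.append(sum(h(x) != target[x] for x in ots_x) / len(ots_x))
    return sum(errs) / len(errs)

print(avg_ots_error(learner_zero), avg_ots_error(learner_majority))  # 0.5 0.5
```

Because the learner's predictions on OTS points cannot depend on the (unseen) OTS target values, every learner averages to the same 0.5 error rate over the uniform prior on targets.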

## Ecological relevance of performance criteria for species distribution models

Ecological Modelling, Vol. 221, No. 16. (10 August 2010), pp. 1995-2002, https://doi.org/10.1016/j.ecolmodel.2010.04.017

### Abstract

Species distribution models have often been developed based on ecological data. To develop reliable data-driven models, however, sound model training and evaluation procedures are needed. A crucial step in these procedures is the assessment of model performance, with the applied performance criterion as a key component. Therefore, we reviewed seven performance criteria commonly applied in presence–absence modelling (the correctly classified instances, Kappa, sensitivity, specificity, the normalised mutual information statistic, the true skill statistic and the odds ratio) and analysed their ...
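Most of the criteria named in the abstract are simple functions of a 2x2 presence–absence confusion matrix. A minimal sketch of six of the seven (the normalised mutual information statistic is omitted), using an assumed example matrix:

```python
def sdm_criteria(tp, fp, fn, tn):
    """Common presence-absence performance criteria from a 2x2 confusion matrix."""
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)                       # sensitivity (true positive rate)
    spec = tn / (tn + fp)                       # specificity (true negative rate)
    cci  = (tp + tn) / n                        # correctly classified instances
    p_e  = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2  # chance agreement
    kappa = (cci - p_e) / (1 - p_e)             # Cohen's kappa
    tss  = sens + spec - 1                      # true skill statistic
    odds = (tp * tn) / (fp * fn)                # odds ratio
    return {"CCI": cci, "Kappa": kappa, "sensitivity": sens,
            "specificity": spec, "TSS": tss, "odds ratio": odds}

# hypothetical confusion matrix: 40 true presences, 45 true absences,
# 10 false presences, 5 false absences
print(sdm_criteria(tp=40, fp=10, fn=5, tn=45))
```

Note that CCI depends on prevalence while TSS (sensitivity + specificity - 1) does not, which is one of the contrasts the review analyses.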

## Sparse Algorithms Are Not Stable: A No-Free-Lunch Theorem

IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 34, No. 1. (January 2012), pp. 187-193, https://doi.org/10.1109/tpami.2011.177

### Abstract

We consider two desired properties of learning algorithms: *sparsity* and *algorithmic stability*. Both properties are believed to lead to good generalization ability. We show that these two properties are fundamentally at odds with each other: a sparse algorithm cannot be stable and vice versa. Thus, one has to trade off sparsity and stability in designing a learning algorithm. In particular, our general result implies that $\ell_1$-regularized regression (Lasso) cannot be stable, while $\ell_2$-regularized regression is known to have strong stability properties ...
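The Lasso/ridge contrast can be illustrated on a toy problem with a perfectly duplicated feature. This is a sketch of the general idea, not the paper's formal stability argument: `lasso_cd` is a hand-rolled coordinate-descent routine for the objective (1/2)||y - Xb||^2 + lam*||b||_1, not a library call. Lasso puts essentially all weight on whichever copy is updated first (a brittle, sparse choice), while ridge splits the weight evenly between the copies.

```python
def soft_threshold(rho, lam):
    if rho > lam:  return rho - lam
    if rho < -lam: return rho + lam
    return 0.0

def lasso_cd(X, y, lam, sweeps=100):
    """Coordinate descent for the Lasso; X is a list of feature columns."""
    p, n = len(X), len(y)
    beta = [0.0] * p
    for _ in range(sweeps):
        for j in range(p):
            # partial residual: y minus the fit from all other features
            r = [y[i] - sum(X[k][i] * beta[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[j][i] * r[i] for i in range(n))
            z = sum(v * v for v in X[j])
            beta[j] = soft_threshold(rho, lam) / z
    return beta

def ridge_2d(X, y, alpha):
    """Closed-form ridge for two features: solve (X^T X + alpha*I) b = X^T y."""
    a = sum(v * v for v in X[0]) + alpha
    b = sum(u * v for u, v in zip(X[0], X[1]))
    d = sum(v * v for v in X[1]) + alpha
    c1 = sum(u * v for u, v in zip(X[0], y))
    c2 = sum(u * v for u, v in zip(X[1], y))
    det = a * d - b * b
    return [(d * c1 - b * c2) / det, (a * c2 - b * c1) / det]

x = [1.0, 2.0, 3.0]
X = [x, list(x)]            # two identical (perfectly correlated) features
y = [2.0, 4.0, 6.0]

print(lasso_cd(X, y, lam=1.0))    # sparse: nearly all weight on one copy
print(ridge_2d(X, y, alpha=1.0))  # dense: weight split evenly between copies
```

Relabeling the two identical columns flips which one the Lasso selects, a small hint of the instability the theorem makes precise; the ridge solution is invariant under the swap.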

## Ensemble based systems in decision making

IEEE Circuits and Systems Magazine, Vol. 6, No. 3. (2006), pp. 21-45, https://doi.org/10.1109/mcas.2006.1688199

### Abstract

In matters of great importance that have financial, medical, social, or other implications, we often seek a second opinion before making a decision, sometimes a third, and sometimes many more. In doing so, we weigh the individual opinions, and combine them through some thought process to reach a final decision that is presumably the most informed one. The process of consulting "several experts" before making a final decision is perhaps second nature to us; yet, the extensive benefits of such a ...
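The benefit of consulting several experts can be quantified for the simplest combination scheme, majority voting. The calculation below assumes independent voters with equal individual accuracy, a simplification of the schemes the article surveys:

```python
from math import comb

def majority_vote_accuracy(p, n):
    """P(majority of n independent voters, each correct w.p. p, is correct); n odd."""
    k = n // 2 + 1
    return sum(comb(n, m) * p**m * (1 - p)**(n - m) for m in range(k, n + 1))

# with 70%-accurate independent voters, accuracy grows with ensemble size
for n in (1, 3, 5, 15):
    print(n, majority_vote_accuracy(0.7, n))
```

For p = 0.7 a three-member ensemble already reaches 0.784, which is the statistical intuition behind ensemble-based systems; correlated errors among the members erode this gain.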

## There Is a Free Lunch for Hyper-Heuristics, Genetic Programming and Computer Scientists

In Genetic Programming, Vol. 5481 (2009), pp. 195-207, https://doi.org/10.1007/978-3-642-01181-8_17

### Abstract

In this paper we prove that in some practical situations, there is a free lunch for hyper-heuristics, i.e., for search algorithms that search the space of solvers, searchers, meta-heuristics and heuristics for problems. This has consequences for the use of genetic programming as a method to discover new search algorithms and, more generally, problem solvers. Furthermore, it has also rather important philosophical consequences in relation to the efforts of computer scientists to discover useful novel search algorithms. ...

## Remarks on a recent paper on the "no free lunch" theorems

IEEE Transactions on Evolutionary Computation, Vol. 5, No. 3. (June 2001), pp. 295-296, https://doi.org/10.1109/4235.930318

### Abstract

This note discusses the recent paper "Some technical remarks on the proof of the no free lunch theorem" by Koppen (2000). In that paper, some technical issues were raised concerning the formal proof of the no free lunch (NFL) theorem for search given by Wolpert and Macready (1995, 1997). The present authors explore the issues raised in that paper, including the presentation of a simpler version of the NFL proof in accord with a suggestion made explicitly by Koppen (2000) and ...

## No free lunch theorems for optimization

IEEE Transactions on Evolutionary Computation, Vol. 1, No. 1. (06 April 1997), pp. 67-82, https://doi.org/10.1109/4235.585893

### Abstract

A framework is developed to explore the connection between effective optimization algorithms and the problems they are solving. A number of “no free lunch” (NFL) theorems are presented which establish that for any algorithm, any elevated performance over one class of problems is offset by performance over another class. These theorems result in a geometric interpretation of what it means for an algorithm to be well suited to an optimization problem. Applications of the NFL theorems to information-theoretic aspects of optimization ...
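The core NFL statement, that performance gains on some problems are exactly offset on others, can be verified exhaustively on a toy search space. This is an illustrative sketch, not the paper's framework: two fixed visiting orders stand in for arbitrary non-repeating search algorithms, and their average best-found value over all objective functions is identical at every budget.

```python
import itertools

X = [0, 1, 2]                      # tiny search space
# every possible objective function f: X -> {0, 1}
functions = [dict(zip(X, ys)) for ys in itertools.product([0, 1], repeat=len(X))]

def run(order, f, m):
    """Best objective value seen after m evaluations, visiting points in order."""
    return max(f[x] for x in order[:m])

for m in (1, 2, 3):
    avg_a = sum(run([0, 1, 2], f, m) for f in functions) / len(functions)
    avg_b = sum(run([2, 1, 0], f, m) for f in functions) / len(functions)
    print(m, avg_a, avg_b)  # the two averages agree at every budget m
```

The same equality holds for adaptive algorithms whose next query depends on values seen so far, which is the substance of the theorems; the fixed orders above are just the easiest case to enumerate.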