Intelligence is a Bad Word
This 1997 paper defends the claim that typical pre-theoretic notions of intelligence do not map to a scientifically well defined quantity and cannot easily be sharpened to do so.
Although it seems at first glance that the proposition ‘Chris Winter is more intelligent than a duckbilled platypus’ could be unproblematically grounded in some empirically measurable quantity, the most immediately plausible narrowings of ‘intelligence’ provide the proposition with scientifically firm footing only at the cost of excluding from consideration other particular renditions of the general form ‘Chris Winter is more intelligent than X’, such as ‘Chris Winter is more intelligent than a spiny anteater’. Chris Winter, the duckbilled platypus, and the spiny anteater each occupy their own ecological niches and their own orthogonal corners of a vast multidimensional space spanned by axes for each of the truly empirical parameters one might attempt to stuff into a candidate sharpening of the notion of intelligence. ‘Intelligence’ is a bad word, and attempts to locate the intelligence of different organisms within some variety of universal metric space are bad science. The claim that Chris Winter is more intelligent than a duckbilled platypus is not a scientific claim.
1. Preliminary note on multidimensional quantification
Suppose, for the sake of argument, something which I shall argue below is false:
Naive Assumption F: Given some set of parameters which one might take to contribute to intelligence, every organism may be assigned a well defined value for each parameter within that set.
In other words, if we have chosen some set of empirically meaningful parameters which we take to describe intelligence, then naive assumption F (NAF, for short) suggests that the intelligence of any organism may be represented by the vector formed by the ordered list of values which an organism displays for each of those parameters.
If NAF were to be borne out, one seemingly promising way to compare the intelligence of different organisms might be through the norm, or the length, of their respective ‘intelligence vectors’. As it happens, this is very close to the framework underlying standard IQ tests, according to which a scalar intelligence quotient is derived from a weighted sum of scores in several sub-tasks which together make up intelligence tests such as the WAIS-R.
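The scalar-from-vector picture NAF suggests can be made concrete with a toy sketch. The Python fragment below assumes NAF holds and uses invented parameter names, weights, and scores (they are illustrative only, not sub-tests or weightings drawn from the WAIS-R or any real instrument) to show the two collapsing moves just mentioned: an IQ-style weighted sum and the norm of the ‘intelligence vector’.

```python
import numpy as np

# A minimal sketch of the NAF picture: if every organism earned a score on a
# common set of parameters, an 'intelligence vector' could be formed and then
# collapsed to a scalar, either IQ-style (weighted sum) or via the vector norm.
# Parameter names, weights, and scores are purely illustrative assumptions.

parameters = ["memory_span", "spatial_reasoning", "symbol_manipulation"]
weights = np.array([0.5, 0.3, 0.2])      # hypothetical weighting

chris    = np.array([7.0, 6.0, 9.0])     # hypothetical sub-task scores
platypus = np.array([3.0, 5.0, 0.5])

def weighted_score(v, w):
    """IQ-style scalar: a weighted sum of sub-task scores."""
    return float(np.dot(v, w))

def vector_norm(v):
    """Alternative scalar: the length of the 'intelligence vector'."""
    return float(np.linalg.norm(v))

print(weighted_score(chris, weights), weighted_score(platypus, weights))
print(vector_norm(chris), vector_norm(platypus))
```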
2. Preliminary note on natural kinds
In the language of philosophers of science, natural kinds are often described as the sorts of entities which figure in scientific laws and explanations. In Plato’s words, natural kinds “carve the world at its joints”. For example, the set of all protons is a natural kind, and both charge and momentum are natural kinds. By contrast, the set of all things the same age as the BT Tower is not a natural kind: there are no scientific laws which apply to all and exactly those items the same age as the BT Tower. Of course, there needn’t be anything mysterious or nonphysical about items which are not examples of natural kinds; after all, members of the set of things the same age as the BT Tower could be identified (relatively) unproblematically with empirical tests. A non-natural kind simply isn’t the sort of thing we should expect to figure in a scientific description of the world.
Entities may fail to be natural kinds for at least two different reasons. First, a property or a class of things (such as those the same age as the BT Tower) may be well defined but lack any scientific use. Second, a property (such as fuzziness) or a class of things (such as those which are duck-shaped) may be too poorly or vaguely defined for scientific use.
3. The problem(s) with quantifying intelligence
It turns out that NAF is implausible, and the underlying factors responsible for its failure also mean that ‘intelligence’ fails to be a natural kind for the second reason given above. But (worse, perhaps), even clinging to NAF will not keep ‘intelligence’ afloat as a natural kind; it still sinks under the weight of reasons of the first kind described above. I address each possibility in turn.
3.1 Why is NAF naff?
First consider the reasons why NAF is a naive assumption. It suggests, among other things, that no matter what tests we might set them, Chris Winter, a duckbilled platypus, a spiny anteater, and a sea cucumber all merit quantitative and empirically verifiable scores on each of those tests.
But consider a task such as balancing a cheque book: surely one measure of Chris Winter’s intelligence is his ability to balance his cheque book, and to exclude such a test would only disadvantage Chris Winter in any across the board comparison. Yet not only can the duckbilled platypus not balance its cheque book, it doesn’t even make sense to ask whether it can balance its cheque book. Instead, why not ask about the platypus’s ability to select the most appropriate conditions for keeping an egg at the right temperature for incubation? To omit such a test would place the platypus at a disadvantage, but including it reveals Chris as an incompetent egg incubator. Yet, as with cheque book balancing for the platypus, egg temperature balancing for Chris is right out in left field. On this view, particular scores (even zero scores) for such ecologically irrelevant tasks are scarcely relevant or meaningful.
One tempting way to circumvent this roadblock is simply to restrict the range of tests we might set Chris Winter, the duckbilled platypus, the spiny anteater, and the sea cucumber such that no single test excludes any particular organism solely on the basis of its ecological niche (a sort of ecological nondiscrimination policy). So asking how well an organism balances its cheque book is right out, since while Chris may be able to do it, the duckbilled platypus cannot, and asking how well an organism is able to locate and retrieve ants isn’t allowed, because the sea cucumber can’t manage it. The trouble, however, is finding any common set of empirically verifiable parameters which would allow meaningful comparisons between organisms to be made. Except where organisms occupy precisely the same ecological niche, any narrowing of the parameter set to accommodate some given broad selection of organisms will apparently disadvantage others by denying them dimensions along which their abilities may shine.
An alternative extreme, of course, is to allocate each organism its own full set of parameters, such that the spiny anteater can show off by tracking ants, the sea cucumber can show off by…doing whatever sea cucumbers do best, and so on. The obvious difficulty, though, is that now the ‘intelligence vectors’ of each organism live in completely different spaces, with utterly no potential for normalisation.
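To make that difficulty concrete, here is a small sketch of the ‘one parameter set per organism’ extreme, with invented niche-specific parameters and scores: the two score vectors share no dimensions at all, so there is no common basis on which any normalisation or comparison could operate.

```python
# A sketch of the 'own parameter set per organism' extreme: each vector is
# indexed by niche-specific parameters (names and scores are illustrative
# assumptions). With no shared dimensions, there is no principled way to
# normalise one against the other; asking which vector is 'longer' compares
# quantities that measure entirely different things.

chris_scores    = {"cheque_balancing": 8.0, "coupled_equations": 6.5}
platypus_scores = {"egg_temperature_control": 9.0, "electroreception": 7.5}

shared_dimensions = chris_scores.keys() & platypus_scores.keys()
print(shared_dimensions)   # set() -- an empty basis for comparison
```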
The upshot of all this is that the only immediately apparent ways of attempting to secure NAF, by sharpening the notion of intelligence with reference to particular sets of measurable parameters, encounter one of at least two difficulties: either the set of parameters turns out to be empty, because no common dimensions for measurement can be found, or the set of parameters must shift according to which organism is under consideration. In the first case, ‘intelligence’ becomes meaningless, while in the second case, ‘intelligence’ fails to be a natural kind because it does not refer consistently to any particular set of properties which together may feature in a scientific explanation. On this view, claims about intelligence are not, broadly speaking, scientific claims.
3.2 Orthogonal subspaces
Suppose some clever BT researcher overcame the difficulties canvassed above and produced a universal intelligence test, such that every organism really could earn some meaningful score on every component of the test. Perhaps she has broken down cognitive ability into such fundamental blocks (asking not about cheque book balancing, for instance, but about any cognitive ability functionally equivalent to solving sets of coupled linear equations) that this universal intelligence test avoids all charges of ecological bigotry and successfully reflects the basic underlying cognitive abilities of Chris Winter and sea cucumber alike.
Quite apart from the daunting methodological hurdles which would confront any experimenter attempting to test underlying atomic abilities through measurement of behavioural responses which, in the case of any particular organism, nearly always reflect composite cognitive abilities (remember, we need real measurements, not divinations), at least one serious analytical difficulty raises its head. Namely, if such a test truly does measure essentially atomic building blocks of cognitive ability, then surely measurements of at least some of those building blocks must live on axes which are orthogonal in the multidimensional space defined by the test. In other words, at least some of these building blocks must be completely independent of each other. (Otherwise, they are not atomic building blocks.) Returning to the ‘intelligence vector’ picture, this means that some intelligence vectors may be orthogonal to others (i.e., that the projection of one onto the other has zero ‘length’). Intelligence vectors for whole species might live in subspaces orthogonal to subspaces accommodating vectors for other species. And as far as normalisation is concerned, this is just as bad as the case above for vectors living in completely different spaces in virtue of different sets of parameters. That is, vectors in orthogonal subspaces are incommensurable.
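The point about orthogonality can be illustrated with another toy sketch. The atomic dimensions and scores below are invented for illustration; the sketch simply shows that when two ‘intelligence vectors’ load on disjoint atomic abilities, the projection of one onto the other has zero length, and comparing their norms amounts to comparing the lengths of quantities that measure entirely different capacities.

```python
import numpy as np

# A sketch of the orthogonal-subspaces worry: even in a single space of
# hypothetical 'atomic' abilities, two organisms' vectors may have no
# components in common. The projection of one onto the other then has zero
# length, so comparing their norms ignores the fact that they measure
# disjoint capacities. Four purely illustrative atomic dimensions; each
# organism loads on only two of them.

chris    = np.array([4.0, 3.0, 0.0, 0.0])
cucumber = np.array([0.0, 0.0, 2.0, 5.0])

projection_length = np.dot(chris, cucumber) / np.linalg.norm(cucumber)
print(projection_length)                                 # 0.0 -- orthogonal
print(np.linalg.norm(chris), np.linalg.norm(cucumber))   # lengths, but of what?
```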
In this case, ‘intelligence’ might pick out a perfectly well defined notion (just as the set of objects the same age as the BT Tower is reasonably well defined), yet it remains altogether worthless for scientific explanation.
4. Discussion and alternatives
It is clear that to be scientifically valuable, the pre-theoretic notion of intelligence must be sharpened by grounding it in some set of empirically meaningful properties or capacities. Yet that project is immediately beset by two related difficulties. First, properties or capacities with great relevance for one species may be ecologically meaningless for another. Circumventing this problem by conducting distinct measurements for different organisms renders comparisons between them impossible. Second, addressing the problem by measuring some set of fundamental underlying cognitive abilities common to all organisms allows that intelligence measures of different organisms may live in orthogonal subspaces, which is just as bad. Either way, even relative comparisons such as ‘Chris Winter is more intelligent than a duckbilled platypus’ lose any scientific credibility they might have enjoyed at first glance.
The fundamental difficulty underlying attempts to justify a broad, general purpose measure of intelligence is that (as Gerald Edelman observes), unlike theoretical constructions like Turing Machines, there simply are no ‘general purpose’ animals. There are only application-specific, integrated creatures, and each has evolved to meet the demands of a particular ecological niche. What counts as intelligent behaviour in one niche may look like brash stupidity in another; likewise, deep, contemplative behaviour may be wasteful and hazardous in an environment where simple reactive measures will do.
A worthwhile alternative, then, to the problematic notion of a broadly applicable scientific measure of intelligence seems to be some niche-relative measure of how successfully an organism or other system copes with its particular environment. Ideally, such a measure would reflect differences between the cognitive abilities of conspecifics (or perhaps only differences between conspecifics occupying similar corners of the species-typical niche, such as BT managers, space shuttle pilots, or chameleon breeders), while refraining from adjudicating across species or niche boundaries.
A short companion paper, ‘Information Theoretically Attractive General Measures of Processing Capability-And Why They are Undecidable’, addresses formal decidability limitations of some methods which might seem promising for quantifying a system’s overall ‘power’ independently of niche-specific success considerations. A second short paper, ‘Scientist or Engineer: Who Cares About Intelligence Measures?’, compares (speculates on?) scientific and engineering approaches to the development of a cognitively advanced artificial system and concludes that the general intelligence measure is an engineer’s desire masquerading in scientists’ clothing.