# Biological Cognition & Universal Computation

This article addresses common misconceptions about the formal limits of human and other biological cognition.

## It’s Just Obvious, Isn’t It?

The basic trap is seductively simple:

- “In principle, I can count as high as I like: 1, 2, 3, … 31,536,000, …”
- “I could construct infinitely many well-formed English sentences, given enough time.”
- “Given any formal system, I can always step into a higher level system and prove meta-theorems about the first.”

Hardly anyone seems to like the idea that they might be a ‘mere’ finite automaton. Much more attractive is the fantasy that we’re capable of universal computation and are at least as powerful as Universal Turing Machines, if not more so. And the capabilities above, all of which require more power than that of a finite automaton, seem so obviously part of the human repertoire.
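The pigeonhole point lurking behind these intuitions can be sketched in a few lines of Python: a machine with only finitely many states must eventually revisit a state while counting, after which two different counts become indistinguishable to it. (The toy `run_counter` function is purely illustrative, not a model of any real cognizer.)

```python
# Sketch: a machine with finitely many states cannot count without limit.
# By the pigeonhole principle, after more increments than it has states,
# some state must repeat -- and from then on the machine cannot tell
# two different counts apart.

def run_counter(num_states: int, increments: int) -> int:
    """A toy 'counter' confined to one of num_states internal states."""
    state = 0
    for _ in range(increments):
        state = (state + 1) % num_states  # the best a finite machine can do
    return state

# With only 10 states, counting to 7 and counting to 17 look identical:
assert run_counter(10, 7) == run_counter(10, 17)
```

Genuine unbounded counting requires unbounded memory, which is exactly what separates a Turing machine's tape from a finite automaton's state set.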

Philosophers like Penrose (1989) and Bringsjord (1992) have long argued that the mathematical and creative capabilities of human beings categorically exceed those of any formal system or machine operating by formal rules. Others, like Kripke (1980, 1982), maintain that the *meanings* of our concepts cannot even be captured by any formal system. Along these lines, Dummett (in an essay entitled ‘The Philosophical Significance of Gödel’s Theorem’, reprinted if memory serves in Dummett 1978) suggests that our notion of ‘proof’ cannot be captured formally, because any formal account will always miss out some higher level meta-proofs which that particular formal account cannot accommodate.

All these theorists will probably readily admit that *of course* the real world stands in the way of the kinds of ‘in principle’ claims suggested above: I’ll die before counting to infinity, I’ll only construct a finite number of sentences in my lifetime, and some formal systems might just be so big that it would take more than my entire career to generate any meta-theorems about them. But what seems to matter to them is the notion that *in principle*, I could still do all those things, *if* I lived forever, etc. At first glance, this seems as harmless as the fact that *of course* no Universal Turing Machine which computes a function and halts actually uses an infinite amount of tape: the point is that the fundamental architecture of the Universal Turing Machine guarantees universal power, since it has the *power* to use unboundedly much tape, provided that tape is available.

## Black Holes in the Universal Computation Ointment

But it turns out that any physical system which can be enclosed in a volume with a finite surface area can exist in only a finite number of distinguishable configurations. This fundamental result emerges from research on the thermodynamics of black holes. (Why be concerned with black holes? Because as Bekenstein 1981a, p. 287 notes, “black holes have the maximum entropy for given mass and size which is allowed by quantum theory and general relativity”: thus, any bound on the maximum number of internal configurations for a black hole applies with equal force to every other physical system.) It would appear to follow straightforwardly from this that the computational power of any such physical system is bounded by that of finite automata.
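To give a feel for the numbers, here is a back-of-the-envelope Python sketch of the Bekenstein bound, S ≤ 2πkRE/ħc, which caps the number of distinguishable states N of a system via ln N ≤ 2πRE/ħc. The human-scale figures used (roughly 70 kg enclosed in a sphere of radius 1 m) are illustrative assumptions, not measurements.

```python
import math

# Bekenstein bound: S <= 2*pi*k*R*E / (hbar*c), so the number of
# distinguishable internal states N satisfies ln(N) <= 2*pi*R*E / (hbar*c).
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
C = 2.99792458e8         # speed of light, m/s

def log10_max_states(radius_m: float, mass_kg: float) -> float:
    """log10 of the Bekenstein-bounded number of internal states."""
    energy = mass_kg * C**2                              # rest-mass energy
    ln_states = 2 * math.pi * radius_m * energy / (HBAR * C)
    return ln_states / math.log(10)

# Illustrative human-scale figures: ~70 kg inside a ~1 m sphere.
print(f"log10(N) ~ {log10_max_states(1.0, 70.0):.2e}")  # roughly 5e44
```

Around 10^(5×10^44) states is unimaginably vast, but it is still finite, which is all the argument requires.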

In other words, when it comes to real entities in a real physical world (as distinct from the logical constructions we might imagine but which do not exist), it would appear that biological cognizers such as human beings cannot have universal computational power. *Of course* we can say, if we really feel the urge, that if only there were no such physical barriers, then human beings could be as powerful as Universal Turing Machines. But this is little different to saying that if only finite automata weren’t finite, then they, too, could be as powerful as Universal Turing Machines.

In the absence of some independent reason to think either that 1) the physics is wrong or 2) human beings are not implemented physically, I believe the above reasoning should force us to abandon the notion that we have anything beyond the power of finite automata. By ‘independent reason’, I mean something beyond the intuition that that’s just how it has to be or that’s just how it ‘obviously’ is. Sure, every time I’ve been behind the wheel of a car (on a level surface, with good traction, with it in gear, etc.), pressing the accelerator brought it about that the car went faster. But in the face of our knowledge of basic physics, this is by no means enough to make us believe that *every time* I press the accelerator, the car will go faster. Stepping on the accelerator only works until the car reaches its fundamental limits. Likewise, the fact that every time someone has asked me to add 1 to a given number, I’ve been able to do it provides no rationale for believing that I am capable of adding 1 to any given number whatsoever.
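The accelerator analogy can be made concrete with a hypothetical fixed-width counter: incrementing works flawlessly in every case we happen to test, right up until the register's physical limit.

```python
# A physically implemented 'add 1' works only within the hardware's limits.
# Here an 8-bit register increments correctly 255 times, then wraps.

def add_one_8bit(n: int) -> int:
    """Increment as an 8-bit register would: correct until the limit."""
    return (n + 1) & 0xFF

assert add_one_8bit(41) == 42      # every observed case works...
assert add_one_8bit(255) == 0      # ...until the fundamental limit bites
```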

## So What?

Why should we care? What, if anything, is so bad about giving up the illusion that we’re so powerful?

For a start, philosophers concerned to argue that human-style cognition will never be engineered in artificial media such as silicon chips have lost an important ruse: they can no longer point to all the ‘obvious’ capabilities described above and then argue that because no finite machine could have those capabilities, we cannot be simulated by finite machines, and no finite machine could display our brand of cognition.

Beyond a few philosophers, I don’t think the conclusion needs to be read as at all depressing. First, note that the number of accessible states for systems of human temperature and size remains astronomically vast and continuously changing. It is not as if we are ever actually going to discover ourselves to be stuck in a limit cycle! Indeed, because we are open systems, continuously under modification, we do not even remain the *same* finite machines over time. Yes, I am a finite machine, but not the same one as I was yesterday. Second, given that the notion of universal computation is defined over an infinite space of functions, it would appear that no finite series of measurements could ever reject the hypothesis that a real physical system was only computationally equivalent to a finite automaton as opposed to some variety of universal computer. In other words, the hypothesis that a real physical system such as a human cognizer has universal computational power does not appear to be empirically meaningful. Only *theoretical* considerations can settle the issue, and as a result neither conclusion has any great relevance to what it means to be a human being.
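The point about empirical meaningfulness can be illustrated with a sketch: any finite record of stimulus/response observations can be replayed exactly by a plain lookup table, which is itself just a (very dull) finite machine. The `finite_machine_from_trace` helper is a hypothetical illustration, assuming for simplicity that each stimulus was observed with a single deterministic response.

```python
# Any finite record of stimulus/response pairs can be reproduced exactly
# by a finite lookup table -- so no finite set of measurements can rule
# out the hypothesis that the observed system is merely a finite machine.

def finite_machine_from_trace(trace: list[tuple[str, str]]):
    """Build a lookup-table 'machine' that replays an observed trace.

    Assumes each stimulus appeared with exactly one response.
    """
    table = dict(trace)
    return lambda stimulus: table[stimulus]

observed = [("2+2?", "4"), ("capital of France?", "Paris")]
mimic = finite_machine_from_trace(observed)
assert all(mimic(s) == r for s, r in observed)  # indistinguishable on the data
```

Distinguishing a genuine universal computer from a sufficiently large lookup table would require infinitely many observations, which is precisely why the hypothesis resists empirical test.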


This article was originally published by Dr Greg Mulhauser.
