Why Robots? And Why Self Awareness?

This short paper offers one view on the two questions of 1) why anyone interested in improving computer hardware or software technology should be interested in robots and 2) why anyone should care about engineering self awareness.

Focusing on Biological Style Cognition

The answer to the question “why robots?” depends on one’s view of biological style cognition as a role model for computational systems. I suggest that biological cognition offers the very best examples of how machines could be better than they are today.

The very best computational hardware and software produced by human beings over the last half century pales in comparison to the products of Nature’s 3.8-billion-year R&D project known as evolution by natural selection. Biologists are fond of saying this is because Nature is cleverer than you are. But there is another reason.

The problem domains which human beings have thus far sought to address with artificial computing devices simply do not require the sophisticated cognitive architectures found in Nature’s creations. The world’s most powerful supercomputers and the most astounding AI programs lack the cognitive capacity of a six-week-old puppy. But that is because puppies have been forced to evolve powerful mechanisms to adapt their behaviour in the face of an information flow from the real world of literally gigabytes per second. They require cognitive sophistication to compress that information, extract from it what is relevant to their survival, and adapt their behaviour in subtle (or not so subtle!) ways to improve their chances of sending more puppies into the future to carry their genes.

The problem domains to which human beings apply computing devices require nothing of this, and there is no sense in which today’s computers have any sort of self interest which requires that they adapt their behaviour in the face of gigabytes of information flow. This accounts partly for the often spectacular success of classical AI in applying high-level algorithmic procedures with software running on digital computers. Without any need for awareness, self-motivated environmental exploration, or behavioural adaptation conditionally linked to gigabytes of information flow, the architecturally high-level algorithms of classical AI perform perfectly well in the more typical problem domains. These are generally characterised by (among other things) cleanly delimited boundaries typically known in advance, a predictable degree of contamination by environmental noise, and, by biological standards, comparatively tiny input size.

However, the future will be built not by those whose command of the traditional problem domains is marginally better than that of their competitors, but by those who have developed the tools to address bigger problems with complexities on a biological scale. A groundswell of opinion in cognitive science and artificial life is now moving toward the notion that the way to build systems capable of scaling to address these bigger problems — and the best way to apply ideas creatively appropriated from Nature’s 3.8-billion-year R&D project — is to focus on complete embodied cognitive architectures. In other words, harnessing the power of Nature’s secrets of sophisticated biological cognition requires building complete cognizers much as Nature has: systems equipped with sensory apparatus, motor output, and the capacity and need to direct their own interactions with their environment. Today’s computers do not need — nor can they take advantage of — biological style cognition. That is why taking fullest advantage of the lessons we can learn from Nature requires real robots.

The Importance of Having a Self

The easy answer to the question of why we might be interested in engineering a machine specifically to be self aware is that, to our knowledge, all examples of Nature’s most cognitively clever organisms are self aware. If we would like to create cognitive systems as capable as Nature’s, then self awareness ought to be on our list.

But this isn’t particularly convincing reasoning, because it says nothing about whether self awareness is an important feature of those clever natural organisms. (We might similarly reason that because so many fast cars are red, then if we want to build a fast car we ought to make it red. But surely its redness isn’t important to a fast car’s ‘fastness’.) So a more subtle answer is that, as I hinted in the section above, a cognitive architecture which is self aware is an especially efficient solution to the problems faced by an organism which must get on in a real and often hostile and unforgiving physical world. In particular, awareness of an organism’s own self — as distinct from other organisms and from its own environment — underwrites the capacity to model the organism under counterfactual conditions. In other words, an organism with the capacity for self awareness may also enjoy the capacity to model what its state and environmental relationships would be if conditions were other than they actually are. It is just this sort of creature, able to ‘test out’ non-real alternatives, which, as Popper famously put it, may allow its hypotheses to die in its stead.
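The counterfactual testing described above can be made concrete in a purely illustrative sketch (none of these names or numbers come from the article; the dynamics are invented for the example): an agent with a minimal self model simulates each candidate action on an internal copy of its own state and discards any action whose predicted outcome would be fatal — so the hypothesis, rather than the agent, dies.

```python
# Illustrative sketch: an agent whose "self model" lets it evaluate
# counterfactual actions in simulation before committing to one in the world.

class SelfModel:
    """Toy internal model of the agent's own state (here: position and energy)."""

    def __init__(self, position, energy):
        self.position = position
        self.energy = energy

    def predict(self, action):
        # Return the hypothetical state the agent WOULD be in if it took
        # 'action' — conditions other than they actually are.
        hypothetical = SelfModel(self.position, self.energy)
        if action == "move":
            hypothetical.position += 1
            hypothetical.energy -= 2
        elif action == "rest":
            hypothetical.energy += 1
        return hypothetical


def choose_action(model, actions):
    """Pick the action whose predicted outcome keeps the agent viable."""
    predicted = [(a, model.predict(a)) for a in actions]
    # Hypotheses that would 'kill' the agent (energy <= 0) die here, in the
    # model, instead of in the real world.
    survivors = [(a, s) for a, s in predicted if s.energy > 0]
    # Among the survivors, prefer the action that makes the most progress.
    best = max(survivors, key=lambda pair: pair[1].position)
    return best[0]


agent = SelfModel(position=0, energy=1)
print(choose_action(agent, ["move", "rest"]))  # "rest": moving is predicted fatal
```

The essential ingredient is that `predict` operates on a copy of the agent’s own state, distinct from both the environment and the agent itself — exactly the self/other/world distinction the paragraph above argues for.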

Further thoughts on architectures which may be responsible for self awareness, the selective advantage of such a property, and the relationship of self awareness to phenomenal consciousness appear in my book Mind Out of Matter. Also see the introduction to self models in the short paper ‘What is Self Awareness?’.

This article was originally published and was last reviewed or updated by Dr Greg Mulhauser.
