The Chemistry Analogy: Strategic Motivations for Large Scale ‘Cognitive Robotics’
By way of an analogy comparing the levels of description associated with chemistry and quantum mechanics, this short paper outlines motivations for attempting to construct cognitively sophisticated robots even before all behaviours of the components which might be used in such an endeavour have been fully analysed.
Research in biological cognition covers levels of description ranging from molecules to societies. While the interests of artificial life typically focus on the more finely grained end of this range, artificial intelligence typically concerns itself with more coarsely grained phenomena. Where researchers in artificial life count it a success when they demonstrate surprisingly sophisticated behaviour from a set of simple units interacting through simple rules, specialists in artificial intelligence aim to capture the characteristic features of a system’s behaviour by abstracting away from unnecessary detail at lower levels. (Generalising about the structure of fields as rich and diverse as artificial life and artificial intelligence can be dangerous, but nothing here rides on these generalisations; I will use them merely for setting the scene.)
Research in mobile robotics falls roughly into similar categories: either behaviours are programmed explicitly, with rule-based constraints, or behaviours are designed to emerge through the interaction and self-organisation of a comparatively small number of simple units. Once again generalising wildly, robots built with an artificial intelligence slant can be very good at rapidly performing well-defined tasks under well-defined environmental conditions known in advance, but they cope poorly, if at all, with exceptions to those conditions. (One example is the industrial robot used to assemble automobiles.) Those built with an artificial life slant can be very good at adapting their control systems to environmental conditions which may not be known in advance, but their repertoires are limited to behaviours like phototaxis and wall following. (One example is the Khepera robot running various renditions of simple evolved nervous systems.)
To date, there has been little in between.
The analogy below begins with the artificial life style of robotics.
Suppose most scientists study only quantum mechanics. Each year, conferences feature quantum physicists presenting sophisticated papers about the kinds of quantum systems for which we have the right mathematical tools to complete a full analysis. These physicists take it as their job to provide that full analysis, and as a result they very understandably steer clear of systems for which the right tools are lacking. Occasionally some non-specialist pipes up and asks why they don’t work on bigger systems with more particles, but the physicists look at each other knowingly and reply, “just wait until you actually try to work with these things — then you’ll see just how difficult it is and realise you would have no hope of analysing those kinds of systems, nor of working out what went wrong when they do not behave as you expect”.
Suppose then that a previously unknown rogue group of chemists crashes one of the quantum mechanical parties and boldly proclaims that they’ve tried mixing oil and water and found it doesn’t work — but if they add a little detergent, it works a lot better. The chemists admit that they have no idea whatsoever why it works, because (in a world of only quantum physicists and rogue chemists) quantum chemistry hasn’t been invented yet, and they understand that the tools just do not exist to analyse the interactions of their compounds in full quantum mechanical detail. Many of the quantum physicists are baffled by why anyone would want to try mixing oil and water, let alone adding detergent, particularly when they can’t even explain the quantum mechanics of water yet. As far as they are concerned, the chemists don’t really understand anything about their mixtures, because they haven’t done a full quantum mechanical analysis. But some other quantum physicists start to wonder…
The important question in the above analogy is this: are the chemists adding usefully to scientific knowledge? Would they have added usefully to scientific knowledge even if they had discovered only that oil and water don’t mix, and detergent was just a distant bubble on the horizon?
Today’s roboticists who take an artificial life approach are much like the quantum physicists. I’m not sure where AI-inspired roboticists would fit in — house-builders, perhaps. But where are the chemists?
We have now reached the stage of technology where the analogues of basic chemicals, however rudimentary, have been created, and many of their low-level properties are at least partially understood: we have neurobiological theories of pattern recognition and of motor control, of audition and proprioception, and of even higher level cognitive phenomena. Some of these theoretical constructs now exist in analogue VLSI silicon. But few of these real hardware components have so far been used in autonomous systems research except singly. This is unsurprising, because (among other reasons) combining them compromises our chances of analysing them fully. And analysing them fully is part of the whole aim, isn’t it?
Understanding and explanation come in many forms and at many levels of description. What is a ‘full analysis’ to the chemist is grossly inadequate for the quantum physicist, but I believe the scientific integrity of (classical) chemistry is in no way compromised just because it doesn’t incorporate quantum mechanics. Likewise, I believe a ‘cognitive robotics’ which sets itself the task of creating something interesting out of parts which may themselves be highly complex should in no way be seen as scientifically lacking. Indeed, I believe the time for such a strategy is overdue.
Of Test Tubes and Bunsen Burners
If robotics researchers are to try the chemistry approach, combining disparate components often designed for specific applications, many tools must be developed and others refined. Some which immediately leap to mind fall roughly into the following categories.
Evolutionary methods for:
- Rapidly and reliably evolving components without real world testing for each and every generation
- Efficiently evolving away from unhelpful structures rather than toward desired ones (‘blind evolution’)
- Cheating on co-evolution by allowing members of a population to interact virtually (rather than building dozens of real robots)
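The first of these points can be made concrete with a minimal sketch. Everything below is a hypothetical illustration rather than any method from the text: a toy one-dimensional ‘robot’ with a two-parameter linear controller is evolved entirely inside a simulator, so no physical trial is needed for any generation.

```python
import random

def simulate(weights, light_x=10.0, steps=50):
    """Toy simulated trial: a 1-D robot starts at x=0 and should drive
    toward a light source at light_x. Its 'controller' is a linear gain
    and bias applied to the sensed offset. Fitness is the negative final
    distance to the light (higher is better)."""
    x = 0.0
    gain, bias = weights
    for _ in range(steps):
        sensed = light_x - x                              # idealised light sensor
        x += max(-1.0, min(1.0, gain * sensed + bias))    # clipped motor command
    return -abs(light_x - x)

def evolve(pop_size=30, generations=40, seed=0):
    """Minimal generational GA with truncation selection and Gaussian
    mutation. Every fitness evaluation happens in the simulator above,
    so no real-world testing is required per generation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=simulate, reverse=True)
        elite = scored[: pop_size // 3]                   # keep the best third
        pop = [e[:] for e in elite]
        while len(pop) < pop_size:
            parent = rng.choice(elite)
            pop.append([w + rng.gauss(0, 0.1) for w in parent])
    return max(pop, key=simulate)

best = evolve()
print(simulate(best))   # fitness near zero means the robot reaches the light
```

The obvious caveat, and the reason the tools still need development, is that real components rarely admit so faithful a simulator; the ‘reality gap’ between simulated and physical fitness is exactly what this category of tools must address.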
System architecture and methods for:
- Combining ontogenetic adaptation within a single robot with phylogenetic adaptation of its structure over generations of artificial evolution
- Reconciling disparate timing and power requirements for chips designed with different purposes in mind
- Eliminating the role of explicit calculation in structural adaptation; exploiting the inherent properties of the components to adapt themselves
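The first of these architectural points can also be sketched in miniature. The following is a hypothetical Baldwin-effect-style scheme, not a method from the text: each genome undergoes a short period of within-lifetime hill-climbing (ontogenetic adaptation), and selection across generations (phylogenetic adaptation) acts on the error achieved *after* that learning, so evolution favours genomes that learn the task quickly. The task, target value, and all parameters are toy assumptions.

```python
import random

def task_error(w):
    """Toy lifetime task: lower is better. The target value (0.7) stands
    in for an environmental regularity the genome cannot encode directly."""
    return abs(w - 0.7)

def lifetime_learn(w, rng, trials=20, step=0.05):
    """Ontogenetic adaptation: simple hill-climbing during one 'lifetime'."""
    for _ in range(trials):
        candidate = w + rng.gauss(0, step)
        if task_error(candidate) < task_error(w):
            w = candidate
    return w

def evolve(pop_size=20, generations=30, seed=1):
    """Phylogenetic adaptation: genomes are ranked by the error they
    achieve after lifetime learning, so good learners are selected even
    before their genomes encode the solution outright."""
    rng = random.Random(seed)
    pop = [rng.uniform(-2, 2) for _ in range(pop_size)]
    for _ in range(generations):
        post_learning = {w: task_error(lifetime_learn(w, rng)) for w in pop}
        pop.sort(key=lambda w: post_learning[w])
        elite = pop[: pop_size // 4]
        pop = elite + [rng.choice(elite) + rng.gauss(0, 0.1)
                       for _ in range(pop_size - len(elite))]
    return pop[0]

best = evolve()
print(task_error(best))   # inherited weight, before any lifetime learning
```

The design choice worth noting is that fitness is measured after learning, not on the raw genome; this is what lets the two timescales of adaptation cooperate rather than compete.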
Cognitive theory for:
- ‘Educated guessing’ at what cognitive architectures may yield interesting results
- Translation of theoretical cognitive architectures into real combinations of components
These are just a few of the most obvious; the list can no doubt be doubled with a few minutes of thought. My intention is just to sketch the general directions which might be explored if the notion of higher level robotic experimentation is to be taken seriously.
Finally, one other requirement which immediately suggests itself is the need on the part of researchers to be psychologically prepared for many negative results. Negative results are not, in and of themselves, a bad thing. The early alchemists had quite a few negative results, too, but they contributed greatly to scientific progress along the way. At the moment, the answer to most questions about ‘cognitive robotics’ is simple: we don’t know. It is likely that much of what we may come to know will also be simple: it doesn’t work. But it is also likely that not all such results will be negative. And that is the challenge. Chemistry owes much of its history to alchemy — and if I am right, the future of robotics will owe much of its history to ‘cognitive robotics’.
Who knows? We may even discover detergent.
The Future of Robotics
In my opinion, the brightest future for robotics lies between the artificial life and artificial intelligence extremes so crudely sketched above. Both artificial life and artificial intelligence face an ever mounting analytic burden while attempting to grow the size and sophistication of their systems. If the target is biological style cognition, I do not believe human beings are capable of either 1) building it from the ground up, one neuron at a time or 2) programming it at a high level, even with the most sophisticated of rule-based systems.
Somewhere in the middle, we stand a chance.
This article was originally published by Dr Greg Mulhauser.