Some Objections and Replies on Self Awareness and Self Models

This article covers a few simple objections which might immediately leap to mind about the self model approach described in ‘What is Self Awareness?’.

Introduction

At first glance, it is easy to read into the self model approach things which aren’t actually there and to miss things which are there but not stated explicitly. Below, I’ve listed a few of the simpler objections which some might immediately raise against the whole way of thinking. Borrowing the words of my colleague Tim van Gelder at the University of Melbourne, in his recent paper for Behavioral and Brain Sciences: all of these objections mix insight with confusion to yield plausible but misguided attacks.

Objections and Replies

Objection: Are you saying algorithmic information theory explains self awareness?
Reply: No. I am using algorithmic information theory as a tool to describe concisely a way of making precise some intuitive notions of self and self awareness.
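
For readers who want a taste of the formal machinery, the standard notion of algorithmic mutual information from the literature (not anything peculiar to my account) gives the flavour. Writing K(x) for the Kolmogorov complexity of a string x (the length of the shortest program which outputs x), the mutual information content of two strings x and y is, up to logarithmic correction terms:

    I(x : y) = K(x) + K(y) - K(x, y)

Informally, I(x : y) counts the bits of description saved by encoding x and y together rather than separately. It is a quantity of exactly this kind, and nothing more mysterious, which the self model approach presses into service.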
Objection: I can think of many things with substantial mutual information content in which there’s no self awareness involved.
Reply: Suppose I were using differential calculus to describe how an aeroplane flies. One might object that one could think of many things related by differential equations in which there’s no flying involved. Obviously one can think of many things with substantial mutual information content in which no self awareness is involved. The self model approach does not equate the existence of mutual information content with self awareness.
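
To make vivid just how cheap mutual information content is on its own, here is a minimal sketch, purely my illustration and no part of the account itself, which estimates the information shared between two byte strings by using an ordinary compressor as a computable stand-in for Kolmogorov complexity, in the spirit of compression-based similarity measures such as Cilibrasi and Vitányi’s. Two logs which merely mirror one another share a great deal of information content, and nobody would be tempted to call either of them self aware.

    import zlib

    def c(s: bytes) -> int:
        # Compressed length: a crude, computable stand-in for the
        # Kolmogorov complexity K(s).
        return len(zlib.compress(s, 9))

    def shared_info(x: bytes, y: bytes) -> int:
        # Rough estimate of mutual information content: bytes saved by
        # compressing x and y together rather than separately.
        return c(x) + c(y) - c(x + y)

    weather = b"rain rain sun rain sun sun rain " * 50
    mirror = bytes(weather)            # a mere copy of the weather log
    noise = bytes(range(256)) * 7      # unrelated filler of similar size

    print(shared_info(weather, mirror))   # large: the copy shares everything
    print(shared_info(weather, noise))    # near zero: nothing in common

The sketch makes the negative point only: substantial shared information content is utterly commonplace, which is precisely why the self model approach never treats it, by itself, as self awareness.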
Objection: AI has long produced systems with internal states that bear information about the outside world, and sometimes that information is actively gathered and updated by the system. Either all those systems are self aware, too, or there is nothing really new in the self model approach.
Reply: The inference from the first statement to the second depends critically upon mistakenly equating self models solely with bearing information about the outside world and actively gathering and updating that information. If two approaches each use the word ‘model’, this alone of course does not make them equivalent. It is good that AI has produced systems with internal states that bear information about the world, etc. — but this satisfies one and only one of the requirements for being a self model.

Objection: This sounds a lot like situation semantics…or cybernetics…or (fill in your favourite)…and that was tried many years ago, so there is nothing really new in the self model approach.
Reply: If you squint hard enough and ignore the fine details, recursive analysis looks a lot like Bayesian statistics, too. (After all, they both use words like ‘convergence’, don’t they?) But painting (tarring?) them with the same brush can only be accomplished by ignoring the structure of the respective theories. Superficially, many approaches in cognitive theory and artificial intelligence claim to use one or another variety of information theory in some way. Most of them also use addition and subtraction in some way, but that doesn’t mean they’re all the same.

Objection: AI tried using information theory years ago, so there is nothing really new in the self model approach.
Reply: By analogy: AI tried using LISP and Prolog years ago, so there is nothing really new in anything whatsoever that AI does today (using those two languages). Do you believe that? Nuff said.

Closing Note

In the very brief account of self models given in ‘What is Self Awareness?’ — which is, incidentally, a quick and easy summary of around one third of Mind Out of Matter — a great deal rides on notions like functional representation, functionally active, conditional coupling, and so on. Much of the subtlety of this whole way of looking at cognition depends upon the mathematical definitions underlying each of these, and reading the account for what it actually says means leaving at the door any baggage attaching to what one ordinarily means by words like ‘representation’, ‘functional system’, and so on. It is these subtleties which are largely responsible for differentiating the approach I have suggested from umpteen other theoretical frameworks which ‘sound’ similar if there’s a lot of background noise.

Very often, what everyone seems to think they mean by words like ‘representation’ turns out to be grossly incoherent under careful scrutiny. When pressed, what people think they mean turns out to be not quite what they wanted to mean, and a discussion built upon the former gets repeatedly remade as they cast about for a way of clarifying the latter. I have gone to a great deal of effort to expunge imprecise and muddy — but not uncommonly used — definitions and concepts from my account of cognition in general and the self model in particular, offering carefully constructed and precise replacements. If thinking for more than a minute or two about self models is to be worthwhile, it requires entertaining the replacements, even if only temporarily.

Finally, someone might fire off a parting-shot objection: this is (just) philosophy, and not ‘cognitive robotics’. And so it is. But I strongly believe that the greatest impediment to scientific and technological progress is not lack of funding, or a computer that runs Windows — but confused thinking. Get the concepts right in the first place, and it will make the rest a good bit easier.

Further comments, intended specifically to differentiate the self model approach from traditional AI, appear in “Isn’t This (Just) AI?”.
