Throwing it All Away: An Information Theoretic Perspective on Analogue and Digital Computation

This 1997 draft describes what would appear to be an information theoretic advantage of analogue computation over digital. The central idea is that digital computation abstracts away from most physical properties of a computational substrate, thereby rendering the information content of the laws of physics inaccessible except when explicitly added back in.

Introduction

One central difference between digital and analogue computation is that the former abstracts away from the physical properties of its substrate, while the latter incorporates those properties directly into the computational framework. For the most part, it just does not matter whether a given algorithm for a digital computer (usually, a discrete time, discrete signal architecture) runs on fast chips or slow chips, big chips or little chips — or even vacuum tubes or something more exotic. Indeed, this is one of the most beautiful properties of digital computers! It is a good thing that we don’t have to replace all our software every time we buy a faster computer or change our existing computer’s physical parameters (such as by turning up the air conditioning).

Analogue models of computation, on the other hand, typically rely heavily on the physical properties of their substrate. An algorithm for an analogue computer (usually, a discrete or continuous time, continuous signal architecture) might not work in the same way on different machines, or even on the same machine under different physical conditions. Historically, this reliance on underlying physics has proved one of the most significant impediments to the practical use of engineered analogue computational systems (although, arguably, Nature has managed just fine putting analogue architectures into real evolved creatures).

However, in the face of this, I suggest that computational frameworks which abstract away from physics give up something important: the information content of that physics.

Information Theoretic Bounds on Computational Power

The field of algorithmic information theory offers a robust and mathematically precise way of quantifying hard bounds on the power of formal computational systems. The relevant result for present purposes is that no formal system with an information content of N bits can generate strings with greater information content than N + c, where c is a comparatively small constant. In other words, apart from the constant, one can never generate more information from a formal system than one puts in. (This is not to say that the formal system cannot generate more than this quantity of bits; it is just that those bits will form a string which is compressible into a form shorter than N + c bits.) This is one information theoretic analogue of Gödel’s incompleteness results.
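In the more usual notation of algorithmic information theory, the point can be sketched as follows (the symbols are an illustrative gloss, and a fully rigorous statement would carry some further qualifications): if a formal system F with information content I(F) = N bits deterministically generates an output string x, then

    K(x) ≤ N + c,

where K(x) is the (prefix) Kolmogorov complexity of x and the constant c depends only on the reference universal machine, not on F or x.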

Now consider the information content of the laws of physics, say I(p). Consider two formal systems f1 and f2, with respective information content I(f1) and I(f2). Suppose f1 and f2 are otherwise identical in information content, except that f2 additionally includes information about the laws of physics (which we assume to be algorithmically independent of the rest of the formal system). Then I(f1) = I(f2) – I(p), and the hard upper bound on the information content of strings which f1 may generate is lower than the analogous upper bound for f2.
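Spelled out, and again only as a rough sketch which ignores the small additive terms a careful treatment would track: if the physics information is algorithmically independent of the rest of the system, then

    I(f2) ≈ I(f1) + I(p),

and the respective ceilings on the information content of generated strings are roughly

    K(x) ≤ I(f1) + c          for f1,
    K(x) ≤ I(f1) + I(p) + c   for f2,

so that f2’s ceiling exceeds f1’s by approximately I(p).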

Of course it may be that f1 and f2 do not do the same kinds of things; some of the information content of one or the other may be gratuitous, so to speak, in that the structure of the formal system does not allow it actually to be used. But on the assumption that each allows its full information content actually to be used, f2 is information theoretically more powerful.

It is in this sense that, ceteris paribus, a computational system which includes information about the laws of physics offers more inherent power (measured in information content of outputs) than one which does not. On the face of it, this suggests that — again, in terms of information content of outputs, and not necessarily speed or accuracy or anything else — a digital machine which abstracts away from physics has lost something very significant relative to an analogue machine which preserves physics.

Objections and the Logical Depth Catch

There are two immediate objections to the relevance of this conclusion:

  • Fine, but if we allow the computers access to environmental information, such as some noise or other information-bearing inputs, then f1 can make up for its inherent lack of information content by grabbing it from the environment.
  • Moreover, if we just manually give f1 some information specifically about physics (say, in the form of a program), then it’s back on par with f2.

With respect to the first, it certainly is true that the environment is a rich source of information — but if we consider the addition of information which is algorithmically independent of both f1 and f2, then the advantage of the latter remains, because algorithmically independent information adds to the information content of each in lock step. It is only by adding information which is algorithmically independent of one system but not of the other (i.e., which bears substantial mutual information with that other system) that we can erase the difference. The obvious example is to give them information about the laws of physics, which is no gain for f2 but is a gain for f1.
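The same point in slightly more formal dress (still only a sketch): for environmental information e which is algorithmically independent of both systems,

    I(f1 + e) ≈ I(f1) + I(e)   and   I(f2 + e) ≈ I(f2) + I(e),

so the gap of roughly I(p) is preserved; whereas if e encodes the laws of physics already contained in f2, the mutual information I(e : f2) is close to I(p) while I(e : f1) is close to zero, and adding e raises f1’s ceiling by nearly I(p) but raises f2’s hardly at all, closing the gap.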

The answer to the second follows on from this point: yes, we can give f1 some information about physics, and the two will now have equivalent information theoretic power. But there is one last hitch, if we’re interested in comparing specifically digital and analogue systems. Specifically, the laws of physics are highly compressed representations of the behaviour of physical bodies, and to our knowledge, extracting descriptions of physical bodies and their behaviour from the concise low level laws of physics is computationally very intensive. In other words, combining the laws of physics with information about a system’s initial conditions allows us to calculate the behaviour of that system, but the calculation is thought to be difficult in the general case. (This sense of difficulty may be formally quantified in the framework of logical depth.)
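For reference, and hedged to the extent that formulations differ in detail: in Bennett’s treatment, the logical depth of a string x at significance level s is roughly the running time of the fastest program, no more than s bits longer than the minimal program for x, which outputs x. A deep string is one whose most plausible origin is a long computation from a short description, which is exactly the situation claimed here for descriptions of physical behaviour derived from concise physical law.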

Thus, it remains that while giving a digital computer information about the laws of physics may raise the upper bound on the information content of its outputs, actually producing many of those higher information content outputs may be very costly in terms of computational resources. An analogue system, on the other hand, may be in a better position because it needn’t actually undertake explicit calculations before the information content of the laws of physics can be reflected in its outputs — it may ‘just happen’. In other words, because the information describes how the analogue system operates anyway, it is not information which must be explicitly represented in a calculation.

This is the information theoretic justification lingering behind the intuitive idea that it is easier, for instance, to play with a yo-yo than to calculate the trajectory of a yo-yo being played with, based on information about the laws of physics which govern its oscillations. It is also the justification lingering behind the idea that, when it comes to information about the behaviour of objects in the real physical world, particularly when normal descriptions are of high logical depth, analogue systems may have an advantage over digital ones: it is computationally cheaper just to operate in accordance with the laws of physics than to calculate the effects of the laws of physics being applied.
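To make the contrast concrete, here is a minimal sketch of the digital route: explicitly stepping the equations of motion for a damped pendulum, a crude stand-in for the yo-yo. The function names and parameter values are illustrative assumptions rather than anything drawn from the argument above; the point is only that the digital machine must grind through every one of these explicit updates, whereas the analogue system gets the equivalent behaviour simply by existing under the laws of physics.

    # Digital route: explicitly integrating the equations of motion for a
    # damped pendulum, a crude stand-in for the yo-yo example. The analogue
    # route is to let the physical system itself do this work for free.
    import math

    def simulate(theta0=0.5, omega0=0.0, g=9.81, length=1.0, damping=0.1,
                 dt=1e-4, t_end=10.0):
        """Semi-implicit Euler integration; returns trajectory samples and step count."""
        theta, omega = theta0, omega0
        steps = 0
        samples = []
        t = 0.0
        while t < t_end:
            # Explicit update derived from the pendulum's equation of motion.
            omega += (-(g / length) * math.sin(theta) - damping * omega) * dt
            theta += omega * dt
            t += dt
            steps += 1
            if steps % 1000 == 0:
                samples.append((round(t, 3), round(theta, 4)))
        return samples, steps

    if __name__ == "__main__":
        samples, steps = simulate()
        print(steps, "explicit update steps for 10 seconds of behaviour")
        print("final sample (t, theta):", samples[-1])

Running the sketch reports on the order of a hundred thousand explicit update steps for ten seconds of simulated behaviour; the yo-yo itself, of course, performs no such bookkeeping.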

Nature vs. Computer Engineers

Following the above reasoning, it is interesting to note one thing which Nature has managed very well but we have not: Nature has, through the power of evolution by natural selection, created organisms which are adept at exploiting the inherent physics of their wetware substrates. Natural organisms have ‘learned’ how to apply the information content of their bodies in useful ways. In building artificial computational systems, we have failed grossly in this respect. Far from exploiting the information content of physics in our computational systems, we ignore almost all of it!
