# Research Bibliographies, Page 5

This bibliography is like a glimpse into the past, an artefact of my time with BT’s Complex Systems Laboratory. Originally intended for internal use within a research group and later made available to the public, the bibliography has not been actively maintained since 2000.

## Neural Networks

It seems strange to include this tiny section here, since my previous work paid a great deal more attention to neural networks. Nonetheless, in keeping with the aim of recording notes on resources for ease of reference later, I’ll maintain this section like the others. There is also at least one relevant entry in the section on Chemical Computing (Other). For other items on neural networks, the bibliography for *Mind Out of Matter* is probably a better place to visit.

Bartfai, G. (19??) ‘An Adaptive Resonance Theory-based Neural Network Capable of Learning via Representational Redescription’, unknown. {A colleague gave me a paper copy of this article, and I haven’t yet tracked down the actual reference.}

This short paper reviews Grossberg’s adaptive resonance theory and the representational redescription framework of Karmiloff-Smith (1992) before introducing a modification to the ARTMAP architecture which permits it to learn relational input-output dependencies which cannot easily be extracted solely from the statistics of input and output data. The mechanism can be understood as effecting a process by which knowledge embedded *in* a network can be redescribed and made available *to* it for further learning. Unfortunately, the means whereby the critical ‘feature recombinators’ at the heart of the architecture can be discovered are not addressed. On the last page, the author acknowledges that the problem is, in general, intractable — but wonders if genetic algorithms might be applied to the search. So while the paper is genuinely interesting, it does smack of magic.

Maass, W. and C.M. Bishop, eds. (1999) *Pulsed Neural Networks*. London: MIT Press.

This is a super book spanning the range of current research in pulsed neural networks which, unlike the common computational model of networks shuffling continuous variables from one node to the next, rely on the timing of neuronal spikes to encode information and perform computations. The first half of the book contains several longer tutorial articles from the perspectives of neurobiology, hardware, and theory, while the second half comprises shorter articles on the latest research work. The book’s twenty-eight contributors cover everything from population dynamics, phase locking, synchronisation and temporal binding to subthreshold analogue VLSI and the role of noise.

Smith, L.S. and A. Hamilton, eds. (1998) *Neuromorphic Systems: Engineering Silicon from Neurobiology*. Singapore: World Scientific.

### Neuromorphic Systems

Deiss, S.R.; R.J. Douglas and A.M. Whatley (1999) ‘A Pulse-Coded Communications Infrastructure for Neuromorphic Systems’, in Maass & Bishop (1999), pp. 157-178. (available online)

Because CMOS VLSI technology limits the internal connectivity of neuromorphic chips (as well as their connectivity to the outside world), larger scale neuromorphic systems will likely have to be constructed from many chips working together. This article presents a communications protocol developed as part of the Silicon Cortex project which uses an asynchronous digital multiplexing technique dubbed ‘address event representation’ to allow several mixed analogue/digital neuromorphic chips to be incorporated into one system.
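
As a rough illustration of the address-event idea (the function names here are my own invention, not from the article): each spike travels over a single shared digital bus as the address of the neuron that fired, serialised in time order, and a receiving chip demultiplexes the stream back into per-neuron spike trains. A minimal sketch in Python:

```python
# Toy sketch of address-event representation (AER): rather than wiring
# every neuron to every target, each spike is sent over one shared bus
# as a (timestamp, source-address) pair. Purely illustrative.

def encode_aer(spike_trains):
    """spike_trains: dict of neuron address -> list of spike times.
    Returns a single time-ordered event stream for the shared bus."""
    events = [(t, addr) for addr, times in spike_trains.items() for t in times]
    return sorted(events)  # the bus serialises events in time order

def decode_aer(events):
    """Reconstruct per-neuron spike trains on the receiving chip."""
    trains = {}
    for t, addr in events:
        trains.setdefault(addr, []).append(t)
    return trains
```

Because only addresses and timing cross the chip boundary, the scheme lets many neurons share a handful of physical wires, which is the point of the Silicon Cortex infrastructure.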

Indiveri, G.; J. Kramer and C. Koch (1996) ‘Neuromorphic Vision Chips: Intelligent Sensors for Industrial Applications’, in *Advanced Microsystems for Automotive Applications 1996*, pp. ??. (available online)

This article briefly surveys several neuromorphic circuits used for vision and motion detection and describes an automotive application developed at CalTech which employs such circuits for the computation of heading direction.

Indiveri, G. and P. Verschure (1997) ‘Autonomous vehicle guidance using analog VLSI neuromorphic sensors’, in *Proceedings of the International Conference on Artificial Neural Networks 1997*, pp. ??. (available online)

A Koala mobile robot from EPFL Lausanne uses a one-dimensional silicon retina to achieve reliable autonomous performance in an edge tracking task.

Mahowald, M. and R. Douglas (1991) ‘A silicon neuron’, *Nature* 354: 515-18.

This early rendition of a CMOS analogue VLSI ‘neuron’, simulating the electrophysiological characteristics of one variety of cortical neuron, is a classic in the field of neuromorphic design. While Misha Mahowald tragically passed away early in her prolific career after moving to EPFL from CalTech, co-author Rodney Douglas continues with similar work in this area.

Rasche, C.; R.J. Douglas and M. Mahowald (1998) ‘Characterization of a Silicon Pyramidal Neuron’, in Smith and Hamilton (1998), pp. ??. (available online)

Extending the extraordinary silicon neuron of Mahowald and Douglas (1991), the authors demonstrate a CMOS aVLSI silicon pyramidal cell which approximates the Hodgkin-Huxley model for active neuronal membrane conductances. Dynamical characteristics of the cell, such as spike frequency adaptation and interspike interval distributions, closely mimic those of real pyramidal cells.

## Novel Computation & Computation Theory

Also see the section on neuromorphic systems under neural networks.

### Chemical Computing (Other)

Adamatzky, A.; O. Holland, N. Rambidi and A. Winfield (1999) ‘Wet Artificial Brains: Towards the Chemical Control of Robot Motion by Reaction-Diffusion and Excitable Media’, in Floreano, et al. (1999), pp. 304-13.

Reviews theoretical analysis and simulations of chemical controllers for object following, optimal path finding, and universal control; the controllers exploit wave processes in reaction-diffusion systems (such as the BZ system) and 2D excitable lattices. Fascinating possibilities, especially the portion on universal computation in excitable media (remember the gates of Fredkin & Toffoli? no mention of Landauer…). Questions about practical feasibility for real computing devices remain unresolved, though; the authors’ principal interest appears to lie in motor control for robots.

Hjelmfelt, A.; E.D. Weinberger and J. Ross (1991) ‘Chemical implementation of neural networks and Turing machines’, *Proceedings of the National Academy of Sciences USA* 88: 10983-87.

Describes chemical implementation of a McCulloch-Pitts style neural network using reversible chemical reaction mechanisms. Interesting, but the architecture does not involve learning, the actual chemistry of connecting large numbers of cells is not addressed, and the reactions require 4 + 4N + M reactants, where N is the number of neurons and M the number of connections. Also, the networks are primarily ‘configure & compute’ style, where the initial conditions are set and the system then settles to an answer (although the authors do address the possibility of altering neuron states over time via an externally supplied modulatory signal).
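
The scaling worry above is easy to make concrete. A two-line sketch of the species count as a function of network size (purely illustrative):

```python
# Species count for the chemical neural network scheme described above:
# 4 + 4N + M reactant species for N neurons and M connections.
def reactant_species(n_neurons, n_connections):
    return 4 + 4 * n_neurons + n_connections

# Even a modest 100-neuron network with 1000 connections needs
# 4 + 400 + 1000 = 1404 distinct chemical species.
```

The linear growth in distinct species, rather than mere quantity of reagents, is what makes the chemistry of large networks look daunting.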

### Computation Theory (General)

Budach, L., ed. (1979) *Fundamentals of Computation Theory: Proceedings of the Conference on Algebraic, Arithmetic, and Categorial Methods in Computation Theory*. Berlin: Akademie-Verlag.

Havel, I.M. and V. Koubek, eds. (1992) *Proceedings of the 17th International Symposium on Mathematical Foundations of Computer Science. Lecture Notes in Computer Science*, vol. 629. New York: Springer-Verlag.

Hemmerling, A. (1979a) ‘Systeme von Turing-Automaten und Zellularräume auf rahmbaren Pseudomustermengen’, *Journal of Information Processing and Cybernetics EIK* 15(1/2): 47-72.

Original reference introducing Hemmerling’s Systems of Turing Automata; discussed in English in Hemmerling (1979b).

Hemmerling, A. (1979b) ‘Concentration of Multidimensional Tape-Bounded Systems of Turing Automata and Cellular Spaces’, in Budach (1979), pp. 167-174.

First English treatment of Hemmerling’s ‘Systems of Turing Automata’, proving several properties of the machines.

Wiedermann, J. (1992) ‘Weak Parallel Machines: A New Class of Physically Feasible Parallel Machine Models’, in Havel & Koubek (1992), pp. 95-111.

Generalises the STA model of Hemmerling (1979b) to the case of parallel Turing machines.

Worsch, T. (to appear-a) ‘On parallel Turing machines with multi-head control units’, to appear in *Parallel Computing*. (available online)

Treatment of the generalisation of cellular automata called PTMs (Parallel Turing Machines), with particular attention to the definition of space complexity and resulting conclusions about PTM complexity.

Worsch, T. (to appear-b) ‘Parallel Turing Machines With One-Head Control Units and Cellular Automata’, to appear in *Theoretical Computer Science*. (available online)

Very technical treatment of complexity classes for parallel Turing machines and, thus, for cellular automata. (As the author notes in concluding comments on p. 31, in the extremes parallel Turing machines degenerate either to ordinary sequential Turing machines with one head — no parallelism — or to cellular automata — full parallelism.)

### Configurable Hardware & Defect Tolerance

Heath, J.R.; P.J. Kuekes, G.S. Snider, R.S. Williams (1998) ‘A Defect-Tolerant Computer Architecture: Opportunities for Nanotechnology’, *Science* 280: 1716-21.

This excellent article describes a massively parallel computer designed at HP Labs and constructed from largely faulty FPGAs. Of particular interest is the suggestion that the same techniques used to route around defects in this architecture could be applied to routing around faulty components in a molecular-level computer: if we can perfect techniques for efficiently using a hardware substrate made of faulty FPGAs, the authors argue, we may be able to make similarly efficient use of a substrate comprising faulty molecular-level components. Tolerating flaws in molecular computers might in turn allow them to be produced at much higher yields and lower cost than ordinary silicon semiconductors, which must be virtually perfect.
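
The defect-tolerance idea can be caricatured in a few lines: run a self-test pass over the physical substrate, record which cells work, and map logical resources only onto the good ones. (The function names below are my own invention, not the authors’.)

```python
# Hypothetical sketch of defect tolerance via a defect map: logical
# resources are allocated only to physical cells that passed self-test.

def build_defect_map(test_results):
    """test_results: dict of physical cell id -> True if the cell works.
    Returns the sorted list of usable cells."""
    return [cell for cell, ok in sorted(test_results.items()) if ok]

def allocate(n_logical, good_cells):
    """Map n_logical logical cells onto working physical cells."""
    if n_logical > len(good_cells):
        raise RuntimeError('not enough working cells for this design')
    return dict(zip(range(n_logical), good_cells))
```

The hardware need not be perfect, only characterised: a design compiles against whatever resources the defect map reports, which is the property the authors hope transfers to molecular substrates.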

Ortega, C. and A. Tyrrell (1998) ‘Design of a Basic Cell to Construct Embryonic Arrays’, *IEE Proceedings on Computers and Digital Techniques* 145(3): 242-8. (May)

Introduces the authors’ ‘embryonics’ approach to fault-tolerant FPGA configuration; need to follow this up.

Ortega, C. and A. Tyrrell (1999) ‘Self-Repairing Multicellular Hardware: A Reliability Analysis’, in Floreano, et al. (1999), pp. 442-6.

Building on the ‘embryonics’ approach to fault-tolerant FPGA configuration introduced in Ortega and Tyrrell (1998), whereby fault detection and response are handled locally, at a cellular level, without a centralised monitor, this article describes the statistical properties of two different reconfiguration strategies. A short article of this type cannot be expected to answer every question which might suggest itself, but I couldn’t help wondering how each individual cell’s own self-checking would itself be implemented in a fault-tolerant manner. (I can imagine several different possibilities, but it would be interesting to see what specifically the authors have in mind.)

### Protein Computing

Birge, R.R. (1994) ‘Protein-Based Three-Dimensional Memory’, *American Scientist* 82(4): 348-55.

Photocycle of bacterial protein bacteriorhodopsin allows fast parallel reading and writing of information with arrays of differently coloured lasers. Also see Birge (1995).

Birge, R.R. (1995) ‘Protein-Based Computers’, *Scientific American* 272(3): 66-71.

Describes work of author and others at Syracuse on bacteriorhodopsin-based 3D volumetric memories and ‘holographic’ associative memory films. Devices exploit the photocycle of the bacterial protein bacteriorhodopsin, which includes intermediate structures of varying stability and varying absorption spectra, allowing fast parallel reading and writing of information with arrays of differently coloured lasers. Also see Birge (1994).

### Quantum Computing

For applications of quantum techniques in cryptography, see section on Cryptography and Cryptanalysis.

Gershenfeld, N. and I.L. Chuang (1998) ‘Quantum Computing with Molecules’, *Scientific American* 278(6): 66-71.

Very briefly summarises the basic technologies of quantum computing before recounting the authors’ work in creating the first chloroform-based quantum computer using nuclear magnetic resonance technology. This two-qubit computer successfully demonstrated Grover’s (1997) *O*(sqrt(*n*)) quantum search algorithm.

Grover, L.K. (1997) ‘Quantum Mechanics Helps in Searching for a Needle in a Haystack’, *Physical Review Letters* 79(2): 325-28.

Introduces an essentially quantum mechanical search algorithm for finding a marked item in an unsorted database. While classical algorithms require *n*/2 queries on average, Grover’s algorithm manages a reduction to *O*(sqrt(*n*)).
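
The amplitude-amplification step at the heart of the algorithm is simple enough to simulate classically for small databases: repeatedly flip the sign of the marked entry’s amplitude (the oracle) and then invert all amplitudes about their mean (the diffusion step). A sketch, with the iteration count rounded from (π/4)·sqrt(*n*):

```python
import math

def grover_probability(n, marked, iterations):
    # Start in a uniform superposition over n database entries.
    amps = [1 / math.sqrt(n)] * n
    for _ in range(iterations):
        # Oracle: flip the sign of the marked entry's amplitude.
        amps[marked] = -amps[marked]
        # Diffusion: invert every amplitude about the mean.
        mean = sum(amps) / n
        amps = [2 * mean - a for a in amps]
    # Probability of measuring the marked entry.
    return amps[marked] ** 2

n = 64
iters = round(math.pi / 4 * math.sqrt(n))  # 6 iterations for n = 64
```

After only 6 iterations, versus an expected 32 classical queries for *n* = 64, the marked item is found with probability above 99%.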

## Physics (Misc.)

Cao, H., et al. (1999) ‘Random Laser Action in Semiconductor Powder’, *Physical Review Letters* 82(11): 2278-81.

Reports the first experimental observation of random lasing with coherent feedback in semiconductor powder where, in particular, the particle size is smaller than the wavelength, so a single particle cannot act as a resonator. Instead, the lasing is due to recurrent light scattering along closed loop paths in the powder, which act as ring cavities for the light. In a nutshell, laser pumping of a random semiconductor powder substrate causes random lasing because a comparatively few scattering paths in the powder happen to be closed, returning scattered photons to the site of their original scattering from the pumping beam in such a way that coherent photon emission can occur.

I originally followed up this article after an interesting suggestion that recently reported random lasing in biological cells on the surface of common vegetables might somehow be exploited for computational purposes; unfortunately, my tentative conclusion is that the essentially random nature of the laser emission would make practical configurable computing very difficult.

## Robotics

As in the case of the section on neural networks, it seems strange to include such a tiny section here, given the greater attention I focused on robotics during previous work (see, for instance, the information on the BT International Workshop on Robot Cognition). But the main purpose of this bibliography is to keep track of newly encountered resources for later reference, so here goes… For other robotics references, please see the section on neuromorphic systems.

Triantafyllou, M.S. and G.S. Triantafyllou (1995) ‘An Efficient Swimming Machine’, *Scientific American* March 1995: 40-48.

Primarily about the remarkable hydrodynamics of fish propulsion rather than robotics *per se*, this article reports on discoveries which help to explain how fish can swim so efficiently and on initial versions of a 49-inch robotic tuna which capitalises on the basic principles the authors have uncovered. It explains how fish flap their tails to create counter-rotating vortices that produce an incredibly efficient propulsive jet, and discusses how fish may also exploit other turbulence existing in the water around them, sometimes achieving efficiency which, as a result, exceeds 100%. I could not help but think of this article in terms of an analogy: fish swimming stands to normal mechanical propeller drives roughly as biological methods of computation stand to normal ‘brute force’ digital electronic computation. In each case, the biological solution exploits the physics of its environment to achieve phenomenal efficiency, while the manually engineered solution labours away with a great deal more energy at lower efficiency to achieve what is, for many applications, an inferior result.


This article was originally published by Dr Greg Mulhauser.
