Turing Equivalence
So rested he by the Tumtum tree...
Alan Mathison Turing was the brilliant yet controversial Mancunian mathematician who, according to Sir Winston Churchill, contributed more than any other single person to the Allied war effort against Germany. His greatest contribution to the theory of computation was the addition of an infinite, linear, discrete memory (an endless tape of cells) to the mathematical models of automata that existed at that time (the 1930s). This endless tape could model any linear combination of disparate elements or actions - the stored symbols did not need to be describable by a grammar or a taxonomy, which meant the tape could stand for virtually anything: internal system state memory or external world state, singular or plural, serial or parallel. By effectively inventing a 'computable' addressing scheme for general storage subtypes (buffering, archiving, FIFO queues, stacks etc), Turing expanded his machine to match literally any conceivable set of enumerable elements. In previous finite automata models, it was the paucity of internal configurations which limited the ability of a computational system to transform input symbols into output sentences, or to accept/reject input patterns. Turing designed a machine that could not only emulate any mathematical function, but could simulate any conceivable problem-solving procedure at all.
By possessing a storage addressing scheme that was itself part of the finite-state computations (and hence could both change them and be changed by them), Turing Machines could perform any sequentially defined task. Later, Chomsky was to characterize the Turing Machine as equivalent to the most powerful grammar type in his hierarchy, Type 0 (the unrestricted grammars). TMs provided a way to combine sequential/iterative operations (the tape, its syntax) with hierarchical/recursive structures (the states, their semantics). Until the invention of the Turing Machine, mathematicians were conceptually limited in their problem-solving techniques, namely to those that could be recursively defined. Such techniques are analytic in form: new techniques must follow from old*. After the TM, the iterative (synthetic) nature of mathematicians' real-world problem-solving behaviour could be emulated by machinery. More than that, the iterations of the machine itself could be used to perform trial-and-error searches through spaces of possibilities. This meant that mathematicians could not only 'ask' the machine questions (what is - analysis) but also ask it to perform experiments (what if - synthesis).
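The combination described above, a finite state table acting on an unbounded tape, can be sketched in a few lines of code. The machine below (a two-state bit-flipper) and all its names are my own illustrative choices, not an example from the text; it is a minimal sketch of the standard TM formulation, assuming the usual blank-padded tape.

```python
# Minimal Turing machine simulator: a finite state table (the "head")
# acting on an unbounded tape of cells, blank ('_') by default.
from collections import defaultdict

def run_tm(rules, tape_input, start, accept, max_steps=10_000):
    """rules: (state, symbol) -> (new_state, write_symbol, move in {-1, +1})."""
    tape = defaultdict(lambda: "_")          # endless tape: cells appear on demand
    for i, ch in enumerate(tape_input):
        tape[i] = ch
    state, head = start, 0
    for _ in range(max_steps):
        if state == accept:
            return "".join(tape[i] for i in sorted(tape)).strip("_")
        state, tape[head], move = rules[(state, tape[head])]
        head += move
    raise RuntimeError("no halt within step budget")

# Illustrative machine: flip every bit, halt on the blank ending the input.
flip = {
    ("scan", "0"): ("scan", "1", +1),
    ("scan", "1"): ("scan", "0", +1),
    ("scan", "_"): ("done", "_", +1),
}
print(run_tm(flip, "0110", "scan", "done"))   # -> 1001
```

Note that the state table both reads and rewrites the tape, while the tape contents steer the state table: exactly the mutual governance the paragraph above describes.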
The problem with adding extra power to any system or domain is matching the performance bonus with better governance (ie command and control). So it was with the TM. The trouble with sending the machine's tape head (which carries its state register) off on 'fishing expeditions' is the need to know whether it will ever catch any fish. Indeed, we need to know whether the machine will 'return' at all: stopping empty-handed, having found no solution, or never actually stopping. This is the so-called 'halting' problem. Turing demonstrated that his machine is so powerful that the question of whether it halts cannot ever be resolved in the most general case. What is of much more use is knowing exactly which class of computations a given machine will handle (ie for which inputs it will halt), and which it won't.
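The asymmetry here can be made concrete. Halting is semi-decidable: if a computation halts, we can observe it by running long enough, but "has not halted yet" never proves "will never halt". The step-bounded runner below is my own illustrative sketch (not from the text), driven by the Collatz iteration, whose termination for all inputs is conjectured but unproven, which is exactly the kind of question a general halting decider would have to settle.

```python
# Halting is semi-decidable: a bounded run can confirm halting,
# but can never confirm non-halting.
def bounded_run(step_fn, state, budget):
    """Advance a computation up to `budget` steps.
    step_fn returns None when the computation has halted."""
    for steps in range(budget):
        nxt = step_fn(state)
        if nxt is None:
            return ("halted", steps)
        state = nxt
    return ("unknown", budget)    # says nothing about eventual halting

# Collatz step: halt at 1, else halve evens and 3n+1 odds.
collatz = lambda n: None if n == 1 else (n // 2 if n % 2 == 0 else 3 * n + 1)

print(bounded_run(collatz, 27, 1000))   # -> ('halted', 111)
print(bounded_run(collatz, 27, 50))     # -> ('unknown', 50)
```

The second call illustrates the epistemic gap: exhausting the budget tells us nothing about whether a larger budget would succeed.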
It is debatable whether Turing invented** 'software', the name we use for turning hardware-like mechanisms into data patterns that can be processed just like any other input data. There is no doubt, however, that he was fully aware of the kinds of RASP (random access stored program) techniques which we have come to know as 'algorithms'. Figure 23(a) depicts how the 'unlimited' Turing Machine (the most universal type of FSA, a RASP), at a higher level of abstraction, can be emulated by a combination of a Moore machine (a synchronous, 'choice' c-machine, externally governed ROM) and a Mealy machine (an asynchronous, 'automatic' a-machine, internally governed ROM) at a lower, more concrete level. It is really a complete theoretical overview and definition of all modern computers.
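The two finite machines paired above have simple textbook definitions, sketched below with my own example (parity of a bit stream), not the author's diagrams: a Moore machine's output depends on its state alone, while a Mealy machine's output depends on its state and the current input.

```python
# Moore machine: output is a function of state only (emitted on entry).
def run_moore(trans, out, state, inputs):
    outputs = [out[state]]
    for x in inputs:
        state = trans[(state, x)]
        outputs.append(out[state])
    return outputs

# Mealy machine: output is a function of state AND input (emitted on the edge).
def run_mealy(trans, state, inputs):
    outputs = []
    for x in inputs:
        state, y = trans[(state, x)]
        outputs.append(y)
    return outputs

# Running parity of a bit stream, in both styles.
moore_t = {("even", 0): "even", ("even", 1): "odd",
           ("odd", 0): "odd",   ("odd", 1): "even"}
moore_o = {"even": 0, "odd": 1}
print(run_moore(moore_t, moore_o, "even", [1, 1, 0, 1]))  # -> [0, 1, 0, 0, 1]

mealy_t = {("even", 0): ("even", 0), ("even", 1): ("odd", 1),
           ("odd", 0): ("odd", 1),   ("odd", 1): ("even", 0)}
print(run_mealy(mealy_t, "even", [1, 1, 0, 1]))           # -> [1, 0, 0, 1]
```

The Mealy machine answers one step 'sooner' because its outputs ride on the transitions rather than the states, which is the usual intuition behind the synchronous/asynchronous contrast drawn above.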
Figure 23(c) depicts the GOLEM. While informally equivalent to the TDE (which is a recursive TM), it is free of halting issues because it is a cybernetic machine (a cybermaton, equivalent to Turing's choice-governed c-machine) which is goal-driven, ie it contains an inner representation of its computational goal, as well as a measure of the remaining distance between its current state and its terminating (goal) state. Cybermata are in this sense 'transparent', compared to Turing Machines, which are 'opaque': their current state gives no hint of how close they are to halting.
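The 'transparency' claim can be sketched in code. The toy machine below is my own illustration of the general idea, not the author's GOLEM: it carries an explicit goal and a distance measure, so an observer can read off at every step how far it is from halting. Termination is guaranteed here only because each step strictly reduces the distance, an assumption I am making for the sketch.

```python
# A toy goal-driven machine: explicit goal, explicit distance-to-goal.
# Its progress toward halting is externally observable at every step.
def cybermaton(state, goal, step_toward, distance):
    trace = [distance(state, goal)]          # observable remaining distance
    while distance(state, goal) > 0:
        state = step_toward(state, goal)     # assumed to strictly reduce distance
        trace.append(distance(state, goal))
    return state, trace

# Example: counting toward a target; distance is the steps remaining.
final, trace = cybermaton(
    state=3, goal=8,
    step_toward=lambda s, g: s + (1 if g > s else -1),
    distance=lambda s, g: abs(g - s),
)
print(final, trace)    # -> 8 [5, 4, 3, 2, 1, 0]
```

Contrast this with the TM simulator's opacity: mid-run, a TM's configuration carries no distinguished number that shrinks toward zero at the halt.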
*This interpretation of historical events is constructed from the author's own viewpoint and limited (non-mathematician) knowledge. The reader's indulgence is politely requested.
**John von Neumann claimed to have invented the idea of the stored program independently, without borrowing it from Turing.
***Turing called automata a-machines, biological computers b-machines, and what we might call 'user interfaces' (ie software governed by human choice) c-machines.
Having completed a brief summary of Turing's contribution to the theory of automata, we can now demonstrate that the TDE at level n is broadly comparable to a specialist Turing Machine, and that the TDE multi-level fractal is similarly comparable to a generalist Universal Turing Machine (see figure 24).
The L-lobe is comparable to the TM's state table (the one 'contained' within its moving head), except that it consists of a nested set of declarative goals, because the TDE is built upon cybernetic roots.
The TDE therefore represents a 'missing link' between computers (Turing Machines) and human and animal brains. Another way of saying this is that the TDE demonstrates, informally, that brains do the same things as computers, and vice versa.