Axiomata

And stood awhile in thought...

Axioms are a priori (feedforward) truths
When we find enough evidence to confirm a hypothesis, that hypothesis becomes a theory, or a posteriori truth. Not every fact is in doubt: axioms are situational rules which do not need to be proved but are assumed self-evident. They are a priori truths, taken as true without needing formal proof. Until hypotheses are proven, and in order to prove them, we accord them axiomatic status: we proceed as if we believe that they are true*. If we didn't, we couldn't use them to construct realistic test models.

Axiomata are used when the system is the hypothesis
When, however, the hypothesis is not one but many inter-related hypotheses, and the probative goal is much bigger than a simple experimental significance test, such as when it represents the structure and functions of a complete synthetic system like the TDE/GOLEM, then this hypothetical system/machine becomes worthy of its own special designation: an 'axiomaton'.

'Organisation' and 'Operation' describe the structural and procedural aspects of complex systems. 
The reason to introduce this particular abstraction is to make the implicit explicit. For example, even before the TDE is invoked, its generic feedforward-feedback functional pattern is axiomatically true. This is shown in figure 25. The part of the TDE which generates saccades, the feedforward part, is in the most generic case the part that creates the system's structure. It appears temporally prior to the other parts for an obvious reason: to use (operate) a system, it must first be built (organised). The categories 'organisation' and 'operation' are roughly comparable to the terms 'form' and 'function'.

Governance is recursively decomposed into feedforward command and feedback control. 
The feedforward block of the axiomaton in figure 25, labelled 'organisation', is the part that is analogous to the code or procedural part of a computer system; Turing calls this the 'a-machine', where the 'a' stands for 'automatic'. The feedback block, labelled 'operation', is the part that is analogous to the user interface or declarative part of a computer system; Turing calls this the 'c-machine', where the 'c' stands for 'choice'. In TDE theory, the 'c' also (and rather conveniently) denotes 'cybernetic'. Conventional mathematical predictive (imperative) modelling is done by the organisational part of computational systems in off-line mode, while cybernetic governance** is done by the operational part of computational systems in on-line mode. Note that cybernetic governance, which is a c-machine and runs on-line, itself comprises feedforward command and feedback control.

A simple automotive example will illustrate these rather subtle distinctions. Before starting to travel, you sit in front of a map and plan your journey. You are engaged in off-line modelling, of a predictive nature. Then when you start to (for example) drive yourself, you are engaged in on-line (real-time) governance of your motor vehicle, involving feedforward commands (use of steering wheel, accelerator, gears) to direct the vehicle along the planned route, and feedback controls (use of windshield, mirrors, steering wheel, brakes) to check that the commands (necessary causes) are followed (desired effects). Figure 25(b) depicts the nested nature of such teleological (purposeful, goal-directed) axiomata. The NAVIGATION plans are the off-line feedforward element. They have a much larger lead time than the DRIVING commands, which lead the vehicle motions, albeit with much smaller latency.
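
The off-line/on-line split in the driving example can be sketched in a few lines of code. Everything below (the route, the gain, the disturbance values) is invented for illustration and is not part of the TDE:

```python
# Sketch of the driving example: an off-line plan (a-machine) followed by
# on-line governance (c-machine) mixing feedforward command with feedback control.

def plan_route(start, goal, steps=10):
    """Off-line, predictive phase: precompute waypoints before moving."""
    dx = (goal - start) / steps
    return [start + i * dx for i in range(steps + 1)]

def drive(route, gain=0.5, noise=None):
    """On-line phase: command toward each waypoint, then correct by feedback."""
    position = route[0]
    trace = []
    for waypoint in route[1:]:
        command = waypoint - position        # feedforward command (steer, accelerate)
        position += command                  # act on the command
        if noise:
            position += noise.pop(0)         # disturbance (crosswind, camber...)
        error = waypoint - position          # feedback: observed deviation
        position += gain * error             # control: partial correction
        trace.append(position)
    return trace

route = plan_route(0.0, 10.0)
trace = drive(route, noise=[0.4] * 10)
```

With the gain below 1, the feedback only partially cancels each disturbance, so the final position slightly overshoots the plan; raising the gain tightens the loop. That is the sense in which governance, not the off-line plan alone, delivers the journey.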

Figure 25 is a clear illustration of the value of a recursive axiomatic design, the UGP. 

*Axioms are also the main method of creating a 'shared reality' amongst INTERsubjective cohorts. This is the way both cults and religions (in fact, any culture) work: control the 'givens', the set of things members believe as dogma, i.e. facts that are declared beyond doubt, sequestered from daily skepticism. Foucault's 'epistemes' are an almost identical concept. Foucault's research was in the humanities, yet it directly concerns the history and method of scientific discovery, especially the negative influence of groupthink on the individualistic pursuit of truth. Power is greatest when it never needs to be wielded. Axiomata/epistemes are those implicit beliefs of establishment powerbrokers which fly below the radar and are therefore deemed 'off limits' to individuals who see them as legitimate targets for generational renewal and/or scientific revolution.

**Governance is the preferred term for guidance or 'control systems'. Governance comprises feedforward command and feedback control.

This page is perhaps the most cryptically named, but for the person who is committed to building this 'cog', it is perhaps the most important. Let's rewind and ask a very simple question: can the TDE theory be empirically proved in the same way as you might conduct a psychology experiment? The answer is clearly negative. In a 'classic' experiment, great pains are taken to ensure that there is one and only one point of difference between the subjects and the controls. When the TDE, or indeed any other new theory of cognition, is proposed, there are so many points of difference that the very idea of an experimental control is unviable. That is why I have proposed the adducive* (retroductive) use of axiomata rather than the 'textbook' scientific method of proving a generality (a universal or ubiquitous hypothetical) by disproving (finding a rule-busting exception to) that generality's null hypothesis.

UGP = model creation + model governance = coding + execution [= mathematics + cybernetics]
Using the UGP as an axiomaton to structure an AI or 'cog' is the only way to make plausible synthetic intelligent agents. Like the Turing Machine (TM), its generality is far greater than immediately apparent. For example, what many people fail to realize is that mathematics (off-line model building) and cybernetics (on-line model usage) can each do many of the other's functions. That is, there is a large amount of overlap: (a) greater 'up-front' model building effort can result in more efficient 'back-end' (i.e. more automatic) code, or (b) off-line modelling can be traded for on-line nested feedback loops. This situation is caused in large part by an educational bias: in Western cultures, conventional mathematics, but not cybernetics, is taught in schools.
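
The overlap between (a) and (b) can be made concrete with a toy example, not drawn from the TDE itself: the same square root can be reached via a one-shot 'mathematical' call or via a 'cybernetic' feedback loop (Newton's method) that watches its own error:

```python
# Toy illustration of the mathematics/cybernetics overlap: the same result
# from a prebuilt off-line model or from an on-line feedback loop.

def sqrt_offline(x):
    """'Mathematical' route: one-shot use of a ready-made model."""
    return x ** 0.5

def sqrt_online(x, tolerance=1e-9):
    """'Cybernetic' route: iterate, watching the error and correcting
    (Newton's method) until the goal state is reached."""
    guess = x
    while abs(guess * guess - x) > tolerance:   # feedback: measure the error
        guess = (guess + x / guess) / 2         # command: correct the guess
    return guess
```

More up-front mathematics (route (a)) buys a cheaper run-time; more looping (route (b)) buys freedom from the prebuilt model. Which is 'better' depends entirely on how often the model's assumptions change.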

Brains treat voluntary (c-machine) processes as if they are structures
In the TDE, this 'axiomatic' (i.e. we derive it from the superveiling axiomaton) overlap of functions between the feedforward (structure, form) and feedback (process, function) constituents becomes quite important in answering some basic, though clearly far from simple, questions, such as 'what happens when I will myself to walk across the room?'. The answer can only be the following: so-called 'conscious' movements (actually they should be called 'volitional', or 'voluntary') are managed/governed as a series of relatively slow, synchronous transitions between static postural 'snapshots'. That is, our brains treat voluntary (c-machine) processes as if they are (nominally) structures! We already know what kind of FSM we need for this: a Moore machine, a kind of ROM. Each state in the Moore machine completely determines the next state; more precisely, the input required for deterministic operation is provided a priori (a choice) by the operator, which is what Turing meant by c-machine. The sequence is INPUT --> PREVSTATE --> {CLOCKPULSE} --> NEXTSTATE --> OUTPUT.
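
A minimal sketch of such a postural Moore machine might look as follows. The posture names and the transition table are invented for illustration, not taken from the TDE:

```python
# A toy Moore machine over postures. Transitions are synchronous (one per
# clock pulse) and the output depends on the state ALONE (the Moore property).

MOORE_TRANSITIONS = {                 # (state, input) -> next state
    ('both_feet_level', 'step'): 'left_foot_forward',
    ('left_foot_forward', 'step'): 'right_foot_forward',
    ('right_foot_forward', 'step'): 'left_foot_forward',
    ('left_foot_forward', 'halt'): 'both_feet_level',
    ('right_foot_forward', 'halt'): 'both_feet_level',
}

MOORE_OUTPUTS = {                     # output is a function of state only
    'both_feet_level': 'standing',
    'left_foot_forward': 'mid_stride',
    'right_foot_forward': 'mid_stride',
}

def clock_pulse(state, operator_choice):
    """One synchronous tick: INPUT --> PREVSTATE --> {CLOCKPULSE} --> NEXTSTATE --> OUTPUT."""
    next_state = MOORE_TRANSITIONS[(state, operator_choice)]
    return next_state, MOORE_OUTPUTS[next_state]

state = 'both_feet_level'
for choice in ['step', 'step', 'step', 'halt']:   # choices supplied a priori by the operator
    state, output = clock_pulse(state, choice)
```

The 'step'/'halt' inputs play the role of the operator's a priori choices: with them fixed in advance, the machine runs deterministically, which is exactly the c-machine-treated-as-structure point made above.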

Obviously, Moore and Mealy machines are logic circuits developed for hardware design. Using them in this context (brains) requires, I hope, not too much of a leap of faith, just some basic translation of terms. First, what do we mean by a CLOCKPULSE? The answer comes from the fact that the brain's neural circuits (the pyramidal ones, anyway) use meta-inhibitory coding. Consider the Bendix air safety brake that must be used on all trains and large trucks: each wheel's brakes have large (local) springs which, left to themselves, lock the wheels from ever rotating. The (global) air pressure system releases the brakes by pushing against the springs, allowing the vehicle to roll freely. When the driver pushes the brake pedal, she operates an air release valve, letting the springs momentarily slow the vehicle down. Braking operates by inhibition (air release) of inhibition (default wheel braking), hence the term meta-inhibition.

How does this truck/train solution apply to human brains? In our motor circuits we use inhibitory interneurons at a higher level (for example, dopamine is used when the adaptive changes are non-volatile) to interfere with excitatory ones at a lower level. This is, analogically speaking, precisely the same system as we described for the Bendix brake, except that we have used the term 'local' to mean 'low-level' and 'global' to mean 'high-level'. Same difference. What works for trains works for brains! The same arrangement is used for 'standby' states on televisions and DVD players. In fact, exactly the same thing happens when we put a computer to 'sleep' (suspended animation of its active working state) rather than shutting it down completely. I'll leave this one for the reader to think about. Hint: what is being inhibited, and by what?
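
The Bendix arrangement can be sketched as two stacked negations, which is all meta-inhibition is at the logic level. The function and argument names below are invented for the sketch:

```python
# The Bendix brake as meta-inhibition: a default-ON local inhibitor (the
# spring), itself inhibited by a global signal (air pressure).

def wheel_rolls(air_system_charged, brake_pedal_pressed):
    air_pressure = air_system_charged and not brake_pedal_pressed  # pedal vents air
    spring_brake_engaged = not air_pressure      # local inhibition, ON by default
    return not spring_brake_engaged              # rolling = inhibition of the inhibitor
```

Substituting an inhibitory interneuron for the spring and a higher-level release signal for the air pressure gives, on this reading, the neural version described above: the safe default is 'locked', and activity requires releasing the release.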

The (postural) Moore machine superveils the (reflexive) Mealy machines 
Imagine for the purposes of illustration that a person's locomotor** body consists of a torso with two legs, each with two jointed links, each leg being approximated by a double pendulum. When we learn to walk in infancy, we must first learn to maintain some key intermediate postures for a few seconds at least. These include, but are not limited to: left foot forward, right foot forward, both feet level, etc. These 'structural' forms are the raw material with which the infant crafts the performance known as 'walking'.

Posture = 'keyframe' and reflex = 'inbetweening' in computer-animation-speak
But how can we link these structural/postural 'islands' within the context of the procedural/reflexive 'sea' which is the highly dynamic act of perambulation? The solution is provided by another FSM called the Mealy machine, also a type of ROM. Unlike the Moore machine***, the Mealy machine is asynchronous.
The sequence is different: PREVSTATE + INPUT --> OUTPUT + NEXTSTATE.
Each link manages its own input and output symbols (joint angles) at the local level, asynchronously. The links change state as soon as they receive the new joint angles. Remember, this is all done 'under his eye', i.e. governed by the overarching goal of making the global transition from one posture to the next (think posture = 'keyframe' and reflex = 'inbetweening'). Note that all these mechanisms follow the UGP axiomaton perfectly: a c-machine 'master' superveiling a-machine 'slaves'.
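
One such asynchronous 'slave' link can be sketched in Mealy style, where the output depends on PREVSTATE and INPUT together and fires as soon as the input arrives, with no clock pulse. The target angle and step bound below are invented for illustration:

```python
# A toy Mealy-style link controller for one joint.

def mealy_step(prev_angle, target_angle, max_step=10.0):
    """PREVSTATE + INPUT --> OUTPUT + NEXTSTATE."""
    error = target_angle - prev_angle
    move = max(-max_step, min(max_step, error))   # OUTPUT: bounded joint rotation
    return move, prev_angle + move                # (OUTPUT, NEXTSTATE)

angle, moves = 0.0, []
while angle != 45.0:                 # global postural goal, set by the 'master'
    move, angle = mealy_step(angle, 45.0)
    moves.append(move)
```

The 45-degree target stands in for the keyframe posture commanded from above; the bounded moves are the reflexive 'inbetweening' that each link performs locally.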

Consider the following example:
Imagine you have been given a very warlike task: to write the code to aim an anti-aircraft gun. The jet planes always follow the same or similar trajectory in the sky, so you decide to measure the curvature of the trajectory and arrive at a fixed algebraic expression (containing polynomial and sinusoidal terms) which gives a very accurate 'fix' on the plane's path. OK, that's the spatial part of the model done. Now for the temporal component. The problem for the software agent is to 'press the trigger' at exactly the right moment. You find another fixed algebraic expression which gives the exact time interval between the moment when the plane enters the frame of the (digital video) viewfinder and the moment that the gun must be fired, assuming that the flight time of the shell is known. If you have done your job well, you can stand back and watch the whole process unfold by itself. This use of computation is what Turing called an a-machine, where the letter 'a' stands for 'automatic'. Commonly used equivalent terms are 'feedforward' and 'open loop'. Figure 25(a) depicts the top-level axiomaton, called the UGP, or Universal Governance Paradigm. I say 'the' and not 'a' because there can only be one such canonical machine, by definition****.
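
A toy version of this open-loop a-machine might look as follows. Every constant below (the trajectory coefficients, the intercept time, the shell flight time) is invented for the sketch:

```python
# Open-loop (a-machine) gun controller: the spatial and temporal models are
# fixed expressions computed up front, then everything runs automatically.

def plane_position(t):
    """Fixed spatial model of the trajectory, fitted off-line."""
    return 200.0 * t - 5.0 * t * t        # e.g. a couple of polynomial terms

def firing_time(intercept_t, shell_flight_time):
    """Fixed temporal model: pull the trigger early by the shell's flight time."""
    return intercept_t - shell_flight_time

aim_at = plane_position(10.0)             # where to point the gun
fire_at = firing_time(10.0, 2.0)          # when to fire, given a 2 s shell flight
```

Once `aim_at` and `fire_at` are computed, nothing watches the plane any more; the whole process unfolds by itself, which is precisely what 'open loop' means.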

Organisation/structure vs Operation/process 
Note that the feedforward module of the UGP is labelled 'organisation' and 'structure'. In the most general of all cases, construction of a framework, a 'chassis' used to preserve the more static, less labile organisational qualities of the system, MUST precede the installation of an 'engine': a CPU, a process or set of processes which execute the more dynamic, volatile and 'operational' qualities of the system/vehicle/subject/agent.

But we have likened the feedforward part of the UGP to the stereotypical role of computer software. This is another way to view software: as a structural component, something that must be set up BEFOREHAND so that operations can commence. If we let go of the solidity and materiality normally associated with the term 'structure', we are left with its 'core' residual character: its preparational nature, its foundational ('sine qua non') qualities. So it is with all 'static' mathematical models: their computational complexity is of little matter if they need only be executed once, as a 'startup' item.

Returning to the gun controller software task, we ask ourselves: what happens if only one small parameter changes in the spatial (plane trajectory) or temporal (shell flight time) part of the model? The answer is that we must redo the whole model, using the 'freshest' input data. Is there another way? Turing understood that the complementary technique to automaticity (the 'a-machine', a = automatic) is to use human (or robotically equivalent) judgement to track (dynamically monitor, 'watch') the jet plane as it flies across the crosshairs in the gunsight, and then decide when (i.e. willingly make the choice, as with Turing's 'c-machine', c = choice) to press the trigger, using a conscious estimate of the 'lookahead', i.e. the distance ahead of the plane that the gun is to be aimed.
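
A toy version of the c-machine alternative: rather than refitting the model when a parameter drifts, the loop below keeps re-estimating the lookahead from the freshest observation. All numbers are invented for the sketch:

```python
# Closed-loop (c-machine) tracking: watch the plane frame by frame and keep
# correcting the lookahead, instead of trusting a prebuilt trajectory model.

def track_and_fire(observed_speeds, shell_flight_time, gain=1.0):
    """Each frame, nudge the lookahead toward the freshest estimate."""
    lookahead = 0.0
    for speed in observed_speeds:               # 'watch' the plane, frame by frame
        desired = speed * shell_flight_time     # distance to aim ahead of the plane
        lookahead += gain * (desired - lookahead)   # feedback correction
    return lookahead

# The plane slows mid-flight; no model refit is needed, the loop just adapts.
lookahead = track_and_fire([200.0, 200.0, 180.0, 180.0], 2.0)
```

Compare this with the open-loop sketch: there, a changed speed invalidates the whole fitted expression; here, the change is simply absorbed on the next frame.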

Every modern software program contains both parts: (a) the feedforward, static part, usually running at a low level, and (b) the feedback, dynamic part, the user interface, which leverages 'analog' ('cybernetic') human judgement, working through the mouse and keyboard in most cases, so as to more effectively govern the overall program direction.

*This is the 'commonsense' form of reasoning discovered by the logician Charles Sanders Peirce. Although I have only seen the term 'retroductive' used once, I think it is a much better choice than either 'abductive' or 'adductive'.

**Of course, we swing our hands (and our hips) when we walk, and none of it would work without hinged feet for balance and traction, but as a first approximation, a torso with two double-link legs is good enough for now.

***In almost every hardware logic design text, they get you to convert a Moore machine implementation of some device (e.g. traffic lights or a lab-rat box feeder) to its equivalent Mealy machine.

****The reader is asked to challenge this assumption- why is it so universal / ubiquitous?

UGP explains distinction between compilers and interpreters

When I first studied computer science in the 1970s, we used punched cards to write programs, one card per line of code, written in FORTRAN. The computer was an IBM 360, I think. Student programs would be submitted in batches of cards, wrapped up in a rubber band and put through the 'hole in the wall' of the university's DP (data processing) center on a Friday afternoon. The output (reams of folding computer printout) might be ready on Monday afternoon, if you got lucky. Woe betide the unfortunate who had bugs in their code! The people who worked on the other side of the 'hole' wore white lab coats. The idea that we would all do our own computer programming one day never even occurred to us.

Later, as I became more familiar with computers, I became more confident, as well as competent. However, one idea confused me. I found myself flummoxed by the difference between a compiler, which I knew was a system program that compiled other, user-submitted programs, and an interpreter. The latter was a system 'script' runner that executed top-level utilities on a line-by-line basis, according to the user's requests, the commands being typed on the 'command line'. Surely compilers consisted of lines of code, and presumably the compiler was executed one line after the other? What about batch files? What about intermediate-level code like JavaScript: is it compiled or is it interpreted? What about executing object programs directly by GUI (graphical user interface)?

Figure 26 portrays the use of the UGP axiomaton to understand everyday computation tasks (GOFPC). Figure 26(b) depicts stand-alone use of a compiled program, while figure 26(a) depicts line-by-line interpretation of system scripts. The two types of use are NOT independent of one another: scripts are imperative system instructions written in a form of simplified pseudo-English. Each line of a script has the form of a headless verb phrase consisting of two basic parts: (i) a program invocation (which functions like a verb) and (ii) in-line inclusion of zero or more input parameters (its specifiers and arguments).

Note that this form of computer-human sentence conforms to a Montague, not a Chomsky, grammar. 
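
A toy line-by-line interpreter makes the 'verb phrase' structure concrete. The two-command vocabulary below is invented for illustration:

```python
# A toy interpreter: each script line is a headless verb phrase,
# program invocation first, then its arguments.

COMMANDS = {                                   # the 'verbs' the interpreter knows
    'add': lambda args: sum(float(a) for a in args),
    'echo': lambda args: ' '.join(args),
}

def interpret(script):
    """Run the script one line at a time, c-machine style."""
    outputs = []
    for line in script.splitlines():
        if not line.strip():
            continue
        verb, *args = line.split()             # (i) invocation, (ii) parameters
        outputs.append(COMMANDS[verb](args))
    return outputs

results = interpret("add 1 2 3\necho hello world")
```

A compiler, by contrast, would translate the whole script into object code up front (the a-machine, 'organisation' side) before anything ran; the interpreter defers each decision to the moment the user's line arrives (the c-machine, 'operation' side).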

© 2018 Charles Dyer BE (Mech) BSc (Hons)