Reverse/Compilation

'Beware the Jubjub bird, and shun the frumious Bandersnatch!' - Lewis Carroll, Jabberwocky

the toxic nature of the objective stance 

Another way of describing globally available data is to call it objective. The idea of objective, or global, access to privileged mental data such as memories or teleological purpose is biologically implausible. The debates around teleology (the ultimate purpose of systems) that have bubbled away in the background for at least a century have arisen precisely because of the failure to understand the toxic nature of the objective stance.

simulate objectivity with INTERsubjectivity

As we have seen, the mind is clearly subjective through and through. Notwithstanding all of the psychophysical experiments about correlates of consciousness, we know this in an informal way by means of personal introspection, which tells us that our 'self' is located inside some kind of life-support capsule, a container called our body, equipped with windows (distal senses), a control dashboard with gauges (proximal senses), switches (many, mostly automatic) and control levers (few, mostly voluntary) which operate robotic grippers and tracks (arms/hands and legs/feet). Uexküll's solution to the identity (ie mind-body) problem is to divide the subjective space into three realms with German names: Innenwelt, Umwelt and Umgebung. These correspond, with minor differences only, to the three TDE levels, INFRAsubjective (TDE1), INTRAsubjective (TDE2) and INTERsubjective (TDE3).

The first instinct and easy temptation of programmers is to play god (thematically) by building large, global (objective) 3D models of the robot's world, without thinking about how they can be maintained. As soon as the robot's world changes, it (or rather, its proxy, the system/services programmer) faces a Hobson's choice*: should all world-state updates be incorporated into the global knowledgebase? Should they be included as soon as they occur, no matter how trivial? Who can tell in advance which change is trivial and which is critical? This question leads to the 'frame problem'.
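To make the maintenance burden concrete, here is a minimal sketch (my own illustration, with invented class and method names, not anyone's production code) of a 'god's-eye' knowledgebase: every world event forces a relevance decision over every stored fact, and without domain knowledge the only safe default is to re-check everything.

```python
# Toy illustration of the maintenance burden behind the frame problem:
# a 'god's-eye' world model must decide, for every event, which stored facts still hold.

class GlobalWorldModel:
    def __init__(self):
        self.facts = {}          # fact name -> value, the 'objective' knowledgebase

    def assert_fact(self, name, value):
        self.facts[name] = value

    def on_world_event(self, event):
        """Every event forces a relevance decision over the whole knowledgebase.

        Which facts does `event` invalidate? Nobody can say in advance which
        changes are trivial and which are critical, so the safe policy is to
        re-check everything - the cost grows with the size of the model.
        """
        stale = [name for name in self.facts if self.might_affect(event, name)]
        for name in stale:
            self.facts[name] = self.re_derive(name, event)   # hypothetical re-computation
        return len(stale)

    def might_affect(self, event, fact_name):
        # Placeholder relevance test: without domain knowledge we must assume
        # any event may touch any fact (the pessimistic, explosive default).
        return True

    def re_derive(self, fact_name, event):
        return ("recomputed after", event)
```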

PCT is the only solution to the ACE of global 3D world models

This approach is not only futile because of ACE**; it also ignores the robot's own reality. Like people, robots view the world through sensory windows with limited bandwidth and limited refresh rate. They simply cannot control all the low-level stuff. How can one coordinate all these inputs and outputs so as to maintain machine identity and structural integrity, and, most importantly, how does the robot get what it wants? This issue of the centrality of governance, the maintenance of one single locus of control, a gubernatorial center of gravity, if you will, necessitated the introduction of Perceptual Control Theory, or PCT.
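A single PCT loop can be sketched in a few lines (a toy example under my own simplifying assumptions, not the TDE implementation): the robot acts so that its perception tracks an internally held reference, and never consults a global world model at all.

```python
# Minimal single-loop sketch of the PCT idea: the controller acts so that its
# *perception* tracks an internally held reference value; it never models the whole world.

def pct_loop(reference, perceive, act, gain=0.5, steps=50):
    """Drive behaviour from perceptual error: output = gain * (reference - perception)."""
    for _ in range(steps):
        perception = perceive()            # limited-bandwidth sensory window
        error = reference - perception     # the only quantity the loop cares about
        act(gain * error)                  # output disturbs the world, not a model of it

# Toy environment: a scalar 'position' disturbed by the world and nudged by the loop.
state = {"position": 0.0}

def perceive():
    return state["position"]

def act(output):
    state["position"] += output + 0.1      # +0.1 is an external disturbance

pct_loop(reference=5.0, perceive=perceive, act=act)
print(round(state["position"], 2))         # settles near the reference despite the disturbance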

Moreover, if we re-examine the compilation view of intelligence, we see that it is subjective too! The compiler prepares a set of 'behaviors' (program executions) based not on an objective analysis of the 'world' (the computer platform), but on the subjective view of the programmer's needs and wants. The programmer sets out a finite list of goals, and as long as the compiler uses this finite, subjective viewpoint as input data, it usually (after debugging) succeeds.

The very idea of getting the compiler to create sets of potential behaviors directly from objective knowledge of all the situational contingencies in the world is unviable. Defining intelligence as compilation suggests another method of achieving the same overall goal, because compilation implies an INTERsubjective way of defining global understanding. Though initially limited by each subject's own capacities for behavioral execution and situational appreciation, the INTERsubjective stance is nonetheless able to draw from multiple subjects, via ongoing use of language (TDE3), since each subject employs an identical perceptual mechanism. Subjects differ only in terms of relative location: a group of spatially equivalent subjects will have identical perceptual control algorithms. This idea aligns naturally with so-called alternative (now becoming somewhat mainstream) schools of thought about intelligence, such as those involving affordances (defined by Gibson as an object's possibilities for action) and embodiment (Lakoff: 'mind is metaphor' - there is no such thing as a completely abstract idea; everything is grounded in perception and experience).

*Meaning no real choice at all. When fellow Flinders U alumnus Rodney Brooks first came face-to-face with this problem, his approach was, apparently, to give up on true AI (which, as we have seen, is top-down 'reality compilation') and instead explore the murky world of bottom-up A-squared-I-squared, using 'insectoids'. In terms of software, this amounts to parallel code compilation, creating effectively 'headless' arrays of independently operated (but physically conjoined and therefore virtually co-located) triggered action (reflex) modules. We recall from undergraduate studies that compilers come in two flavors, top-down and bottom-up.
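Such an array of reflex modules might be sketched like this (my own toy example, not Brooks's code; the sensor names are invented): each module pairs a trigger with an action, and no central model arbitrates between them.

```python
# Rough sketch of 'headless' parallel reflex modules: each module pairs a trigger with
# an action and runs independently of any central world model.

reflex_modules = [
    # (trigger predicate over a raw sensor dict, action) - names are illustrative only
    (lambda s: s["bumper"],        lambda: print("reverse")),
    (lambda s: s["cliff_ahead"],   lambda: print("stop")),
    (lambda s: not s["bumper"],    lambda: print("wander")),
]

def tick(sensors):
    """One sensorimotor cycle: every triggered module fires; nothing arbitrates globally."""
    for trigger, action in reflex_modules:
        if trigger(sensors):
            action()

tick({"bumper": True, "cliff_ahead": False})   # -> reverse
```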

**Algorithmic Complexity Explosions (ACE). AI suffers from ACE; BI doesn't. Why? A large part of the problem lies in the temptation for engineers to view information and data as globally available. This may be OK for computer programs of a 'certain' size (say < 10,000 lines), but it is not OK for AI programs, which are not only big (as in having many lines of code) but heavily branched (having many conditional options to consider).
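A back-of-the-envelope illustration (the numbers are mine, purely for scale): a linear program, however long, has essentially one execution path, whereas a modest number of independent two-way branches already yields an astronomical number of paths to consider.

```python
# Toy arithmetic (illustrative numbers only): conditional branching multiplies paths.
# A linear 10,000-line program has one main path; 30 independent two-way branches
# already yield 2**30 (~10^9) distinct execution paths to consider.

linear_paths = 1
branched_paths = 2 ** 30
print(branched_paths)   # 1073741824 - the 'explosion' in ACE
```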

There are clearly two ways of using the concept of intelligence. The first corresponds to forward compilation. This meaning equates intelligence with behaving intelligently, ie acting and reacting so as to get what you need by searching, amongst the opportunities currently available, for the things which have satisfied you in the past. Some simple reasoning may be used. Conventional solutions (programmed computing, or GOFPC) implement these functions, as in figure 18(a).

The second corresponds to reverse compilation. This meaning equates intelligence with perceiving intelligently, ie understanding why things happen the way they do by reverse engineering other subjects' (and objects') behaviors, resulting in knowledge about third-party drives. This is the type of intelligence which includes more familiar AI activity, such as complex reasoning software and pattern recognition hardware (eg CUDA). GOFAI has tried to implement these functions, with varying amounts of success, as in figure 18(b). The above analysis demonstrates that there is no theoretical reason why approaching GOFAI from a subjective stance should not succeed. Indeed, TDE theory says that is how BI does it.
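As a toy illustration of the two directions (the table and its entries are invented, not drawn from TDE), the following sketch treats memory as a small cognition-to-behavior table: the forward direction looks up a behavior that has satisfied the current drive before, while the reverse direction infers which drive could have produced an observed behavior.

```python
# Hypothetical sketch of the two directions of 'compilation' described above.

memory = {
    # cognition (drive/goal) -> behavior that has satisfied it before
    "hungry":  "walk to kitchen",
    "thirsty": "fill glass",
    "bored":   "open book",
}

def forward(intent):
    """Forward compilation: behave intelligently - act on what satisfied you in the past."""
    return memory.get(intent)

def reverse(observed_behavior):
    """Reverse compilation: perceive intelligently - infer the third party's drive."""
    return [c for c, b in memory.items() if b == observed_behavior]

print(forward("hungry"))              # 'walk to kitchen'
print(reverse("walk to kitchen"))     # ['hungry'] - the adduced drive
```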

Reverse Cognition is inference (induction) of thought from behavior. 

We define Reverse Cognition as inference (induction) of thought from behavior. Reverse Cognition (RC) is equivalent to Retroductive reasoning*, first described by the American logician Charles Sanders Peirce. RC is achieved in practice by storing the (Cognition, Behavior) pairs in long-term memory thus: {(C1,B1), (C2,B2), ..., (Cn,Bn)}. When the subject must reverse compile an observed behavior, Bj, it finds the matching Cj by means of interpolation between adjacent function points Ci and Ck - see figure 19(a). Mathematically, this is equivalent to finding the inverse of the cognition-behavior mapping function. An inverse can only be found for a true (1:1) function, not an ambiguous (1:n) relation. If an insufficiently large memory dimensionality has been chosen, a function will look like a relation and possess several y-values for a given x-value. Conversely, to convert an ambiguous relation into a uniquely mapped (ie 1:1) function, one need only increase the number of salient dimensions, which involves adding one or more new features - see figure 19(b). The points can then be divided into uniquely mapped combinatoric groups.
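A minimal numeric sketch of this scheme (the scalar values of Ci and Bi are invented for illustration): the pairs are stored sorted by behavior, and an observed Bj is reverse compiled by linear interpolation between its neighbouring function points.

```python
# Numeric sketch of the stored-pairs idea: cognitions C and behaviours B are scalars,
# pairs are kept sorted by B, and an observed Bj is reverse-compiled by linear
# interpolation between its neighbouring function points.

pairs = [(1.0, 10.0), (2.0, 14.0), (3.0, 22.0)]        # (Ci, Bi), Bi strictly increasing (1:1)

def reverse_compile(bj):
    """Find Cj for an observed behaviour Bj by interpolating between adjacent (Ci, Bi)."""
    for (c_i, b_i), (c_k, b_k) in zip(pairs, pairs[1:]):
        if b_i <= bj <= b_k:
            t = (bj - b_i) / (b_k - b_i)
            return c_i + t * (c_k - c_i)               # inverse of the C -> B mapping
    raise ValueError("behaviour outside remembered range")

# If two different cognitions produced the same B, the mapping would become a relation (1:n)
# and a new feature dimension would have to be added before an inverse could exist.
print(reverse_compile(18.0))    # 2.5 - interpolated cognition between C2 and C3
```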

Another, entirely equivalent description is to model symbols with functions. We implicitly assume that tokens are identical copies, but this need not be the case. Figure 19(b) depicts a relation (upper graph) that has been converted to a function (lower graph). We can now map the linguistic concept of the unique, unambiguous symbol to the idea of a mathematical function f (lower graph), and similarly map the linguistic concept of the token (symbol copy) to the idea of a mathematical relation (upper graph)**. This is a parametric method of 'switching' between symbols and tokens - in other words, a way to make tokens without copying. In fact, this parametric method of token production by dimensional reduction is arguably more plausible from a programmer's viewpoint. It is also more compatible with call-by-reference subroutines (ie subroutines which take call-by-reference variables for input and output), such as those used by 'firmware' ROMs. By comparing a group of tokens to a relation, we separate the idea of a symbol (whose content y = f(x,z) we can vary by means of the index x) from the idea of the symbol's tokens (we can switch between token instances by varying the context z). By suitable variation in the parameter set (x, y, ..., z), eg swapping y for z, the total symbol dimensionality may be 'sliced' up to suit the required tokenisation hyperplane (iteration loop variable).
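The parametric switch can be sketched directly in code (my own toy example; the function names and placeholder content are illustrative only): the symbol is a function of a content index x and a context z, and a token is simply that function with z bound, so no copies are ever made.

```python
# Sketch of the parametric symbol/token idea: the symbol is a function y = f(x, z);
# varying the index x changes its content, while binding the context z 'slices' out
# one token instance without copying anything.

def symbol(x, z):
    """A symbol modelled as a function of content index x and tokenisation context z."""
    return f"word#{x}@context{z}"      # placeholder content

def token(z):
    """A token is the symbol with its context parameter bound (call-by-reference style)."""
    return lambda x: symbol(x, z)

t1, t2 = token(z=1), token(z=2)        # two tokens of the same symbol, no copies made
print(t1(x=7), t2(x=7))                # same content index, different context slices
```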

RC = RI

Interpolation in the forward direction is possible with both functions (1:1) and relations (1:n); interpolation in the reverse direction is possible only where the inverse is itself a function (1:1), since interpolation cannot be defined over an inverse relation (n:1). Interpolation in the reverse direction is called Reverse Interpolation (RI). Reverse compilation is mathematically equivalent to RI; therefore RC is equivalent to RI of behaviors. We have seen that the mind interpolates subjective effects (conscious percepts) in between saccades (animated keyframes), as in Libet's red and green light experiment. Our subjective mind observes the starting and terminating conditions only, as the effects to be explained. It uses RC to propose a plausible causal mechanism, one which makes excellent sense in a 'normal' world, ie not in a university psychology laboratory! The adduced cause is that the light starts out red, then morphs smoothly into green, apparently in mid-air, as it travels from the location of the red light to that of the green one. Some fairly straightforward arithmetic interpolation has clearly been computed, as in figure 19.
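The arithmetic involved can be sketched in a few lines (a toy version of my own, not the experimental analysis): given only the red and green endpoint keyframes, the in-between percepts fall out of ordinary linear interpolation.

```python
# Toy version of the 'straightforward arithmetic interpolation' described above: given only
# the start and end keyframes (red here, green there), in-between percepts are filled in by
# linear interpolation of colour.

RED, GREEN = (255, 0, 0), (0, 255, 0)

def interpolate(start, end, t):
    return tuple(round(a + t * (b - a)) for a, b in zip(start, end))

def in_between_frames(n=3):
    """Percepts adduced between the two observed endpoints."""
    return [interpolate(RED, GREEN, (i + 1) / (n + 1)) for i in range(n)]

print(in_between_frames())   # [(191, 64, 0), (128, 128, 0), (64, 191, 0)] - red morphing to green
```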

*Originally named Abduction, but renamed for obvious reasons. Retroduction (or Adduction) is the process of forming (optimal) explanatory hypotheses, according to Peirce. The term 'optimal' in brackets has been added by later philosophers of science; though it is not a part of Peirce's original ideas, it has been found necessary.

**A great deal of linguistic theory is dedicated to examination of the symbol-token (a.k.a. 'type-token') distinction. Linguists regard words as symbols/types. We must also be careful to distinguish between tokens and occurrences. The Stanford Encyclopedia of Philosophy gives a good overview of this topic.

Any robot which claims to possess human-like powers of speech must (a) know what words mean - words are encoded general knowledge, semantic generics; (b) be able to put them together - individuals (humans, robots, whatever) use the potential universal meanings of words to form semantic instances (what I mean here and now); and (c) keep track of symbol-token (type-token) distinctions at multiple levels of abstraction - this is AS MUCH a part of the task AS looking up dictionary meanings of words, understanding grammatical constructions and so on. Some linguists include this type-token tracking as part of pragmatics. Whenever an animal uses symbols, eg when a tiger sees the footprint of a deer, it presumably visualizes itself, in the near future, chowing down on a deerburger. The tiger's use of symbols and tokens is implicitly regulated by its cognitive governance of self-in-world models. That is, the hierarchical nesting of behavior and imagination models is an essential part of cognition. Surely the parallel governance of linguistic types and tokens in humans occurs in a similar manner.

In the TDE model, this task is achieved parametrically, not out of any theoretical constraint, but because the neural substrate uses call-by-reference mechanisms in its linguistic operations. A word is represented by a node (and its subsidiary sub-hierarchy) in a semantic hierarchy. At the semantic (not neural) level, each sentence consists of a sequence of catenae (descending or ascending links) joining the word nodes, so a sentence constructed of words is equivalent to that chain of catenae.

Specifically, the meaning of a message (backward pass) is obtained by following the ascending catenae, while the message of the meaning (forward pass) is obtained by following the descending links. Clearly, there is only one meaning to most messages, because we are following parent links (each node has a single parent), but there are many possible messages for a given meaning, because we are choosing among the children of parent nodes.
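A toy hierarchy makes the asymmetry concrete (the node names and structure below are invented for illustration): ascending links are unique because each node has one parent, so a message resolves to a single meaning, while descending links branch, so one meaning can be worded as several messages.

```python
# Sketch of the catena idea above: ascending links are unique (one parent), so a message
# resolves to one meaning; descending links branch (many children), so one meaning can be
# rendered as many messages.

parent = {                       # ascending catenae: child -> its single parent
    "spaniel": "dog", "terrier": "dog", "dog": "animal", "cat": "animal",
}
children = {}                    # descending links: parent -> its many children
for child, p in parent.items():
    children.setdefault(p, []).append(child)

def meaning(word):
    """Backward pass: follow ascending catenae - one path, one meaning."""
    path = [word]
    while path[-1] in parent:
        path.append(parent[path[-1]])
    return path

def messages(concept):
    """Forward pass: choose among descending links - many possible wordings."""
    leaves = children.get(concept)
    if not leaves:
        return [[concept]]
    return [[concept] + rest for c in leaves for rest in messages(c)]

print(meaning("spaniel"))        # ['spaniel', 'dog', 'animal']
print(len(messages("animal")))   # several alternative expansions of one meaning
```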

© 2018 Charles Dyer BE (Mech) BSc (Hons)