Neural Substrate

Beware the Jabberwock, my son...

Functionalism 101

According to the idea known as 'functionalism', a given top-level computational goal (which could be called its overall or global function) can be implemented as a program on any computer, whatever its proprietary substrate or platform. In a limited sense, this is certainly true of most computers, as long as two things are available- (i) the source code of the program, and (ii) a compiler for the platform.

However, that is not the end of the problem, but rather the beginning of a larger one. Can we say the same thing about minds? Could we emulate the top-level mental functions of a human on a non-human low-level substrate such as a digital computer? That is, would it make any difference if we were not using neural circuits but silicon chips instead?

Computer scientists' common sense tells us that, IF minds are programs, THEN they conform to functionalism (ie substrate independence of computable functions)*. A belief in functionalism therefore implies that whoever implements all three levels of the TDE will have constructed the world's first true AI, irrespective of the platform they choose to do it on.

Functionalism can be restated as follows - a system's lower-level properties (eg physical behavior) are governed by its upper-level properties (eg its 'program'). Functionalism is therefore the 'dual' or complementary concept of Supervenience, the situation in which a system's upper-level properties are governed by its lower-level ones.

A couple of examples will serve to illustrate these ideas. Consider (modern) money, whose upper-level properties (eg its 'face' value) do NOT supervene on its lower-level characteristics, eg the paper or base metal it is made of. In the (historical) case of gold 'doubloons' or 'sovereigns', their value (a higher-level property) is clearly supervenient upon their actual precious metal content (22 carat), a lower-level property. What makes gold more valuable than sand is the (historical) idea of 'precious'- that rarity is synonymous with value. Numerical rarity can be considered a lower-level property, like number (plurality). We can now restate TDE/GOLEM research findings in functional terms-

IF an AI architecture is built from digital logic (hardware) using computer science (software), AND IF it has the tri-level TDE-R architecture, including three fractal levels of L-lobes ('Amotions'), THEN it will have an inner life, be able to introspect via the familiar concepts of self and memory (explicit and implicit), and use motivations and emotions to guide those internal decisions that shape its external behavior, not unlike the cognition of higher mammals. FURTHERMORE, the tri-level architectonics, especially the INTERsubjective nature of the third level, constitutes a competent human language speaker and learner via a Language Acquisition Device (LAD). Each TDE forms a linguistic computer (ie it uses Chomsky's i-language), whose built-in hierarchical Montague (dependency-based) semantics is made explicit in the form of the GOLEM diagram, a portrayal that is data-equivalent to the TDE**.

*A different, but closely related question is the opposite - if we can demonstrate a pattern of functionalism in a particular platform, does this mean the platform is a computer?

**What is emulated in the TDE is the OPERATIONAL aspect of cognition- the moment-to-moment dynamics of an individual mind immersed in thought and implementing behaviours according to those thoughts. What has not been addressed thus far is the ORGANISATIONAL aspect, which includes adaptation (learning), a process which is intimately dependent upon sleep (see the page called 'Sleep'). As an example of this abstract idea, imagine a 'classical' feedforward artificial neural network (ANN) with all of its weights already (magically) adjusted to their correct levels for a given training data set, with no learning algorithm specified- eg we don't know if backprop (or any other viable method) was used. Perhaps the weight vectors were repeatedly randomised until an overall minimum error criterion was achieved (unlikely, but theoretically possible).
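
To make that last scenario concrete, here is a minimal Python sketch (the network shape, data set, search range and error threshold are all illustrative assumptions, not part of TDE theory)- the weights are set by blind randomisation alone, so the finished network has an operational description but no organisational (learning) story-

import math
import random
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]   # XOR, purely illustrative
def forward(w, x):
    # 2 inputs -> 2 hidden units -> 1 output, logistic activations
    h0 = 1 / (1 + math.exp(-(w[0]*x[0] + w[1]*x[1] + w[2])))
    h1 = 1 / (1 + math.exp(-(w[3]*x[0] + w[4]*x[1] + w[5])))
    return 1 / (1 + math.exp(-(w[6]*h0 + w[7]*h1 + w[8])))
def total_error(w):
    return sum((forward(w, x) - y) ** 2 for x, y in data)
best_w, best_err = None, float('inf')
for _ in range(200000):                        # pure random search- no learning rule at all
    w = [random.uniform(-10, 10) for _ in range(9)]
    err = total_error(w)
    if err < best_err:
        best_w, best_err = w, err
    if best_err < 0.01:                        # overall minimum error criterion
        break
print(best_err)    # an OPERATIONAL network exists, with no ORGANISATIONAL story given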

The fundamental unit of all computation is the provision and localisation of 'state', which may be implemented as accessible regions of stable, reproducible physical values in any one of a number of viable media, in which there is a clear contrast between the variant and invariant parts. Usually, but not necessarily, these 'states' are grouped together to encode meaningful symbolic representations, such as numbers, letters, words, etc.
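
As a small illustration of grouping states into symbols (a hedged sketch- the bit pattern and its ASCII interpretation are arbitrary choices, not claims about any particular medium)-

bits = [0, 1, 0, 0, 0, 0, 0, 1]                  # eight localised binary 'states'
code = sum(b << (7 - i) for i, b in enumerate(bits))
print(chr(code))                                 # the grouped states encode the letter 'A'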

According to TGT, human and animal brains encode state as neural loops which implement cybernetic (ie 'homeostatic') mechanisms. This is not merely a 'syntactic' computer, but a 'semantic' one, because the computational units (the loops) have implicit meaning due to their hierarchical arrangement. That is, the architectonics of this hierarchy reflect the global role of context in the interpretation of the symbols represented by the stored states.
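
A minimal Python sketch of this loop idea, under the (caricature) assumption that a neural loop can be modelled as a negative-feedback homeostat whose setpoint is the stored 'state'-

def homeostat(setpoint, value, gain=0.5, steps=20):
    # the loop's setpoint is the stored 'state'; feedback keeps it stable
    for _ in range(steps):
        error = setpoint - value        # cybernetic comparison
        value += gain * error           # corrective action closes the loop
    return value
print(homeostat(setpoint=1.0, value=0.0))   # ~1.0- the loop 'remembers' its state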

Any simulation which claims to have 'resolved' the unknowns in the study of cognition must not only exhibit general principles of competence, it must also be a buildable exemplar- it must satisfy the Principle Of Specificity* (POS), ie exhibit architectonic familiarity (eg produce similar artefacts, like EEGs) and functionalism (ie produce functionally opaque sub-behaviours, eg those expected by TDE theory). These are to some extent contradictory constraints, but such tension is essential. The TDE could be implemented on a digital computer, as long as the required substrate behaviors are provided- this is what functionalism means.

All of the static, operational** aspects of intelligence, and indeed most of self and self-consciousness, would emerge and be observable. However, the intelligent agents created would in all likelihood be similar to one of the two halves of a split-brain (corpus callosotomy) patient. They would have language skills and some general knowledge, and possess their own working memories, but have little or no episodic (autobiographical) memory. This can be seen in callosotomy (split-brain***) patients, or when a WADA procedure is used to temporarily anesthetize one hemisphere.

There are many computational theories which cover the sort of neural networks found in animal and human nervous systems. There is, however, only one theory (TDE/GOLEM) which describes both the (a) electrophysiological and (b) biocomputational reality. Every 'serious' researcher (ie someone who in all conscience believes they have discovered one or more parts of the puzzle) must, implicitly or explicitly, promote one such 'reality' theory. This principle (the principle of specificity/POS) sounds rather self-evident, but should be kept front-of-mind whenever comparatively evaluating competing theories.

This explanation AVOIDS the topical debris field depicted in 'GO figure 13' above. What that diagram attempts to depict is not information at all, but emotion- specifically, my frustration at the way everyone in the cognitive science field just blithely follows the leader down the black hole of ignorance, without really thinking about it at all. Is academic advancement, and profligate pats on the back from the brain-dead, the boldfaced and the botoxed, sufficient compensation for spreading the manure of ignorance over the field? Your department lawyer will doubtless step in at this stage and politely request that you answer 'no comment'.

Briefly, here is how it SHOULD be done****. Science is a form of reverse engineering, in which the investigator proceeds RECURSIVELY from the outside in- a form of top-down analysis in all but name. You must FIRST adequately describe the system's behavior, THEN use your generic knowledge of the relevant scientific sub-domain, plus some math and information science, to jump-start the creative process of generating appropriate hypotheses (ie satisficing mechanisms), followed by the discriminative process of rejecting those one finds wanting.

*The POS relies on the same idea as Newell's Unified Theories of Cognition (UTC).

**In general, systems have two complementary, and to a large extent orthogonal, facets- (a) operation/process/function and (b) organisation/structure/form. Think of a large organisation with many employees. Each employee must satisfy the ORGANISATIONAL requirements of their ROLE (eg you are a department manager) as well as the OPERATIONAL requirements of their ROTE (eg you have certain fixed and repeating tasks, such as disciplining employees, calculating attendance, negotiating remuneration, etc).

***When patients with life-threatening double-hemisphere epileptic seizures undergo callosotomy, they can go back to their normal lives, with the following caveat- because naming objects and recognizing them are two separate skills normally merged into one ('apperception'), the patients must learn how to re-integrate them.

****The 'poster child' for this kind of endeavor is the discovery of gravity. First, Brahe generated large amounts of unprecedentedly accurate observational data (naked-eye data- he died in 1601, before the telescope reached astronomy). Then he died suddenly. Not wishing to help Kepler (Brahe's assistant) discover the 'music of the celestial spheres', Brahe's estate (his heirs, mainly) gave Kepler what they thought was the 'bad' (ie the least 'circular') data. Whoops! Kepler found the elliptical shape of the planetary orbits in a fraction of the time because of this inadvertent and invaluable filtration of the data. Newton used the elliptical geometry as input to his level of the historical recursive discovery process, resulting in his classic inverse-square formula, the correct mechanism to explain the data. Einstein in turn focused upon the errors in Newton's model- that is, the differences between its theoretical predictions and the increasingly accurate observations made possible by better telescopes and better analytical formulae and numerical methods (as in Newton's Method- doh!)


Synaptic efficacy - a wrong theory

TQT proposes changing the currently assumed learning model of the cerebellum. Specifically, TQT claims that the cerebellar learning model of Ito and colleagues, involving Long Term Depression (LTD), is wrong. Ito proposed that the concurrent activation of climbing fibres and parallel fibres causes synaptic efficacy to decrease on a semi-permanent, ie persistent, basis- hence the use of the term 'depression'.
Kawato, a student of Ito's, presents the following overview: "Motor commands generated by this (sic) network induce movements, which are measured by sensory organs. Subtracting this measured state feedback to the brain from the desired state gives the sensory error, which is further transformed into the appropriate coordinates for Purkinje cell outputs. The inferior olive neurons send this transformed error back to Purkinje cells via the climbing fibers".
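
For concreteness, here is a deliberately minimal Python sketch of the loop Kawato describes (the very model the following paragraphs reject)- all signals are scalars and the coordinate 'transform' is reduced to an identity, purely illustrative simplifications-

def ltd_step(desired, measured, efficacy, rate=0.1):
    sensory_error = desired - measured      # measured feedback subtracted from desired state
    climbing_fibre = sensory_error          # 'transformed' error (identity transform here)
    # LTD claim: concurrent climbing-fibre and parallel-fibre activity
    # persistently DEPRESSES parallel-fibre -> Purkinje synaptic efficacy
    return max(efficacy - rate * abs(climbing_fibre), 0.0)
e = 1.0
for measured in [0.2, 0.5, 0.7, 0.9]:       # an illustrative movement trajectory
    e = ltd_step(1.0, measured, e)
print(e)    # efficacy has been 'depressed' on a persistent basis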

Kawato's description seems perfectly reasonable, but it is also wrong, because it is based on the LTD model. What if the observed synaptic changes are artefacts of a flawed experimental goal? When the experimental goals themselves are wrong, the resulting errors are different from those that occur when the experimental methodology is flawed- they are harder to 'see', and therefore all the more 'insidious'.

If the experimenter expects to see variations in synaptic efficacy, then that is probably what they will report. There is no such thing as a 'synaptic efficacy meter'- a gauge whose probe delivers such a measurement directly. Rather, experimenters use microelectrodes (much thinner than a human hair) to measure electrical properties at the cellular level, such as tiny currents and voltages. These values are then 'plugged in' to the best available formula to yield a number which is the best available estimate of synaptic efficacy. Typically, the following quantity is calculated thus-


conductivity = output current / Σ input currents . . . Eq.1

Strictly speaking, 'efficacy' is a term from the discipline of psychology, one of a series of parameters describing an individual's mental/emotional performance. If it is to mean anything in electrophysiology (ie the discipline associated with EEG graphs and direct neural measurements), that meaning will in all likelihood involve another formula, one that has as its prime inputs the conductivities calculated using Eq.1. Formally...


Synaptic efficacy ∝ {some optimised combination of synaptic conductivities} . . . Eq.2


In other words, in this class of model, the key assumption is that a neuron learns by varying its synaptic efficacy, which means that it (somehow?) alters its membrane permittivity in response to the learning environment, in such a manner as to make the post-synaptic neuron less likely to fire than before the change. Furthermore, there must be a reliable function linking the learning-situation parameters to the changes in membrane permittivity, one that is not only predictable but accurate (ie whose error levels have upper bounds). There are many critics of the 'synaptic efficacy' model of neural 'plasticity', a word that connotes systematic (ie meaningful[1]) changes with the property of hysteresis, or irreversibility (ie changes that persist[2] over time). However, the number of critics is dwarfed by the number of scientists who blithely assume the model is true, and/or otherwise use it automatically.
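
The estimation chain just described can be summarised in a short Python sketch. Eq.1 is taken as written; Eq.2's 'some optimised combination' is stood in for by a simple weighted sum- an illustrative assumption only, since the text deliberately leaves that combination unspecified-

def conductivity(output_current, input_currents):
    # Eq.1: conductivity = output current / sum of input currents
    return output_current / sum(input_currents)
def efficacy_estimate(conductivities, weights):
    # Eq.2 (placeholder form): a weighted combination of conductivities
    return sum(w * g for w, g in zip(weights, conductivities))
g1 = conductivity(0.8, [0.5, 0.4, 0.3])     # illustrative currents, not real data
g2 = conductivity(0.6, [0.7, 0.2])
print(efficacy_estimate([g1, g2], [0.5, 0.5]))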


Pyramidal circuits = crossbar switches

According to John Hughlings Jackson's original conception of brain function (proposed in the 1870s), and consistent with its rediscovery in 2011 as the TDE (www.tde-r.webs.com), the forebrain/cerebrum constructs spatial trajectories in semantic space in a feedforward (predictive/offline/model) manner. These trajectories are then executed under feedback governance (reactive/online/real-time) by the hindbrain/(cerebellum + basal ganglia)*. Figure 14 attempts to depict this situation, both in terms of the internal data structures (hierarchies) created by the complementary pair of crossbar circuits, and in terms of the equivalent anatomic morphology (posture sequence).

Following the work of Cajal and others, these two overarching (top-down, organism-level) functions are controlled by a pair of crossbar switches. Each 'crossbar' uses pyramidal or Purkinje/parallel-fibre cell circuits which simultaneously (synchronously) coordinate many thousands of drive-state (cybernetic) variables that would otherwise behave in an individually asynchronous manner. Computationally, this process is not dissimilar to real-time center-of-mass calculations over particle fields (eg flocks of birds), with the caveat that because the system has evolved from cybernetic roots, it is inherently declaratively programmed. Only end-states are specified, with the sub-trajectories (those linking current-states to end-states) left unspecified. Gubernatorially speaking, this amounts to a form of just-in-time late binding of sub-state variables.
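
The center-of-mass analogy, and the 'end-states only' style of control, can be sketched in Python as follows (the particle count, goal value and relaxation rule are illustrative assumptions)-

import random
particles = [random.uniform(-1.0, 1.0) for _ in range(10000)]   # drive-state variables
def center_of_mass(ps):
    return sum(ps) / len(ps)            # the synchronising collective variable
goal = 0.5                              # declarative: only the END-STATE is specified
for _ in range(50):
    correction = goal - center_of_mass(particles)
    # each variable moves by a random fraction of the shared correction-
    # individually asynchronous, collectively coordinated (late binding)
    particles = [p + correction * random.random() for p in particles]
print(center_of_mass(particles))        # ~0.5; no sub-trajectory was ever specified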

*In terms of TDE theory, the forebrain functions like an organismic T-lobe, expressing Spatial STATE, trapped in a single external (spatially homogeneous) slice of time, while the hindbrain functions like an organismic F-lobe, executing Temporal RATE, trapped within a single internal (temporally homogeneous) chunk-of-space.

Computers and brains perform off-line modelling and on-line governance using identical mechanisms 

The 1977 paper of Schneider & Shiffrin* is much-loved by GOFAI theorists, because it proves that mental processes are identical in form and function to the multi-tasking behaviour of the GOFPC (good old fashioned programmable computer). There is no mystery behind this finding- rather, it demonstrates that both mind and computer cannot avoid being virtual versions of physical mechanisms: that when we execute computer programs or plan our own actions, we perform imaginary copies of real-world actions, like using our hands to pick up and manipulate objects, or using our legs to walk between locations. To believe otherwise is to cast doubt upon Sir William of Occam, who is as close to a secular saint as such a contradiction would admit.

*Schneider, W. & Shiffrin, R.M. (1977). Controlled and automatic human information processing: I. Detection, search, and attention. Psych. Rev. 84; 1-66.
