David M. W. Powers (2001) A Grounding of Definition. Psycoloquy: 12(056) Symbolism Connectionism (23)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).

A GROUNDING OF DEFINITION
Commentary on Harnad on Symbolism-Connectionism

David M. W. Powers
Informatique, TELECOM PARIS
46, rue Barrault
75634 Paris cedex 13, FRANCE

powers@inf.enst.fr

Abstract

The symbol grounding thesis rightly highlights that ungrounded semantics, based on dictionary-like definitions of symbols in terms of symbols, is inherently circular. Nonetheless the debate has attracted some irrelevant distractions. In particular, it is argued that it is irrelevant whether a system is connectionist or symbolic, as either can simulate the other - both are equivalent to a Turing Machine. Furthermore, the distinction of digital vs analog is a red herring given that in practice both are limited in resolution - again one can be simulated in terms of the other. Similarly, the question of parallel versus serial hardware does not affect the power of the machine - we can distinguish concurrency implemented using timesharing from true parallelism, but the two implementations are Turing equivalent. In all cases, there is a cost in simulating one kind of system in terms of another, but this affects only efficiency, not efficacy.

    REPRINT OF: Powers, D.M.W. (1993) A grounding of definition. Think
    2:  12-78 (Special Issue on "Connectionism versus Symbolism",
    D.M.W.  Powers & P.A. Flach, eds.).
    http://cwis.kub.nl/~fdl/research/ti/docs/think/2-1/index.stm

I. INTRODUCTION

1. It is ironic that the Symbol Grounding, Connectionist/Symbolist and Chinese Room/Artificial Intelligence debates hang so much on definitions -- Symbol Grounding points out that definitional systems (the sense in which symbolic systems are used in that context) are severely limited: they are inherently circular. Similarly, Cognitive Linguistics points out that objectivity is prejudiced by the metaphor-like mechanisms which underlie both syntax and semantics, these being the mechanisms which are responsible for grounding (Lakoff, 1987; Johnson, 1992; Powers, 1992b).

2. In Cognitive Science, people with different backgrounds have definitional systems grounded via different metaphors. This is perhaps most obvious in relation to discussion of analog systems and signals, but it also applies to definitions of computation or mind. There is also the problem that some bandy about terms like 'Turing equivalence' and '(Universal) Turing Machine' with only a very vague (and often incorrect) notion of what lies behind them (often without even realizing that they have nothing to do with the Turing Test). Even Harnad's paragraph 20 is unfortunately expressed so as to gloss over the distinction between the programmed behaviours being identical and the capabilities of the underlying machine (substrate) being the same, irrespective of program.

3. So I would like to address some of these definitional problems here, picking up on Harnad's discussion in Sections I and II.

II. COMPUTATION AND INTERPRETATION

4. Harnad first addresses the `hypothesis' that cognition is computation, which forms one side of the 'computationalism vs. connectionism' chasm. A major problem is that people from the computationalist camp don't see it as a hypothesis, but as self-evident and axiomatic. The question is, for them, not `if' or `whether', but `how' and `when' computation will succeed in achieving the various forms of cognitive behaviour: given the Turing equivalence of computational and connectionist models (Harnad section II), the debate is simply ridiculous and reflects a lack of understanding of the nature of computation on the part of those who see the chasm. What real alternative is there to computation as a model of cognition?

5. Harnad clarifies what is meant by computation as follows:

    "manipulation of physical `symbol tokens' on the basis of syntactic
    rules that operate only on the `shapes' of the symbols (which are
    arbitrary ...)" (para. 2);

    "implementation independent: whatever properties or capabilities a
    system has purely in virtue of its computational properties will be
    shared by every implementation ..." (para. 3);

    "the symbols and symbol manipulations in a symbol system are
    systematically interpretable... they can be assigned a semantics
    ..." (para. 4).

This is an important foundation for the discussion, although expressed in non-computationalist terms. Moreover, there are riders: "operate only on the shapes", "purely in virtue of its computational properties", "can be assigned a semantics". These provide possible loopholes which lead us into controversy. First, does the substance or the nature of a neural net add capability beyond its computational properties? Two possible answers emerge in Harnad's article: parallelism and analog processing (para. 6.3 and para. 20). Second, can nets operate on anything other than the shapes which the representation permits? Harnad's suggestion is that the mechanisms of interaction with the outside world, the transduction, may somehow be qualitatively different for the nets (see e.g. paras. 23, 25 and 30.5). Finally, can a systematic interpretation, a semantics, be totally internal, in terms of relationships between one representation or language and another, all represented, in the last analysis, with the same symbols (e.g. bits)? Symbol Grounding says no (by definition interpretation implies representing relationships across different systems), so we're back to the transduction question!

III. IMPLEMENTATION AND EFFICIENCY

6. These questions take us into the area of implementation. Harnad's answers to them are probably not, probably and no. Mine would be no, no and no! Note that the final agreement on NO still allows room for some differences: can a system be indirectly grounded, through a user or programmer, by copying a brain state, or by isolating the brain from its sensory-motor environment? Once there is no sensory-motor connection, the question becomes academic -- it then aptly fits Harnad's comparison with a stone: there is no discernible difference between mind and stone when there is no communication.

7. I wonder whether the differences on these questions relate to what we mean by `capability' or `can'. These hide theoretical and pragmatic questions which I would like to elucidate by distinguishing between efficacy and efficiency. Parallel and analog systems may be faster, and thus able to achieve something digital computers may not, simply because a million neurons working in parallel may be able to achieve more than a single CPU working a million times faster with operations a thousand times less complex or less relevant. Such a computer can simulate the neural net, at a cost of a thousand operations times a million neurons per step. A couple of billion microseconds is of the order of an hour; a couple of thousand milliseconds is just a couple of seconds: what the net does in seconds costs the serial simulation an hour. But in terms of achieving the required result, they are equivalent, and if we could build a serial computer a billion times faster, we could achieve the same result in real time.
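
The arithmetic can be made concrete in a few lines of code. The neuron and operation counts below are the illustrative figures from the paragraph above; the assumption that one serial operation takes a microsecond is mine, added only to make the units come out.

    # Back-of-the-envelope cost of serially simulating a parallel neural net.
    # Counts are the figures from the text; the one-microsecond operation
    # time is an added assumption.
    NEURONS = 1_000_000        # neurons updating in parallel
    OPS_PER_NEURON = 1_000     # serial operations to simulate one neuron
    US_PER_OP = 1.0            # assumed: one microsecond per operation

    # Serial cost of one full parallel sweep of the net:
    sweep_us = NEURONS * OPS_PER_NEURON * US_PER_OP      # 1e9 microseconds
    print(f"one sweep: {sweep_us / 60e6:.0f} minutes of serial time")

    # A couple of sweeps: a couple of billion microseconds, of the order of
    # an hour, versus a couple of thousand milliseconds of real neural time.
    print(f"two sweeps: {2 * sweep_us / 3.6e9:.2f} hours")

    # A serial machine a billion times faster is back well within the
    # couple-of-seconds budget, i.e. real time.
    print(f"sped up 1e9 times: {2 * sweep_us / 1e9:.3f} microseconds")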

IV. TRANSDUCTION

8. Considering transducers, neural networks provide a model which takes us right through to retina or cochlea, effector or sensor, which are implemented using the same cellular stuff. But silicon technology also extends right through to the sensors - even the retina makes use of special pigments, and the cochlea special hairs. No sensors are just ordinary neurons though, as there is no such thing as an ordinary neuron. There are many different sorts, finitely many, but many more than are reflected in any connectionist implementation I know of. Given the right symbolic computer program, Harnad's TTT robot could be built totally from mechanical/chemical/electronic components within our current technology, apart from questions of efficiency (in which I include all the tradeoffs of speed, size, resolution and complexity). Anything human sensors measure, from smell to temperature, can be measured even more precisely with current technology - but digital or analog thermometers, not to mention electron microscopes, gas chromatographs and the like, don't yet fit in the space occupied by the average neuron.

9. In both the neural and electronic cases, precision is lost at each level of transduction or processing. The intercorrelations in the physical vision processes may occur more remotely from the transduction in robot vision, or in more limited fashion, but there is no reason why they cannot be emulated adequately to achieve comparable behaviour, forgetting about practicalities of size and speed (viz. efficiency).

V. MOTIVATION

10. Harnad also raises (para. 5) the question of whether researchers' goals, directed towards either Artificial Intelligence or Cognitive Modelling, will make a difference to our perspective on these issues. They certainly do, but they don't change the issues. If, for the purposes of Cognitive Modelling, the focus is on developing systems which are accurate at some deeper level than the surface behaviour, then certain possible AI systems will automatically be excluded. Some phenomenon of this sort certainly does appear to be present in this type of debate, when systems exhibiting identical behaviour (e.g. Searle himself, versus a Chinese speaker in a room emulating him) are judged to have 'different computational properties'. The `charitable assumption' by which we judge others to be like ourselves, innocent until proven guilty, would let us attribute a mind to a TTT robot, given that we couldn't, and wouldn't have any basis to, think otherwise -- irrespective of what Searle in the driver's seat thinks about the matter -- irrespective of whether it was designed by a connectionist or a computationalist -- irrespective of whether it was designed as an exercise in Machine Intelligence or Cognitive Science.

VI. DISTRIBUTION AND CONCURRENCY

11. Parallel Distributed Processing has come to be synonymous with Connectionism, but Parallel Processing and Distributed Processing go back well beyond the coining of either term. All vision systems, whether symbolically motivated or connectionist, must examine relationships between the pixels in a region in order to extract features. They perform systolic algorithms to extract outlines. They look for relationships between different regions to track movement or make comparisons. They are increasingly being implemented on parallel systems, for which one processor per pixel is the ideal being approached. Weather simulation is another such area, as are wind-tunnel and aerodynamic simulations.
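
Why vision decomposes so naturally onto one processor per pixel can be seen in a few lines. The toy edge detector below is my own minimal sketch, not any particular system from the literature: each output pixel depends only on a small neighbourhood of input pixels, so all of them could in principle be computed at once.

    # Minimal edge detector: every output pixel is an independent function
    # of a small input neighbourhood, hence trivially parallelizable.
    image = [
        [0, 0, 0, 9, 9, 9],
        [0, 0, 0, 9, 9, 9],
        [0, 0, 0, 9, 9, 9],
    ]

    def edge_strength(img, r, c):
        """Sum of absolute horizontal and vertical differences at (r, c)."""
        rows, cols = len(img), len(img[0])
        dx = img[r][min(c + 1, cols - 1)] - img[r][max(c - 1, 0)]
        dy = img[min(r + 1, rows - 1)][c] - img[max(r - 1, 0)][c]
        return abs(dx) + abs(dy)

    # The loops below are serial, but no call depends on any other: a
    # timeshared simulation and a true one-processor-per-pixel machine
    # compute exactly the same edge map.
    edges = [[edge_strength(image, r, c) for c in range(len(image[0]))]
             for r in range(len(image))]
    for row in edges:
        print(row)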

12. Statistical techniques are also proving surprisingly effective in relation to Natural Language, Speech Recognition and Machine Translation (see Powers, 1992c). These techniques seek out correlations in a way which is very similar to the correlating effect in neural nets. A lattice of possible choices for individual words may admit a multitude of possible parses, which are evaluated in parallel.
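
As a concrete (and entirely hypothetical) miniature of such a lattice, consider the classic speech-recognition ambiguity below; the probabilities are invented for illustration. The essential point is that every path through the lattice can be scored independently of every other, which is what makes parallel evaluation natural.

    from itertools import product

    # Toy word lattice: alternative hypotheses per position, with invented
    # probabilities. Real lattices would come from acoustic/language models.
    lattice = [
        [("recognize", 0.6), ("wreck a nice", 0.4)],
        [("speech", 0.7), ("beach", 0.3)],
    ]

    def score(path):
        """Joint probability of one path, assuming independent choices."""
        p = 1.0
        for _, prob in path:
            p *= prob
        return p

    # Each path is scored independently, so all could be evaluated at once;
    # serially we simply enumerate them.
    paths = list(product(*lattice))
    best = max(paths, key=score)
    print(" ".join(word for word, _ in best), score(best))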

13. Harnad mentions parallelism, REAL parallelism, as one aspect of connectionism proposed to explain the expectation that connectionism will succeed at cognition where symbolism must fail.

14. I would rather point the finger at the distributional aspects, and at the logical parallelism which I will call concurrency, since it is of no account whether it is implemented with real or simulated parallelism. The real neural networks which do our computation for us implement distributed concurrent processing with real parallelism. But concurrency and distributed processing are being investigated in the context of symbolic processing too. I have been working with Concurrent Logic Programming Systems (one based on a connection graph theorem prover) and have implemented both connectionist and conventional programs for machine learning of natural language in this context (Powers, 1988; 1989).

15. If there are multiple interactive tasks to perform which are naturally concurrent, that is, which necessarily overlap in time, then parallelism, real or simulated, is absolutely necessary to meet the specifications. Of course such parallel simulation capability is built into every timesharing operating system or environment (like UNIX, or Windows, or X-Windows). Moreover it is becoming a standard part of conventional languages (e.g. it was designed into the languages SIMULA and ADA, and is possible in most modern PROLOGs). Furthermore, there is virtually no computing environment today that doesn't allow or require peripheral processing to occur in parallel with central processing: that is, the peripherals interrupt the current process with a priority dictated by their speed and the urgency with which they need to be serviced (e.g. even PCs and Macintoshes have this sort of concurrency as standard).
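
A minimal sketch of simulated parallelism, in Python rather than the SIMULA, ADA or PROLOG mentioned above: two logically concurrent tasks are interleaved on a single processor by the scheduler, exactly as a timesharing operating system would interleave them, and the same work gets done as if each task had its own processor.

    import threading
    import time

    def task(name, steps):
        """A trivially interruptible task: the sleep yields the processor,
        letting the scheduler interleave it with the other task."""
        for i in range(steps):
            print(f"{name}: step {i}")
            time.sleep(0.01)

    # Two naturally concurrent tasks. On one CPU they are timeshared; on
    # real parallel hardware they could run simultaneously. Either way the
    # result is the same -- only the efficiency differs.
    t1 = threading.Thread(target=task, args=("sensor", 3))
    t2 = threading.Thread(target=task, args=("motor", 3))
    t1.start(); t2.start()
    t1.join(); t2.join()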

VII. COMBINATION AND CORRELATION

16. It is true to say that processing of distributed relationships is necessary for cognition, as it is for vision. Lots of separate pieces of information need to be combined or related. Symbolic systems tend to put the emphasis on combination, and connectionist systems on correlation, but this is not a hard and fast rule.

17. In practice, my experience is that neural correlations do admit a symbolic analysis, in terms of which particular neurons or synapses can be identified as labelling particular patterns or implementing particular rules (Powers, 1989). Moreover, an information theoretic analysis, and a consideration of the cognitive mechanisms available, does suggest that this should be expected (Powers, 1992b; 1992c).
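
What such a symbolic reading of a trained net can look like is easily illustrated. The threshold unit below is a minimal hypothetical example of my own, not Powers' actual analysis: its weights are the kind of values learning might converge to, and reading them off identifies the unit as implementing a particular rule.

    # A single trained threshold unit whose weights (hypothetical values)
    # happen to implement logical AND.
    WEIGHTS = [0.6, 0.6]
    THRESHOLD = 1.0

    def neuron(inputs):
        """Fires iff the weighted input sum reaches the threshold."""
        return sum(w * x for w, x in zip(WEIGHTS, inputs)) >= THRESHOLD

    # Symbolic analysis: with these weights the unit can only fire when both
    # inputs are on, i.e. it labels the pattern "A AND B".
    for a in (0, 1):
        for b in (0, 1):
            assert neuron([a, b]) == bool(a and b)
    print("unit implements: A AND B")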

VIII. CHALLENGE

18. Any neural net implementation of an arbitrary cognitive capacity can be matched at least as efficiently by a non-connectionist implementation on the same sequential hardware. My experience here is that the connectionist systems are easier and faster to program, and indeed give shorter `programs', but that the symbolic versions are easier to tune, faster to execute, and indeed require less memory.

19. I believe the neural implementation can't beat the conventional because (as they say in chess circles, 'assuming best play from black'):

    1. Neural nets are based on a fixed set of higher level
    `subroutines';

    2. Given the generic neural network runs on the hardware, we can
    optimize our application specifically in ways which will hide the
    neural network origin;

    3. We can analyze the behaviour of the net and obtain the
    correlations more directly, with efficient indexing, etc.;

    4. The neural network would provide distributional compositional
    properties and expose symbolic substructure;

    5. I can't lose the bet unless neural network X is an optimal
    solution to the problem, and even then I could reframe it in terms
    of another model.

In other words, once we are down to efficiency as the only grounds for distinguishing computational and connectionist models, neural networks are just one particular computational model, and while this model is being simulated on conventional computers, the race just isn't on!

IX. MINDS AND ANALOGS

20. Finally I want to come back to the definition implicit in this debate. Turing asked the question `Can a computer think?' Searle and Harnad have changed it to `Can a computer have a mind?' Note the different nuances. After 50 years of computers we are far more inclined to apply the word `think' to them, whether by virtue of metaphor, concession or charitable assumption. Mind, soul and spirit have been nebulous, even nefarious, words for centuries. The phrase `electronic brain' for computers has been and gone.

21. But there is more to the change of wording than this. Mind focuses on consciousness in a far more direct way than thought. Turing addressed thought in terms of whether the computer could hold its own with people at a particular sort of problem solving, and was indeed far ahead of his time in recognizing that it was the `simple' aspects of intelligence, like language, that were going to be more difficult than the `advanced' intelligence reflected in, for example, chess playing. Searle changes the question to whether the computer is conscious of itself, and maps this down to an assumed homunculus.

22. The shift towards making a digital-analog distinction also hides some misconceptions and changes in definition. In Turing's day, we had analog computers which `reasoned' in ways which contrasted with digital computers in two respects.

    1. Digital systems counted in some number system, and represented
    things with a finite number of symbols (the base of that number
    system, two as a rule today). Analog systems manipulated
    `continuous' functions (but were still faced by limits on
    resolution).

    2. Analog systems got their name because they worked by analogy,
    that is to say lengths or levels implemented in one way
    (electrical, hydraulic, etc.) were used to represent variables of a
    totally different sort (e.g. cannonshot weights and ranges).
    Digital systems represented values numerically, and couldn't
    compete in speed against analog computers well matched to the
    problem (but the discussion has also lost sight of the fact that
    analog systems were used as simulations of other systems).

23. So what role does Harnad assign the term analog in paragraph 6.3? He is referring to the continuous nature of the input and output of neurons. But there is no such thing as continuity at the quantum level. Neural processes are mediated by exchange of ions and neurotransmitter molecules, and deviate from the continuous ideal at an even higher level. Continuous signals tend to drift; references or states formed by complex interaction functions are necessary for stability. Even chaotic behaviour, as experienced in unstable systems, can be simulated digitally (e.g. mathematically).
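
Both halves of this point fit in a few lines of code. The sketch below is my own illustration, not from Harnad or the original article: the first half shows a `continuous' level quantized to any chosen finite resolution; the second shows chaotic, sensitive-to-initial-conditions behaviour computed by a purely digital update rule (the logistic map at r = 4, a standard chaotic system).

    # Illustration 1: finite resolution. A nominally continuous value can be
    # quantized at any bit depth; analog hardware is likewise limited in
    # resolution by noise and drift.
    def quantize(x, bits):
        """Round x in [0, 1) to the nearest of 2**bits representable levels."""
        levels = 2 ** bits
        return round(x * levels) / levels

    x = 0.7314159265
    for bits in (4, 8, 16):
        q = quantize(x, bits)
        print(f"{bits:2d} bits: {q:.10f} (error {abs(x - q):.2e})")

    # Illustration 2: chaos simulated digitally. Two nearly identical states
    # diverge to an order-one difference under a deterministic digital map.
    r = 4.0
    x1, x2 = 0.2, 0.2000001
    for _ in range(25):
        x1 = r * x1 * (1 - x1)
        x2 = r * x2 * (1 - x2)
    print(f"after 25 steps: |x1 - x2| = {abs(x1 - x2):.4f}")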

24. My conclusion is that this is a total red herring. In any case, present neural net implementations are overwhelmingly digital.

X. ANALOGY AND METAPHOR

25. However, I do believe that analogy plays an important role in the significance of connectionist networks for cognitive modelling. The particular range of grammatical and semantic structures we have reflects many structural similarities which result from the commonality of mechanisms. The fact that we can use a word in many different contexts, ranging from the most concrete to the most abstract, but still mean the same thing, is a reflection of the similarity of the relevant representations. For example, consider `in' in `in the room', `in an hour', `in my mind' (see Lakoff, 1987).

26. Note that the TTT can directly test comprehension of a word like `in', but the TT can only test it indirectly. But this advantage disappears rapidly as we move from the concrete to the abstract domains of application. Of course the TT is a subset of the TTT (which was proposed by Harnad as a generalization of the TT), and as a special case can be passed by any system capable of passing the TTT. From the point of view of symbol grounding, the point is really that TTT capabilities are necessary to pass the TT. Of course, such a system, when disconnected from all its sensory-motor periphery and allowed just teletype communication, is no longer a TTT-capable system. Similarly, grounding in a virtual reality system (like MAGRATHEA (Powers, 1989)) can theoretically produce a TTT-capable system. Given a virtual reality that is accurate enough, it should be possible to unplug the system from the virtual reality and connect it up to real reality and have it pass the TTT, or disconnect it entirely and have it pass the TT, or have it pass some sort of Virtual TT (VTT) in which it is pitted against a human in the same virtual reality.

27. Again, technically, a system can be grounded by including in the system a Searle who in this case simulates not the CPU but the PPU, the Peripheral Processing Unit. This Searle translates between his sensory-motor experience and some representation languages understood by the program. This is the mode in which Natural Language researchers have traditionally worked. While it is theoretically possible, it is practically impossible, not only because of the sheer information load on the Searle (or the team of programmers/Searles), but because of the dynamic nature of our environment. The traditional programming approach is not adaptive, and hence incapable of producing systems capable of passing either the TTT or the TT, which would allow, for example, my teaching Chinese to a system that passes the TT in English!

28. The virtual reality approach is also, in practice, only a bootstrapping convenience, because it is easier to program certain laws of reality than it is to provide by hand scenario after scenario. Thus while a VTT-passing robot should also be capable of passing the TTT and TT (given the appropriate replugging), it will in practice eventually come unstuck somewhere along the line simply because the virtual reality simulation isn't accurate enough (but theoretically there is no reason why it couldn't be -- the trivial observation that it has to be implemented on a computer of finite size located in real reality is irrelevant, because the experience of the human opponents is also gained in a subset of real reality which they succeed in modelling adequately).

XI. CONCLUSION

29. For these reasons, I do expect that successful TT-passing systems will have TTT or VTT capabilities (and moreover their learning capabilities will not be limited to just language). Furthermore they will have representations which have a high degree of correspondence with those which are responsible in the neural circuitry of our brains. But whether the first such systems are labelled connectionist or not is quite another question. What is certain is that they will be adaptive and capable of learning both language and ontology.

REFERENCES

Harnad, S. (2001) Grounding symbols in the analog world with neural nets -- A hybrid model. PSYCOLOQUY 12(034) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.034

Johnson, M. (1992) Philosophical implications of cognitive semantics. Cognitive Linguistics 3(4). Mouton.

Lakoff, G. (1987) Women, Fire and Dangerous Things: What Categories Reveal About the Mind. University of Chicago Press.

Powers, D. M. W. and Turk, C. C. R. (1989) Machine Learning of Natural Language. Springer.

Powers, D. M. W. (1991) How far can self-organization go? Results in unsupervised language learning. In: D. Powers and L. Reeker (eds.), Machine Learning of Natural Language and Ontology, Proc. AAAI Spring Symposium, DFKI Document D91-09, DFKI Kaiserslautern FRG.

Powers, D. M. W. (1992) A Basis for Compact Distributional Extraction. THINK 1(2). ITK, University of Tilburg.

Powers, D. M. W. (1992a) Multi-Modal Modelling with Multi-Module Mechanisms: Autonomy in a Computational Model of Language Learning. ITK Research Report 33, University of Tilburg.

Powers, D. M. W. (1992b) Parallel and Efficient Implementation of the Compartmentalized Connection Graph Proof Procedure: Resolution to Unification. In: B. Fronhofer and G. Wrightson (eds.), Parallelization in Inference Systems, Proc. Workshop on Massively Parallel Inference Systems 1990, LNAI 590, Springer.

Powers, D. M. W. (1992c) On the significance of closed classes and boundary conditions: Experiments in lexical and syntactic learning. In: W. Daelemans and D. Powers (eds.), Background and Experiments in Machine Learning of Natural Language, ITK Proceedings 92/1, University of Tilburg.

