Stevan Harnad (2001) Implementation-dependent Computation is not Computation. Psycoloquy: 12(047) Symbolism Connectionism (14)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).

IMPLEMENTATION-DEPENDENT COMPUTATION IS NOT COMPUTATION
Reply to Hayes on Harnad on Symbolism-Connectionism

Stevan Harnad
Department of Electronics and Computer Science
University of Southampton
Highfield, Southampton
SO17 1BJ
United Kingdom
http://www.cogsci.soton.ac.uk/~harnad/

harnad@cogsci.soton.ac.uk

Abstract

Hayes (2001) suggests that implementation-independence is not "absolute." This will have to be worked out more precisely, because, on the face of it, it seems to contradict classical (Turing) definitions of computation. For the time being, though, a sensorimotor robot is not just a computer, on any definition of computation, and neither is any transducer of energy, just as a plane is not a computer, nor is a furnace. So just as a plane is no longer flying (hence no longer a plane), and a furnace no longer heating (hence no longer a furnace), if you remove their respective motoric and thermal transducers, a mind is no longer thinking (hence no longer a mind) if you remove its sensorimotor transducers, no matter what computational hardware or software you leave in place.

    REPRINT OF: Harnad, S. (1993). Harnad's response to Hayes. Think 2:
    12-78 (Special Issue on "Connectionism versus Symbolism", D.M.W.
    Powers & P.A. Flach, eds.).
    http://cwis.kub.nl/~fdl/research/ti/docs/think/2-1/index.stm

1. Hayes (2001) raises a number of interesting points. He continues to argue (see Hayes et al. 1992) that Searle is not an implementation of the TT-passing program (Searle 1980), even though:

    1. Searle uses and steps through exactly the same code (and,
    remember, it matters not a bit whether Searle does this at a higher
    software level or at the rock-bottom machine-code level)

    2. Searle's performance is TT-indistinguishable from the computer's
    (till doomsday, in principle).
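
To make the computational point concrete, here is a minimal sketch in Python (the rewrite rules are invented stand-ins, not the actual TT-passing program): the same purely syntactic rule table, stepped through by two differently constituted executors, yields symbol-for-symbol identical behaviour.

    # A toy 'program': purely syntactic lookup rules, Chinese-Room style.
    # These rules are hypothetical stand-ins for the TT-passing program.
    RULES = {"SQUIGGLE": "SQUOGGLE", "SQUOGGLE": "SQUIGGLE"}

    def machine_executor(symbols):
        # The hardware way: blind, mechanical table lookup.
        return [RULES.get(s, s) for s in symbols]

    def searle_executor(symbols):
        # 'Searle's' way: stepping through the very same rules, one by one.
        out = []
        for s in symbols:
            out.append(RULES[s] if s in RULES else s)
        return out

    inp = ["SQUIGGLE", "BLOB", "SQUOGGLE"]
    assert machine_executor(inp) == searle_executor(inp)

Nothing in the input/output behaviour distinguishes the two executors, which is all that points (1) and (2) require.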

2. Maybe there is something about the magic of real implementation (through mindless, mechanical-loom-style pattern matching) such that it is capable of generating a ghost in the machine only when there is not already a ghost in residence, performing the pattern matching! To me, this sounds like bad news for implementation-independence -- and also like a lot of mentalistic special pleading about what counts as an implementation when that should all have been settled in advance, mind-independently, before computation (which, on the face of it, has nothing to do with mentation) ever became a challenger in the mental arena.

3. I agree that 'interpretation,' in the sense of rule-governed physical pattern matching (as in a mechanical loom or a digital computer), is not the same as the conscious interpretation of syntactic symbol-manipulation rules by a person. But it's the execution of the manipulations that we are equating here, not the 'interpretation' in either of these senses. And the sense of 'interpretation' that we are actually aiming for is yet a third one: the sense in which thoughts are meaningful (and ungrounded symbols, undergoing manipulation, no matter by whom or what, are not).
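
The loom-like sense can itself be put in code. Here is a minimal sketch (the symbols and rewrite rules are invented for illustration): the loop below carries out rule-governed pattern matching in the first sense, and nothing in it answers to 'interpretation' in either of the other two senses.

    # Rule-governed physical pattern matching, loom/computer style:
    # shape-based substitutions, applied until no rule fires.
    REWRITES = [("AB", "BA"), ("BB", "B")]  # invented syntactic rules

    def manipulate(tape):
        changed = True
        while changed:
            changed = False
            for lhs, rhs in REWRITES:
                if lhs in tape:
                    tape = tape.replace(lhs, rhs, 1)
                    changed = True
        return tape

    print(manipulate("AABB"))  # prints 'BAA': shapes are tracked, nothing is meant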

4. Never mind. Let us concede that if Hayes can ever give a nonarbitrary criterion for what does and does not count as an implementation of the same software among otherwise Turing indistinguishable, Turing-equivalent and even strongly equivalent 'implementations' ('virtual' ones, shall we call them?) of the same symbol system, then the Chinese Room Argument will have to be reconsidered (but probably so will a lot of the computationalism and functionalism that currently depends on the older, looser criterion).

5. I do have to point out, though, that there is a difference between a computer being connected to peripheral transducers (cameras, say), and the computer's being those transducers (which it is not: a computer certainly consists of transducers too, but not the transducers that would be a robot's sensorimotor surfaces; those are the kinds of transducers I am talking about). This is not just a terminological point. My own grounding hypothesis is that, to a great extent, we are (sensorimotor) transducers (and their analog extensions); our mental states are the activity of sensorimotor transducers (which are part of an overall TTT-capable system). Their activity is an essential component of thinking states. No transducer activity: no thinking state. There is no way to 'reconfigure' an all-purpose computer, one that can implement just about any program you like, into a sensorimotor transducer -- except by adding a sensorimotor transducer to it. That, I take it, is bad news for the hypothesis that thinking is just computation (if my transduction hypothesis is right).
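
A sketch of the difference, assuming a purely hypothetical sensor set-up: the computational stage below is identical whether its input bits arrive from a physical photodiode or from a stored list of numbers, but only in the first case is any light actually being transduced, and no reprogramming of the computational stage can conjure the photodiode into existence.

    # A real 'read_photodiode' would require physical hardware; by contrast
    # 'read_replay' (defined here) merely plays back stored bit patterns.
    def read_replay(samples):
        for s in samples:
            yield s

    def classify(stream, cutoff=128):
        # The computational stage: implementation-independent symbol
        # crunching, indifferent to where its bits came from.
        return ["bright" if s > cutoff else "dark" for s in stream]

    canned = [12, 200, 97, 255]           # simulated readings; no light involved
    print(classify(read_replay(canned)))  # ['dark', 'bright', 'dark', 'bright']

On the transduction hypothesis, it is exactly the step the replay version omits -- the analog traffic with real energy -- that no amount of reprogramming can supply.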

6. Because I'm interested in mind-modelling and not just in machine virtuosity, I have singled out TTT-scale grounding as the empirical goal. One can speak of a digital camera as 'grounded' in a trivial sense: the internal computational states in such a 'dedicated' computer are indeed 'bound' to certain external energy configurations falling on its transducer surface, and not just as a matter of our interpretations. But such trivial grounding does not justify talking about the camera's having 'beliefs'! Only the TTT has the power to match the complexity and to narrow the degrees of freedom for the interpretation of its internal states to something that is commensurate with our own (and I agree with Hayes that the expressive power of natural language, a subset of the TTT, may well loom large in such a system). Otherwise we are indeed talking metaphor (or hermeneutics) rather than reality.

REFERENCES

Harnad, S. (2001) Grounding symbols in the analog world with neural nets -- A hybrid model. PSYCOLOQUY 12(034) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.034

Hayes, P., Harnad, S., Perlis, D. & Block, N. (1992) Virtual Symposium on Virtual Mind. Minds and Machines 2(3): 217-238. http://cogprints.soton.ac.uk/documents/disk0/00/00/15/85/index.html

Hayes, P. (2001) Computers don't follow instructions. PSYCOLOQUY 12(046) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.046

Searle, J. R. (1980) "Minds, brains and programs." Behavioral and Brain Sciences 3: 417-424. http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.searle2.html http://www.bbsonline.org/documents/a/00/00/04/84/index.html

