Stevan Harnad (2001) Symbol Grounding is an End: Analog Processes are a Means. Psycoloquy: 12(053) Symbolism Connectionism (20)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).

SYMBOL GROUNDING IS AN END: ANALOG PROCESSES ARE A MEANS
Reply to MacLennan on Harnad on Symbolism-Connectionism

Stevan Harnad
Department of Electronics and Computer Science
University of Southampton
Highfield, Southampton
SO17 1BJ
United Kingdom
http://www.cogsci.soton.ac.uk/~harnad/

harnad@cogsci.soton.ac.uk

Abstract

Analog computation can also be ungrounded (i.e., interpretable as meaning something, but not intrinsically meaning anything). Any object or process can be "ungrounded" in that sense. But the symbol grounding problem afflicts symbol systems, and only those symbol systems that aspire to implement thinking. The classical theory of computation pertains to discrete symbol systems. There is as yet no theory of "continuous symbols." Language (and the putative language of thought) is a discrete symbol system.

    REPRINT OF: Harnad, S. (1993) Harnad's response to MacLennan.
    Think 2: 12-78 (Special Issue on "Connectionism versus Symbolism"
    D.M.W. Powers & P.A. Flach, eds.).
    http://cwis.kub.nl/~fdl/research/ti/docs/think/2-1/index.stm

1. MacLennan (2001) suggests that analog computers also have symbols and symbol grounding problems. What I'm not altogether sure of is what he means by "continuous meaning assignments." I know what discrete symbols (like 'chair' or '3') and their corresponding meanings are. But what are continuous symbols and meanings? Or is it 'meaning assigned to a continuum of values of a physical variable,' as in interpreting the height of the mercury as proportional to the real temperature? The thermometer case is instructive: where there is a true isomorphism between an internal continuum and an external one, it is much easier to put the two into causal connection, and then of course the internal 'symbol' is 'grounded.'
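
The contrast can be made concrete with a small sketch (the code below is purely illustrative; the expansion coefficient and the symbol table are invented, not drawn from MacLennan's commentary). In Python:

    # Analog 'symbol': the internal continuum (column height) is an
    # isomorph of the external continuum (temperature), coupled to it
    # by physics rather than by interpretation.
    EXPANSION_COEFF = 0.018  # hypothetical scale: mm of mercury per degree C

    def mercury_height_mm(ambient_temp_c: float) -> float:
        return EXPANSION_COEFF * ambient_temp_c

    # Discrete symbol: the token 'warm' means something only because an
    # external interpreter assigns it a meaning.
    SYMBOL_TABLE = {"warm": "temperature above 20 degrees C"}

    print(mercury_height_mm(25.0))  # tracks the world causally: 0.45
    print(SYMBOL_TABLE["warm"])     # meaning mediated by interpretation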

2. But I take MacLennan's meaning that the interpretations of analog computers' states are just as ungrounded (interpretation-mediated) in their ordinary uses as those of digital computers. I imagine that an analog computer would be ungrounded even if it could pass the TT (despite being, like PAR, immune to the Chinese Room Argument), but to show this one would have to individuate its symbols, for without those there is no subject for the grounding problem! And if Fodor & Pylyshyn (1988) are right, then under those conditions the analog computer would probably have to be implementing a systematic, compositional, language-of-thought-style discrete symbol system (in which case its analog properties would be irrelevant implementational details) and we would be back where we started. In any case, the TTT would continue to be the decisive test, and for this the analog computer (because of its ready isomorphism with sensorimotor transducer activity) may have an edge in certain respects.
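
What 'systematic, compositional' amounts to for a discrete symbol system can be illustrated with a minimal sketch (the tokens and the grammar below are invented for illustration): the combinatorial rule that lets the system form one composition thereby lets it form the permutations as well.

    from itertools import permutations

    # Atomic discrete symbols (invented for illustration).
    NAMES = ["john", "mary"]
    RELATION = "loves"

    def compose(rel: str, a: str, b: str) -> str:
        # Combinatorial syntax: well-formed strings built by rule from
        # atomic tokens, independently of any interpretation.
        return f"{rel}({a}, {b})"

    # Systematicity: the capacity to compose loves(john, mary) entails
    # the capacity to compose loves(mary, john) by the same rule.
    for a, b in permutations(NAMES, 2):
        print(compose(RELATION, a, b))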

3. MacLennan does take a passing shot at the Chinese Room Argument (with the 'multiple virtual minds' version of the 'system reply') to which I can't resist replying that, once one subtracts the interpretation (i.e., once one steps out of the hermeneutic circle), a symbol system, no matter how many hierarchical layers of interpretation it might be amenable to, has about as much chance of instantiating minds (whether one, two, or three) as a single, double, or triple acrostic, and for roughly the same reasons ('virtual-worlds' enthusiasts would do well to pause and ponder this point for a while).

4. MacLennan also falls into the epistemic/ontic confusion when he writes about how "psychologists and ethologists routinely attribute 'understanding' and other mental states to other organisms on the basis of external tests," or how this psychologist 'defines' them behaviorally or that one does it operationally. The ontic question (of whether or not a system really has a mind) in no way depends on, nor can it be settled by, what we've agreed to attribute to the system; it depends only on whether something's really home in there. That's not answerable by definitions, operational or otherwise (Harnad 2001a,b).

REFERENCES

Fodor, J. A. & Pylyshyn, Z. W. (1988) Connectionism and cognitive architecture: A critical appraisal. Cognition 28: 3-71.

Harnad, S. (2001) Grounding symbols in the analog world with neural nets -- A hybrid model. PSYCOLOQUY 12(034) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.034

Harnad, S. (2001a) Minds, Machines, and Turing: The Indistinguishability of Indistinguishables. Journal of Logic, Language, and Information 9(4): 425-445. (special issue on "Alan Turing and Artificial Intelligence") http://cogprints.soton.ac.uk/documents/disk0/00/00/16/16/index.html

Harnad, S. (2001b) No Easy Way Out. The Sciences 41(2): 36-42. http://cogprints.soton.ac.uk/documents/disk0/00/00/16/24/index.html

MacLennan, B.J. (2001) Grounding analog computers. PSYCOLOQUY 12(052) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.052

