Stevan Harnad (2001) Grounding: Interpreter-independent Causal Connections Between Symbols and Objects, Turing-scale. Psycoloquy: 12(059) Symbolism Connectionism (26)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).

GROUNDING: INTERPRETER-INDEPENDENT CAUSAL CONNECTIONS BETWEEN SYMBOLS AND OBJECTS, TURING-SCALE
Reply to Roitblat on Harnad on Symbolism-Connectionism

Stevan Harnad
Department of Electronics and Computer Science
University of Southampton
Highfield, Southampton
SO17 1BJ
United Kingdom
http://www.cogsci.soton.ac.uk/~harnad/

harnad@cogsci.soton.ac.uk

Abstract

Roitblat (2001) thinks grounding has something to do with "specifying the premises of formal arguments." I think it has to do with causally connecting symbols to what they are about, directly and autonomously, rather than through the mediation of an external interpretation: My candidate is causal grounding via whatever internal resources it takes to make a robot successfully pass the Total Turing Test.

    REPRINT OF: Harnad, S. (1993). Harnad's response to Roitblat. Think
    2: 12-78 (Special Issue on "Connectionism versus Symbolism" D.M.W.
    Powers & P.A. Flach, eds.).
    http://cwis.kub.nl/~fdl/research/ti/docs/think/2-1/index.stm

1. Roitblat's (2001) understanding of all the elements in this discussion differs so radically from my own that space forces me to leave the reader to adjudicate our difference. All I can do is itemize some of our most prominent differences:

    i. For me, obeying differential equations is physics and
    implementing symbol manipulations is computation. They are not the
    same kind of thing, and indeed, the difference between them, the
    hardware/software distinction, is what most of this discussion is
    about. (Nor, as far as I can see, does this have anything to do
    with the psychologists' hoary example [which is often cited to
    illustrate one insight or another, but has not yet led to any
    concrete results] that the dog [somehow] correctly estimates the
    trajectory of the frisbee without doing a conscious calculation.)

    ii. Nor, from the equally venerable platitude that continuous
    signals can be approximated arbitrarily closely by discrete ones,
    does it follow that virtual objects are approximately real: Tighten
    the approximation as close as you like, a simulated waterfall will
    never be wet. Roitblat seems to conflate discrete approximation of
    continuous signals -- a transducer function -- with systematically
    interpretable symbolic representation (see Harnad 1987).
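The "arbitrarily close" approximation invoked in point ii can be made concrete numerically. The following sketch (illustrative only, not part of the original argument; the choice of a sine signal and linear interpolation is an assumption for the example) shows the worst-case error shrinking as the sampling grid is refined. The philosophical point survives the numbers: however tight the approximation, the result is still a list of numbers standing in for the signal, not the signal itself.

```python
import math

def max_error(n):
    # Sample sin(2*pi*x) on [0, 1] at n points and linearly interpolate;
    # measure the worst-case gap against the continuous function,
    # itself probed on a much finer grid.
    xs = [i / (n - 1) for i in range(n)]
    ys = [math.sin(2 * math.pi * x) for x in xs]
    worst = 0.0
    fine = 10000
    for j in range(fine + 1):
        x = j / fine
        # Locate the sample interval containing x.
        k = min(int(x * (n - 1)), n - 2)
        t = x * (n - 1) - k
        approx = ys[k] + t * (ys[k + 1] - ys[k])
        worst = max(worst, abs(math.sin(2 * math.pi * x) - approx))
    return worst

errors = [max_error(n) for n in (8, 32, 128)]
# The discrete approximation tightens as n grows...
assert errors[0] > errors[1] > errors[2]
# ...but no value of n turns the number list into a wet waterfall.
```

The error falls roughly quadratically with the number of samples, which is exactly the sense in which discrete signals approximate continuous ones; nothing in that convergence bears on whether the approximation is the real object.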

    iii. Roitblat seems to think grounding has something to do with
    "specifying the premises of formal arguments." I cannot see this
    at all. It has to do with causally connecting symbols to what they
    are about directly and autonomously, rather than through the
    mediation of an external interpretation: My candidate is causal
    grounding via whatever internal resources it takes to make a robot
    pass the Total Turing Test (TTT).

    iv. Roitblat seems to be using the words 'syntax', 'systematicity',
    and especially 'semantics' very differently from the way I do.
    I think my usage is more standard (e.g., see Fodor & Pylyshyn
    1988), but let the reader judge.

    v. We also differ in our construal of the Chinese Gym and Chinese
    Room Arguments, for which I can only refer the reader to the target
    article and the original sources (Searle 1980; Harnad 1989, 1990)
    for comparison.

REFERENCES

Fodor, J., and Pylyshyn, Z. (1988) 'Connectionism and cognitive architecture: A critical analysis'. Cognition 28: 3--71.

Harnad, S. (1987) (ed.) Categorical Perception: The Groundwork of Cognition. New York: Cambridge University Press. http://cogprints.soton.ac.uk/documents/disk0/00/00/15/71/index.html http://cogprints.soton.ac.uk/documents/disk0/00/00/15/72/index.html

Harnad, S. (1989) 'Minds, Machines and Searle'. Journal of Theoretical and Experimental Artificial Intelligence 1: 5--25. http://cogprints.soton.ac.uk/documents/disk0/00/00/15/73/index.html

Harnad, S. (1990) 'The Symbol Grounding Problem'. Physica D 42: 335--346. http://cogprints.soton.ac.uk/documents/disk0/00/00/06/15/index.html

Harnad, S. (2001) Grounding symbols in the analog world with neural nets -- A hybrid model. PSYCOLOQUY 12(034) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.034

Roitblat, H.L. (2001) Computational grounding. PSYCOLOQUY 12(058) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.058

Searle, J.R. (1980) 'Minds, brains and programs'. Behavioral and Brain Sciences 3: 417--424. http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.searle2.html http://www.bbsonline.org/documents/a/00/00/04/84/index.html

