Stevan Harnad (2001) Title to Come. Psycoloquy: 12(061) Symbolism Connectionism (28)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).
Psycoloquy 12(061): Title to Come

TITLE TO COME
Reply to Searle on Harnad on Symbolism-Connectionism

Stevan Harnad
Department of Electronics and Computer Science
University of Southampton
Highfield, Southampton
SO17 1BJ
United Kingdom
http://www.cogsci.soton.ac.uk/~harnad/

harnad@cogsci.soton.ac.uk

Abstract

Abstract to come

1. Agreement is boring. New ideas arise from challenges to current ones. Necessity is the mother of invention. So it is with relief that I see that Searle and I have plenty to disagree about:

I. SYNTAX VS. SEMANTICS

2. No, it wasn't obvious at all that a dynamic implementation of a purely syntactic system could not generate semantics. In fact, nothing is obvious in the area of semantics. Grandma 'knew' all along that computers couldn't be thinking, Searle can't imagine how a gymful of boys or a robot could be thinking, and I can't imagine how a lump of neurons could (and Father O'Grady, with a benign smile, knows they couldn't, if they hadn't had some help). Nothing faintly obvious here. So computationalism was worth a go -- until some of us started thinking about it (in no small measure thanks to Searle 1980), and the rest is still history in the making. But the force of Searle's argument certainly is not that it's obvious that there's no way to get from syntax to semantics; surely there's a bit more to it than that. Besides, I'll wager that, as obvious as it is that there's no way to get semantics from syntax, it'll be just as obvious that you can't get it from any other candidate that's clearly enough in focus for you to give it a thorough looking-over. That's what's called the mind/body problem.
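
To make the contrast concrete, here is a minimal illustrative sketch, in Python, of what 'pure syntax' amounts to. It is my own toy example, nobody's actual proposal; the token names and rules are arbitrary stand-ins. The point is that the system applies rules defined over token shapes alone, with nothing inside it fixing what, if anything, the tokens are about.

    # Toy illustration only: a 'purely syntactic' system. The rewrite
    # rules apply to token shapes alone; any semantics (cats, mats,
    # numbers) would have to be assigned from outside, by an interpreter.

    RULES = {
        ("SQUIGGLE", "SQUOGGLE"): ("BLIP",),   # arbitrary shape-to-shape rules
        ("BLIP", "SQUIGGLE"): ("SQUOGGLE",),
    }

    def rewrite(tokens):
        """Apply the first matching rule to the leftmost matching pair."""
        for i in range(len(tokens) - 1):
            pair = (tokens[i], tokens[i + 1])
            if pair in RULES:
                return tokens[:i] + list(RULES[pair]) + tokens[i + 2:]
        return tokens  # no rule applies: halt

    if __name__ == "__main__":
        tape = ["SQUIGGLE", "SQUOGGLE", "SQUIGGLE"]
        print(rewrite(rewrite(tape)))  # runs flawlessly, yet is 'about' nothing

The same shapes and rules could be interpreted, with equal justice, as being about arithmetic, or the weather, or nothing at all; whatever semantics the system has is assigned from outside rather than possessed by the system itself.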

II. MIND AND MEANING

3. Let me state something pretty plainly, something that Searle (e.g. 1990) has only been making tentative gestures toward (although the relevance and credibility of his testimony in the Chinese room -- to the effect that he does not understand Chinese -- rely on it completely): There's only one kind of meaning: the kind that my thoughts have when I think something, say, 'the cat is on the mat,' and in thinking it, I have something in mind. Put even more bluntly, there's something it's like to think that the cat is on the mat, and the kind of thing that that is like is the essential feature of thinking, and meaning. Take time to mull it over. I'm saying that only minds have meaningful states, and that their meaningfulness is derived entirely from their subjective quality. That's intrinsic semantics. Everything else, if it's interpretable as meaning anything at all, is just extrinsic semantics, derived intentionality, or what have you -- as in the pages of a book, the output of a computerized dictionary or the portent of a celestial configuration. If this is true, it is bad news for 'unconscious thoughts,' worse news for 'unconscious minds,' and even worse news for systems in which there is nobody home at all (as opposed to just someone sleeping) if such systems nevertheless aspire to have intrinsic semantics. Having extrinsic semantics just means being 'interpretable as if it were meaningful' by a system that has intrinsic semantics.

III. GROUNDING AND MEANING

4. Is there a third possibility? Can something be more than just 'interpretable as if it meant X' but less than a thought in a conscious mind? This is the mind/modeller's counterpart of the continuum hypothesis or P=NP, but it is both empirically and logically undecidable. The internal states of a grounded TTT system are not just formally interpretable as if they meant what they mean; the system itself acts in full accordance with the interpretation. Causal interaction with the objects that the symbols are interpretable as being about is not just syntax any more; syntax is just the formal relations among the symbols. But is that enough to guarantee that the semantics are now intrinsic? Or is grounded semantics just a 'stronger' form of extrinsic semantics?
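
By way of contrast, here is an equally toy sketch of what grounding adds. It is my own illustration, not the hybrid model itself, and the sensor, feature names and thresholds are made up for the example: the tokening of the elementary symbol is causally driven by a categorizer operating on analog projections of sensory input, rather than being merely interpretable from the outside.

    # Toy illustration only (not a TTT-scale system): the token "ZEBRA"
    # is not merely interpretable-as-about zebras; its tokening is
    # causally constrained by (simulated) sensory contact with the world.

    import random

    def transduce(object_kind):
        """Stand-in transducer: return a crude analog 'projection' (a
        feature vector) of the distal object. Real grounding would need
        real sensors and TTT-scale performance capacity."""
        if object_kind == "zebra":
            return [1.0 + random.gauss(0, 0.1), 1.0 + random.gauss(0, 0.1)]
        return [random.gauss(0, 0.1), random.gauss(0, 0.1)]

    def categorize(analog_projection):
        """Toy category detector (the kind of thing a neural net might
        learn): token the elementary symbol when the projection falls in
        the learned region of feature space."""
        stripes, horse_shape = analog_projection
        return "ZEBRA" if stripes > 0.5 and horse_shape > 0.5 else "NOT-ZEBRA"

    if __name__ == "__main__":
        for thing in ("zebra", "mat"):
            print(thing, "->", categorize(transduce(thing)))

The sketch exhibits only the causal connection between symbol and referent; whether that connection is enough to make the semantics intrinsic, rather than just a more strongly constrained form of extrinsic semantics, is exactly the question left open above.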

IV. ONTOLOGY AND EPISTEMOLOGY

5. I don't make the ontic/epistemic confusions Searle thinks I make (I am kept too much on my toes pointing them out in others!). I am fully aware that not only the TT, but the TTT and even the TTTT are incapable of guaranteeing the presence of mind, and hence intrinsic meaning. But I'm also aware that the tests in the T-hierarchy are not just a series of behavioristic digressions from the correct empirical path; they are the empirical path; in fact, the TTTT exhausts the empirical possibilities (Harnad 1992; 1994). Searle himself is an advocate of the TTTT. He can't imagine settling for less. Yet he admits that we only want relevant TTTT powers. How are we to know which ones those are?
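
For readers joining the exchange at this point, the hierarchy can be glossed schematically as nested indistinguishability criteria; the sketch below is only a summary of the terminology as I use it (the data structure and function names are mine, for exposition), not any new machinery.

    # Schematic gloss of the T-hierarchy as nested empirical constraints.

    T_HIERARCHY = {
        "TT":   ["verbal (pen-pal) performance"],
        "TTT":  ["verbal (pen-pal) performance",
                 "robotic sensorimotor performance in the world"],
        "TTTT": ["verbal (pen-pal) performance",
                 "robotic sensorimotor performance in the world",
                 "internal (neurobiological) structure and function"],
    }

    def empirical_constraints(level):
        """What a candidate must be indistinguishable from us in, at a
        given level of the hierarchy."""
        return T_HIERARCHY[level]

    if __name__ == "__main__":
        for level in ("TT", "TTT", "TTTT"):
            print(level, "->", "; ".join(empirical_constraints(level)))

Each level subsumes the ones below it, which is why the TTTT exhausts the empirical possibilities: there is nothing observable left over for a candidate to be indistinguishable in.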

6. Let us admit that we're doing reverse engineering rather than 'basic science' and hope that the constraint of finding out what is needed to make a system that can do everything the brain can do will allow us to pick out its relevant powers. No guarantees, of course, but worrying too much about the outcome is tantamount to believing that (1) TTT-indistinguishable Zombies could have made it in the world just as successfully as we could, but we just don't happen to be Zombies (but then how could evolution tell the difference, favoring us, since it's not a mind-reader either?) and that (2) the degrees of freedom for successfully building TTT-scale systems are large enough to admit radically different solutions, some Zombies and some not. I think it is more likely that the TTT is just the right relevance filter for the TTTT. Otherwise we're stuck with modelling a lot of what might be irrelevant TTTT properties.

V. IS THE OTHER-MINDS PROBLEM IRRELEVANT?

7. As a form of skepticism -- worrying because we can't be sure other people have minds -- the other-minds problem is not particularly useful. But it is unavoidable when it comes to empirical work on other organisms, artificial mind-modelling or the brain itself. The question comes up naturally: How are we to ascertain whether or not this system has a mind? There are no guarantees, but there are some 'dead end' signs (like the Chinese Room Argument), and, one hopes, some positive guides too, such as the TTT and groundedness. By Searle's lights, there is only one: the TTTT.

VI. IS TRANSDUCTION UNMOTIVATED 'SPECULATIVE NEUROPHYSIOLOGY'?

8. I think there is plenty of evidence that a large portion of the nervous system is devoted to sensory and motor transduction and their multiple internal analog projections (e.g. Chamberlain & Barlow, 1982; Jeannerod, 1994). Transduction is also motivated a priori by the logical requirements of a TTT robot, the real/virtual robot/world distinction, and immunity to the Chinese Room Argument. Besides, it's no kind of neurophysiology if one's empirical constraint is the TTT rather than the TTTT, as mine is.

9. A few loose ends:

    1. Contrary to Searle's suggestion, there is (of course) a causal
    connection between the hardware of a machine and the software it is
    executing; it's just that those physical details are not relevant
    to the computation, and the causal connection is the wrong kind if
    a mind was what one was hoping to implement. (I think this is the
    same conclusion Searle wanted to draw.)

    2. The cocaine example is a red herring, because nets are not being
    proposed as models for pharmacological function but for
    physiological function. But the gym example continues to be just a
    caricature rather than an argument.

    3. My hybrid grounding program is not committed to computationalism
    (I would be content to see most of the cognitive groundwork done
    nonsymbolically), but I do think the internal substrate of language
    will turn out to have something symbolic about it. Besides, the
    Chinese Room Argument and the Symbol Grounding Problem show only
    that cognition can't all be just computation, not that cognition
    can't be computation at all. On the other hand, it's not clear
    whether a grounded symbol system, with its second layer of analog
    constraints, is still really much of a symbol system, in the formal
    syntactic sense, at all.

REFERENCES

Chamberlain, S.C. & Barlow, R.B. (1982) Retinotopic organization of lateral eye input to Limulus brain. Journal of Neurophysiology 48: 505-520.

Harnad, S. (1992) Connecting Object to Symbol in Modeling Cognition. In: Clarke, A. and Lutz, R. (eds.) Connectionism in Context. Springer Verlag. http://cogprints.soton.ac.uk/documents/disk0/00/00/15/83/index.html

Harnad, S. (1994) Does the Mind Piggy-Back on Robotic and Symbolic Capacity? To appear in: Morowitz, H. (ed.) The Mind, the Brain, and Complex Adaptive Systems. http://cogprints.soton.ac.uk/documents/disk0/00/00/15/94/index.html

Harnad, S. (2001) Grounding symbols in the analog world with neural nets -- A hybrid model. PSYCOLOQUY 12(034) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.034

Jeannerod, M. (1994) The representing brain: neural correlates of motor intention and imagery. Behavioral and Brain Sciences 17(2). http://www.bbsonline.org/documents/a/00/00/05/35/index.html

Searle, J.R. (1980) Minds, brains and programs. Behavioral and Brain Sciences 3: 417-424. http://www.bbsonline.org/documents/a/00/00/04/84/index.html

Searle, J.R. (1990) Is the brain's mind a computer program? Scientific American 262: 26-31. http://cogsci.soton.ac.uk/~harnad/Papers/Py104/searle.comp.html

Searle, J. R. (2001) The Failures of Computationalism. PSYCOLOQUY 12(060) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.060

