Stevan Harnad (2001) Title to Come. Psycoloquy: 12(063) Symbolism Connectionism (30)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).

Reply to Searle on Harnad on Symbolism-Connectionism

Stevan Harnad
Department of Electronics and Computer Science
University of Southampton
Highfield, Southampton
SO17 1BJ
United Kingdom


The logical consequence of the Chinese Room Argument (Searle 2001) is that syntax (computation, symbol-manipulation) by itself is not sufficient to cause/constitute semantics (conscious understanding of the symbol meaning by the system itself). But syntax inside a robot is no longer syntax by itself. Sensorimotor-transducers-plus-syntax constitute a larger candidate system. The brain is of course sufficient to cause/constitute semantics, but it is not clear which of the brain's many properties are necessary or relevant for its power to cause/constitute semantics. The brain's power to pass the Total Turing Test (TTT) is likely to be necessary (though not necessarily sufficient) to cause/constitute semantics. Hence any system with the power to pass the TTT provides empirical evidence as to what properties are likely to be necessary and relevant to cause/constitute semantics (but the TTT is still no guarantor). Sensorimotor transduction and/or connectionist networks may or may not be sufficient either to pass the TTT or to cause/constitute semantics, but they are certainly sufficient to escape the logical consequences of the Chinese Room Argument.

    REPRINT OF: Harnad, S. (1993) Harnad's response to Searle. Think 2:
    12-78 (Special Issue on "Connectionism versus Symbolism", D.M.W.
    Powers & P.A. Flach, eds.).


	    "What does the genuine Chinese speaker have that I in the
	    Chinese room do not have?... [I] am manipulating a bunch
	    of symbols, but the Chinese speaker has more than...
	    symbols, he knows what they mean."

What one has that the other doesn't have is conscious understanding. This is the gist of Searle's (1980) Chinese Room Argument. A computer program can pass the Turing (1964) Test in Chinese. We infer that the computer executing the program (System I) must be understanding Chinese. Searle executes the very same computer program (System II), but without understanding Chinese. We know that the physical details of the systems that implement a program are irrelevant to the power of computation. The only thing that is relevant is that all implementing systems must be executing the same program (i.e., manipulating the symbols according to the same symbol-manipulation rules), and Systems I and II are. Yet Searle (System II) is not understanding. Then, by the same token, neither is the computer (System I). Syntax (rule-based symbol-manipulation) is not sufficient to cause/constitute semantics (conscious understanding of the meaning of the symbols).

	    "Why can't I in the Chinese room also have a semantics?
	    Because all I have is a program and a bunch of symbols, and
	    programs are defined syntactically in terms of the
	    manipulation of symbols. The Chinese room shows what we
	    knew all along: syntax by itself is not sufficient for
	    semantics."

Correct. But what we may have known all along (although we needed Searle's thought-experiment to remind us) is that mindless symbol manipulation does not cause or constitute mindful understanding of the meaning of the symbols. Because of the "other-minds" problem (Harnad XXXX), we can never be sure whether any system other than ourselves has a mind or feels anything, be it pain or meaning. The only way to be sure is to be the other system, which is something we normally cannot do; we can only be the system we are. But in the special case of computation, which is defined as implementation-independent symbol manipulation, Searle reminds us that we can be the other system too, and check for ourselves. A computer program, "software," can be physically implemented on countless physical "hardwares," but all the implementing systems are equivalent, computationally (or syntactically) speaking, as long as they are implementing the same program. The hardware details and differences are irrelevant.

This is the central fact about computation that Searle exploits in his Chinese Room Argument. For by becoming himself one of the implementations of the computer program, he is in a position to say what it is that any and every one of the other implementations do or do not have in mind -- if they have it purely in virtue of implementing that computer program. (Obviously some systems might have other properties, properties of their hardware or of other larger systems of which they are merely a part, which Searle cannot implement, because they are not implementation-independent properties. In that case, all bets are off, Searle's "Periscope" (Harnad XXXX) doesn't work, and we are back to the old other-minds problem. Searle's Periscope can only see through purely computational properties; in other words, it only works if Searle can be the whole system that is causing/constituting the understanding of Chinese. If Chinese-understanding is caused/constituted by a computer executing a program plus a geiger-counter, Searle's argument cannot show that such a "system" does not understand, any more than it can show that a rock does not understand. Searle can be another implementation of the computer program, but not of the computer-plus-geiger-counter.)

So, yes, the emphasis is on the "by itself" in the logical conclusion that "the Chinese room shows that syntax by itself is not sufficient for semantics." It is only if and when Searle can do it all -- be the whole system -- that his (true) introspective report that he does not understand Chinese successfully penetrates the other-minds barrier. When, in contrast, he is merely part of the system, rather than the whole system, he can draw no logical conclusion at all. It is neither an argument nor the logical conclusion of one, for example, to say that "it seems to me about as unlikely that this gymnasium full of (American) boys constitutes a Chinese-understanding system as that that rock does." It does indeed seem unlikely, but then it seems just as perplexing and unlikely that a lump of flesh should constitute an understanding system...

	    "confusing epistemology with ontology...  confusing `How do
	    we know?' with `What it is that we know when we know?'...
	    is enshrined in the Turing Test (TT) (and, as we shall see,
	    it is also enshrined in Harnad's Total Turing Test (TTT))."

One cannot speak for others (though I doubt that Turing mixed these things up either), but speaking for myself: I have always been quite fastidious about the difference, not only between the ontic and the epistemic, but between the sufficient and the necessary, the necessary and the probable, the decidable and the undecidable, and what either the Turing Test or the Chinese Room Argument do or do not, can or cannot, show:

Having the capacity to pass the TT or TTT is certainly neither the same as, nor proof of, having conscious understanding. Only Cartesian introspection, performed in its own "1st-person" case, is a guarantor of any system's being conscious. So being able to pass the TT or TTT is not a sufficient condition for consciousness, as far as anyone knows; nor is it a necessary condition (as many mentally and physically disabled but perfectly conscious people would be happy to attest, if they could).

Being able to pass the TT or TTT, however, is a condition that ordinarily makes it highly probable that the passer is conscious: The only ones who can do it so far are people, like the rest of us. Having a normal human brain seems to be a sufficient condition for both consciousness and TT/TTT power, so far. Is it a necessary condition? Who knows?

So, is conscious understanding probable in an artificial system that can pass the TT? Yes. Even if the system is just a computer, and passing only in virtue of executing the right program? We would have thought so at first, until Searle's argument. His argument shows that it is highly improbable. (For it to be otherwise, merely manipulating meaningless symbols would have to be enough to cause a person to have a second, conscious, Chinese-understanding mind, one into which the first mind had no introspective access.) Why improbable? Because Searle can be the whole system without understanding Chinese. But not impossible.

So it is possible to have a mind without being able to pass the TT, possible not to have a mind while being able to pass the TT, and possible to have a mind purely because one is executing the right computer program, despite Searle's argument. So neither Turing's Test nor Searle's Argument is a proof.

But Searle's Argument does show why it is very unlikely that any implementation of a TT-passing computer program (if such a program is possible at all) would understand. What makes it so unlikely? Searle's (and our) Cartesian certainty that he doesn't understand Chinese (under those conditions). Could we have drawn the same conclusion from our Cartesian certainty of not understanding if we were only PART of the system, rather than all of it, as in the case of a TTT-passing robot? The answer is, No. One might still feel that it is just as improbable that a robot could have conscious understanding as that a rock could, and one might be right. But that is just subjective probability. It is not an argument.

Can one be certain that people, real people with real brains (other than oneself), understand? or that rocks do not? No. But the probability is so overwhelming that there is no point considering otherwise -- in the case of real people with real brains (and the case of rocks). We are not considering those cases here, however, but hypothetical cases of artificial TT- and TTT-passers. Is the absence of a biological brain, but the presence of TT- or TTT-capacity, sufficient epistemic grounds for drawing any ontic conclusions? In the case of the TT passed by the implementation of an implementation-independent symbol system, yes: Searle's argument implies that there will be no conscious understanding under these conditions. In the case of the TT passed by non-implementation-independent means, or the case of the TTT, no; Searle's argument does not apply. The usual epistemic barrier posed by the other-minds problem prevails in such cases.

	    "[A]ny system which actually had cognition would have to
	    have internal causal powers equivalent to those of the
	    brain. These causal powers might be achieved in some other
	    medium... but in real life we know that the biology of
	    cognition is likely to be as biochemically limited as, say,
	    the biology of digestion. [A]n artificial brain... must
	    duplicate -- and not merely simulate or model -- the causal
	    powers of the real brain... [S]yntax is not enough to do
	    the job."

	    Harnad's robot... TTT... argument looks like a variant of
	    the robot reply that I answered in my original target
	    article in BBS (Searle, 1980).

	    10. Harnad argues that a robot that could pass TTT in
	    virtue of sensory and motor transducers would not merely be
	    interpretable as having appropriate mental states but would
	    actually have such mental states.  

	    imagine a really big robot whose brain consists of a...
	    computer... in the robot's cranium. Now replace the
	    ... computer with me.

	    The robot has all of the sensory and motor transducers it
	    needs to coordinate its input with its output. And I,
	    Searle, in the Chinese room am doing the coordinating, but
	    I know nothing of this. 

	    the robot's transducers... convert optical stimuli into
	    Chinese symbols... I operate on these symbols... and send
	    symbols to transducers that cause motor [output... the
	    robot says in] Chinese, "I just saw a big fat Buddha"...
	    I didn't see anything, and neither did the robot.

	    Harnad thinks... that... [u]nless [the transducers] are
	    part of me... I am "not implementing the whole system, only
	    part of it." Suppose that I am totally blind because of
	    damage to my visual cortex, but my photoreceptor cells work
	    perfectly as transducers. Then let the robot use my
	    photoreceptors as transducers...  What difference does it
	    make? None at all as far as getting the causal powers of
	    the brain to produce vision. I would still be blindly
	    producing the input output functions of vision without
	    seeing anything.

	    Will Harnad insist in the face of this point that I am
	    still not implementing the whole system? If he does then
	    the thesis threatens to become trivial. 

	    Syntax is not enough to guarantee mental content, and
	    syntax that is the output of transducers is still just
	    syntax. The transducers don't add anything to the syntax
	    which would in any way duplicate the quite specific causal
	    powers of the brain to produce such mental phenomena as
	    conscious visual experiences.

	    I think the deep difference between Harnad and me comes
	    when he says that in cases where I am not implementing the
	    whole system, then, "as in the Chinese gym, the System
	    Reply would be correct." But he does not tell us how it
	    could possibly be correct... But the decisive objection to
	    the System Reply is one I made in 1980: If I in the Chinese
	    Room don't have any way to get from the syntax to the
	    semantics then neither does the whole room; and this is
	    because the room hasn't got any additional way of
	    duplicating the specific causal powers of the Chinese brain
	    that I do not have. And what goes for the room goes for
	    the...

	    to justify the System Reply one would have to show: (1) how
	    the system gets from the syntax to the semantics [and] (2)
	    how the system has the specific internal causal powers of
	    the brain.

	    the mistake of the TTT is exactly the same as the mistake
	    of the TT: it... confuse[s] epistemology with ontology.
	    Just as a system can pass the Turing Test without
	    [understanding], so a system can pass the Total Turing Test
	    and still not have [understanding]. Behavior plus syntax is
	    not constitutive of [understanding], and for the same
	    reason transduction plus syntax is not constitutive of
	    [understanding].... where the ontology -- as
	    opposed to the epistemology -- of the mind is concerned,
	    behavior is irrelevant.

	    Will connectionism solve our problems? ...  If you build a
	    net [with] some electrochemical features of physical
	    architectures, then it becomes an empirical question...
	    whether [these] duplicate and not merely simulate actual
	    causal powers of actual human brains.

	    For example... [will] parallel [and/or] distributed
	    [processes] ... duplicate the causal powers of actual
	    human neuronal systems? As a claim in neurobiology the idea
	    seems quite out of the question, as you can see if you
	    imagine the same net implemented in the Chinese Gym.
	    Unlike the human brain, there is nothing in the gym that
	    could either constitute or cause cognition.

	    Harnad, by the way, misses the point of the Chinese
	    gym. He thinks it is supposed to answer the systems reply.
	    But that is not the point at all.

	    If we define the nets in terms of physical features of
	    their architecture then we have left the realm of
	    computation and are now doing speculative neurobiology.
	    Existing nets are nowhere near to having the causally
	    relevant neurobiological properties.


Chamberlain, S.C. & Barlow, R.B. (1982) Retinotopic organization of lateral eye input to Limulus brain. Journal of Neurophysiology 48: 505-520.

Harnad, S. (1992) Connecting Object to Symbol in Modeling Cognition. In: Clarke, A. & Lutz, R. (eds.) Connectionism in Context. Springer Verlag.

Harnad, S. (1994) Does the Mind Piggy-Back on Robotic and Symbolic Capacity? To appear in: Morowitz, H. (ed.) The Mind, the Brain, and Complex Adaptive Systems.

Harnad, S. (2001) Grounding symbols in the analog world with neural nets -- A hybrid model. PSYCOLOQUY 12(034)

Jeannerod, M. (1994) The representing brain: neural correlates of motor intention and imagery. Behavioral and Brain Sciences 17(2).

Searle, J.R. (1980) Minds, brains and programs. Behavioral and Brain Sciences 3: 417-424.

Searle, J.R. (1990) Is the brain's mind a computer program? Scientific American 262: 26-31.

Searle, J. R. (2001) The Failures of Computationalism. PSYCOLOQUY 12(060)
