John R. Searle (2001) The Failures of Computationalism: II. Psycoloquy: 12(062) Symbolism Connectionism (29)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).
Psycoloquy 12(062): The Failures of Computationalism: II

THE FAILURES OF COMPUTATIONALISM: II
Commentary on Harnad on Symbolism-Connectionism

John R. Searle
Department of Philosophy
University of California
Berkeley, CA 94720, USA

searle@cogsci.Berkeley.edu

Abstract

Syntax is not sufficient to cause semantics; the brain is sufficient.

    LONGER VERSION OF: Searle, J. R. (1993) The failures of
    computationalism.  Think 2: 12-78 (Special Issue on "Connectionism
    versus Symbolism" D.M.W. Powers & P.A. Flach, eds.).
    http://cwis.kub.nl/~fdl/research/ti/docs/think/2-1/index.stm

I. THE POWER IN THE CHINESE ROOM

1. Harnad (2001) and I agree that the Chinese Room Argument (Searle 1980) deals a knockout blow to Strong AI, but beyond that point we do not agree on much at all. So let's begin by pondering the implications of the Chinese Room. The Chinese Room shows that a system (me, for example) could pass the Turing Test for understanding Chinese, and could implement any program you like, and still not understand a word of Chinese. Now, why? What does the genuine Chinese speaker have that I in the Chinese Room do not have? The answer is obvious. I, in the Chinese Room, am manipulating a bunch of formal symbols; but the Chinese speaker has more than symbols, he knows what they mean. That is, in addition to the syntax of Chinese, the genuine Chinese speaker has a semantics in the form of meaning, understanding, and mental contents generally.

2. But, once again, why? Why can't I in the Chinese room also have a semantics? Because all I have is a program and a bunch of symbols, and programs are defined syntactically in terms of the manipulation of the symbols. The Chinese room shows what we should have known all along: syntax by itself is not sufficient for semantics. (Does anyone actually deny this point, I mean straight out? Is anyone actually willing to say, straight out, that they think that syntax, in the sense of formal symbols, is really the same as semantic content, in the sense of meanings, thought contents, understanding, etc.?)
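To make vivid what "manipulating a bunch of formal symbols" under a program amounts to, here is a minimal illustrative sketch in Python (my construction, not anything from Harnad or the original article; the rule table and the symbols in it are hypothetical). The program maps uninterpreted input strings to uninterpreted output strings by their shapes alone; nothing in it represents what any symbol means, which is just the point about syntax without semantics.

    # Minimal sketch of purely syntactic symbol manipulation (illustrative only;
    # the rule table is hypothetical). The program matches uninterpreted tokens
    # to uninterpreted tokens; no meaning is represented anywhere.
    RULES = {
        "你好吗": "我很好",          # to the program these are just shapes
        "你叫什么名字": "我叫王",
    }

    def chinese_room(input_symbols: str) -> str:
        """Return whatever output string the rulebook pairs with the input."""
        return RULES.get(input_symbols, "对不起")  # lookup by shape, not by meaning

    print(chinese_room("你好吗"))  # an "answer" is produced without understanding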

3. Why did the old time computationalists make such an obvious mistake? Part of the answer is that they were confusing epistemology with ontology: they were confusing "How do we know?" with "What is it that we know when we know?" This mistake is enshrined in the Turing Test (TT). Indeed this mistake has dogged the history of cognitive science, but it is important to get clear that the essential foundational question for cognitive science is the ontological one: "In what does cognition consist?" and not the epistemological other minds problem: "How do you know of another system that it has cognition?"

4. The feature of the Chinese Room that appeals most to Harnad is that by allowing the experimenter to be the entire system it eliminates any "other minds problem". Since I know that I don't understand Chinese, solely in virtue of implementing the program and passing the TT, I therefore know that those conditions are not by themselves sufficient for cognition, regardless of the fact that other systems that satisfy those conditions may behave just as if they did have cognition.

5. I think this is not the most important feature of the Chinese Room, but so far, there is no real disagreement between Harnad and me. The disagreement comes over what to make of the insight given us by the Chinese Room. Harnad wants to persist in the same line of investigation that the Chinese Room was designed to eliminate and see if he cannot find some version of the computer theory of the mind that is immune to the argument. In short, he wants to keep going with the epistemology and the computation. He wants to get a better version of TT, the TTT. Harnad takes the Other Minds Problem seriously. He takes the Systems Reply seriously, and he wants to answer the Chinese Room Argument by inventing cases where the experimenter can't be the whole system. Because the Chinese Room Argument relies on the multiple realizability of computation, he thinks the way to answer it is to add to computation a feature which is not multiply realizable, transduction. But this move is not based on an independent study of how the brain causes cognition; rather, it is ad hoc and ill motivated, as I will later argue.

6. I, on the contrary, think that we should not try to improve on the TT or get some improved version of computationalism. The TT was hopeless to start with because it confused sameness of external behavior with sameness of internal processes. No one thinks that because an electric engine produces the same power output as a gas engine, the two must have the same internal states. Why should it be any different with brains and computers? The real message to be found in the Chinese Room is that this whole line of investigation is misconceived in principle, and I want now to say why. Because time and space are short, and because my debate with Stevan is a conversation among friends, I will leave out the usual academic hedges and qualifications, and just state my conclusions. (For detailed argument, the reader will have to consult The Rediscovery of the Mind, 1992).

7. (1) The other minds problem is of no special relevance or interest to cognitive science. It is a philosophers' problem exemplifying skepticism in general, but it is not a special problem for cognitive science. So we should not waste our time trying to find tests, such as the TT, the TTT, etc., that will address the other minds problem.

8. Cognitive science starts with the fact that humans have cognition and rocks don't. Cognitive scientists do not need to prove this any more than physicists need to prove that the external world exists, or astronomers need to solve Hume's problem of induction before explaining why the sun rises in the East. It is a mistake to look for some test, such as the TT, the TTT, etc., which will "solve the other minds problem" because there is no such problem internal to Cognitive Science in the first place.

9. (2) Epistemological problems are of rather little interest in the actual practice of science, because they always have the same solution: Use any weapon that comes to hand and stick with any weapon that works. The subject matter of cognitive science concerns human beings and not rocks and computers, and we have a large variety of ways for figuring out what human beings are thinking, feeling, knowing, etc.

10. (3) Where the ontology -- as opposed to the epistemology -- of the mind is concerned, behavior is irrelevant. The Chinese Room Argument demonstrated this point. To think that cognitive science is somehow essentially concerned with intelligent behavior is like thinking that physics is essentially a science of meter readings. (Chomsky's example). So we can forget about the TT and TTT, etc. External behavior is one epistemic device among others. Nothing more.

11. (4) Computation has the same role in cognitive science that it has in any other science. It is a useful device for simulating features of the real domain we are studying. Nothing more. The idea that it might be something more is a mixture of empirical and conceptual confusion. (Again, see Searle 1992 for details.)

12. (5) The real domain we are studying includes real, intrinsic cases of mental states and processes, such as perceiving, thinking, remembering, learning, talking, understanding, etc.; and all of these have mental contents. The problem for cognitive science is not symbol grounding, but symbol meaning and symbol content in general. Cognitive science is concerned with the actual thought contents, semantic contents, experiences, etc., that actual human beings have. Either Harnad's "grounding" is to be taken as synonymous with "content", in which case, why use the notion of grounding? Or it isn't, in which case, it's irrelevant. All of these mental processes - thinking, talking, learning, etc. - are either conscious or potentially so. It is best to think of cognitive science as the science of consciousness in all of its varieties.

13. (6) All cognitive states and processes are caused by lower level neuronal processes in the brain. It is a strict logical consequence of this point that any artificial system capable of causing cognition would have to have relevant causal powers equal to those of the brain. An artificial brain might do the job using some other medium, some non-carbon-based system of molecules for example; but, whatever the medium, it must be able to cause what neurons cause. (Compare: airplanes do not have to have feathers in order to fly, but they do have to duplicate the causal power of birds to overcome the force of gravity in the earth's atmosphere.)

14. So in creating an artificial brain we have two problems: first, anything that does the job has to duplicate and not merely simulate the relevant causal powers of real brains (this is a trivial consequence of the fact that brains do it causally); and second, syntax by itself is not enough to do the job (this we know from the Chinese Room).

15. Because we now know so little about how the brain actually works, it is probably a waste of time at present to try to build an artificial brain that will duplicate the relevant causal powers of real brains. We are almost bound to think that what matters is the behavioral output (such as "robotic capacity") or some other irrelevancy, and I think this is one source of Harnad's TTT.

16. (7) Once you see that external behavior is irrelevant to the ontology of cognition, Harnad's TTT proposal amounts to a piece of speculative neurophysiology. He thinks that if only we had certain kinds of analog transducers, then those plus computation and connectionist nets would equal the causal powers of real brains. But why does Harnad think that? To anyone who knows anything about the brain, the thesis will seem literally incredible. There is nothing per se wrong with speculative neurophysiology, but it needs to have a point. Once you realize that the brain is a very specific kind of biological organ, and that external behavior of the organism is in no way constitutive of the internal cognitive operations, then there seems to be little point to the type of speculative neurophysiology exemplified by the TTT.

17. Harnad is very anxious to insist that TTT is not refuted by the Chinese Room. Maybe so, but who cares? If it is unmotivated and neurobiologically implausible what point does it have? Its only motivation appears to be a kind of extension of the behaviorism that was implicit in the TT. That is, Harnad persists in supposing that somehow or other behavior (robotic capacity) is ontologically and not merely epistemologically relevant.

18. (8) For reasons that are mysterious to me, Harnad takes the Systems Reply seriously. He says that in cases where I am not implementing the whole system, "as in the Chinese gym, the System Reply would be correct." But he does not tell us how it could possibly be correct. According to the Systems Reply, though I in the Chinese Room do not understand Chinese or have visual experiences, the whole system understands Chinese, has visual experiences, etc. But the decisive objection to the Systems Reply is the one I made in 1980: If I in the Chinese Room don't have any way to get from the syntax to the semantics, then neither does the whole room; and this is because the room hasn't got any additional way of duplicating the specific causal powers of the Chinese brain that I do not have. And what goes for the room goes for the robot.

19. In order to justify the Systems Reply one would have to show (a) how the system gets from the syntax to the semantics; and in order to show that, one would have to show (b) how the system has the relevant specific internal causal powers of the brain. Until these two conditions are met, the Systems Reply is just hand-waving.

20. I believe the only plausibility of the Systems Reply comes from a mistaken analogy. It is a familiar point that a system made of elements may have features caused by the behavior of the elements that are not features of the individual elements. Thus the behavior of the H2O molecules causes the system composed of those molecules to be in a liquid state even though no individual molecule is liquid. More to the point, the behavior of neurons can cause a system made of those neurons to be conscious even though no individual neuron is conscious. So why can't it be the same with the computational system? The analogy breaks down at a crucial point. In the other cases the behavior of the elements causes a higher level feature of the system. But the thesis of Strong AI is not that the program elements cause some higher level feature; rather, the right program that passes the TT is supposed to constitute cognition. It does not cause it as a byproduct. Indeed the symbols in the implemented program don't have any causal powers in addition to those of the implementing medium. The failure of the analogy between computational systems and other systems which really have emergent properties comes from the fact that genuine emergent properties require causal relations between the lower level elements and the higher level emergent property, and these causal relations are precisely what is lacking, by definition, in the computational models.

II. CONNECTIONISM TO THE RESCUE?

21. Since connectionism looms large in Harnad's account, and since it has received a great deal of attention lately, I will devote a separate section to it.

22. How we should assess connectionism depends on which features of which nets are under discussion and which claims are being made. If the claim is that we can simulate, though not duplicate, some interesting properties of brains on connectionist nets, then there could be no Chinese Room style objections. Such a claim would be a connectionist version of weak AI. But what about a connectionist Strong AI? Can you build a net that actually has, and does not merely simulate, cognition?

23. This is not the place for a full discussion, but briefly: If you build a net that is molecule for molecule indistinguishable from the net in my skull, then you will have duplicated and not merely simulated a human brain. But if a net is identified purely in terms of its computational properties then we know from familiar results that any such properties can be duplicated by a Universal Turing machine. And Strong AI claims for such computations would be subject to Chinese Room style refutation.

24. For purposes of the present discussion, the crucial question is: In virtue of what does the notion "same connectionist net" identify an equivalence class? If it is in virtue of computational properties alone, then a Strong AI version of connectionism is still subject to the Chinese Room Argument, as Harnad's example of the three rooms illustrates nicely. But if the equivalence class is identified in terms of some electrochemical features of physical architectures, then it becomes an empirical question, one for neurobiology to settle, whether the specific architectural features are such as to duplicate and not merely simulate actual causal powers of actual human brains. But, of course, at present we are a long way from having any nets where such questions could even be in the realm of possibility.

25. The characteristic mistake in the literature - at least such literature as I am familiar with - is to suggest that because the nets duplicate certain formal properties of the brain, they will somehow thereby duplicate the relevant causal properties. For example, the computations are done in systems that are massively parallel and so operate at several different physical locations simultaneously. The computation is distributed over the whole net and is achieved by summing input signals at nodes according to connection strengths, etc. Now will these and other such neuronally inspired features give us an equivalence class that duplicates the causal powers of actual human neuronal systems? As a claim in neurobiology the idea seems quite out of the question, as you can see if you imagine the same net implemented in the Chinese Gym. Unlike the human brain, there is nothing in the gym that could either constitute or cause mental states and processes.
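For concreteness, here is a minimal sketch (my own, with arbitrary made-up numbers) of the purely formal operation just described: a node sums its inputs weighted by connection strengths and passes the result through a squashing function. The point of writing it out is that this is a formal procedure that could be implemented in any medium whatever, including messenger boys in a gym.

    import math

    def node_output(inputs, weights, bias=0.0):
        """Sum the inputs weighted by connection strengths, then apply a logistic squashing function."""
        activation = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-activation))

    # Arbitrary illustrative values: three input signals and three connection strengths.
    print(node_output([0.2, 0.9, 0.4], [0.5, -1.2, 0.8]))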

26. There is no substitute for going through real examples, so let's take a case where we know a little bit about how the brain works. One of the ways that cocaine works on the brain to produce its effects is that it impedes the capacity of the synaptic receptors to reabsorb competitively a specific neurotransmitter, norepinephrine. So now let us simulate the formal features of this in the Chinese gym, and we can do it to any degree of precision you like. Let messenger boys in the gym simulate molecules of norepinephrine. Let the desks simulate postsynaptic and presynaptic receptors. Introduce a bunch of wicked witches to simulate cocaine molecules. Now instead of rushing to the receptors like good neurotransmitters, the boys are pushed away by the wicked cocaine witches so they have to wander aimlessly about the floor of the gym waiting to be reabsorbed. Now will someone try to tell me that this causes the whole gym "as a system" to feel a cocaine high? Or will Harnad tell me that, because of the Other Minds Problem and the Systems Reply, I can't prove that the whole gym isn't feeling a cocaine high? Or will some Strong AI connectionist perhaps tell me we need to build a bigger gym? Neurobiology is a serious scientific discipline, and though still in its infancy it is not to be mocked. The Strong AI version of Connectionism is a mockery, as the Chinese Gym Argument illustrates. Harnad, by the way, misses the point of the Chinese Gym. He thinks it is supposed to answer the Systems Reply. But that is not the point at all.
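One can even write down the formal features of the reuptake story in a few lines of code (a toy sketch of my own; the quantities and the blocker parameter are made up), which only underscores the point: running this simulation, on a laptop or in a gym full of messenger boys, does not thereby cause anything to feel a cocaine high.

    def synapse_step(free_transmitter: int, reuptake_capacity: int, blocker: float) -> int:
        """Return how much transmitter remains free after one reuptake step.

        blocker is the fraction of reuptake capacity disabled
        (0.0 = no cocaine, 1.0 = reuptake fully blocked).
        """
        effective_capacity = int(reuptake_capacity * (1.0 - blocker))
        reabsorbed = min(free_transmitter, effective_capacity)
        return free_transmitter - reabsorbed

    print(synapse_step(100, 80, blocker=0.0))  # most transmitter is reabsorbed
    print(synapse_step(100, 80, blocker=0.9))  # transmitter lingers, unabsorbed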

27. The dilemma for Strong AI Connectionism can be stated succinctly: If we define the nets in terms of their computational properties, they are subject to the Chinese Room. Computation is defined syntactically, and syntax by itself is not sufficient for mental contents. If we define the nets in terms of their purely formal properties, independently of the physics of their implementation, then they are subject to the Chinese Gym. It is out of the question for empirical reasons that the purely formal properties implemented in any medium whatever should be able to duplicate the quite specific causal powers of neuronal systems. If, finally, we define the nets in terms of specific physical features of their architecture, such as voltage levels and impedance, then we have left the realm of computation and are now doing speculative neurobiology. Existing nets were not even designed with the idea of duplicating the causally relevant internal neurobiological properties. (Again, does anyone really doubt this?)

REFERENCES

Harnad, S. (2001) Grounding symbols in the analog world with neural nets -- A hybrid model. PSYCOLOQUY 12(034) http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?12.034

Searle, J.R. (1980) Minds, brains and programs. Behavioral and Brain Sciences 3: 417-424. http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.searle2.html http://www.bbsonline.org/documents/a/00/00/04/84/index.html

Searle, J.R. (1992) The Rediscovery of the Mind. Cambridge, MA: MIT Press.

