Christopher D. Green (2000). Is AI the Right Method for Cognitive Science? Psycoloquy: 11(061) AI Cognitive Science (1)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).
Psycoloquy 11(061): Is AI the Right Method for Cognitive Science?

IS AI THE RIGHT METHOD FOR COGNITIVE SCIENCE?
Target Article by Green on AI-Cognitive-Science

Christopher D. Green
Department of Psychology,
York University
Toronto, Ontario M3J 1P3
Canada
http://www.yorku.ca/faculty/academic/christo/

christo@yorku.ca

Abstract

It is widely held that the methods of AI are the appropriate methods for cognitive science. Fodor, however, has argued that AI bears the same relation to psychology as Disneyland does to physics. This claim is examined in light of the widespread but paradoxical acceptance of the Turing Test - a behavioral criterion of intelligence - among advocates of cognitivism. It is argued that, given the recalcitrance of certain deep conceptual problems in psychology, and disagreements concerning its basic vocabulary, it is unlikely that AI will prove to be very psychologically enlightening until some consensus is achieved.

Keywords

artificial intelligence, behaviorism, cognitive science, computationalism, Fodor, functionalism, Searle, Turing Machine, Turing Test.
    REPRINT OF: Green, C. D. (1993). Is AI the right
    method for cognitive science? COGNOSCENTI: Bulletin of the Toronto
    Cognitive Science Society 1: 1-5.

    This target article is reprinted in PSYCOLOQUY in order to elicit
    further OPEN PEER COMMENTARY. The ten original commentaries and
    responses are also reprinted here. Continuing commentary is now
    invited. Qualified professional biobehavioural, neural or cognitive
    scientists should consult PSYCOLOQUY's Websites or send email
    (below) for Instructions if not familiar with format or acceptance
    criteria for commentaries (all submissions are refereed).

    PSYCOLOQUY is a refereed online journal of Open Peer Commentary
    sponsored by the American Psychological Association.

    To submit commentaries or target articles, or to seek information:

    EMAIL:      psyc@pucc.princeton.edu
    URLs:       http://www.princeton.edu/~harnad/psyc.html
                http://www.cogsci.soton.ac.uk/psyc

1. It is widely held that the methods of artificial intelligence (AI) constitute a legitimate - even the preferred - manner of conducting research in cognitive science (CS). This belief is due, in large part, to the primacy of computational functionalism (CF), currently the most influential framework for cognitive science. Functionalism - whether of the computational variety or not - holds that mental states are abstract functions that get us from a given input (e.g., sensation, thought) to a given output (e.g., thought, behavior). By itself, functionalism is not an adequate account, however. As Fodor (1981a) has pointed out, if the only requirement were the mapping of inputs on to outputs there would be no explanatory value in functionalism because the task could be accomplished trivially. For instance, if I wanted to explain how it is that people can answer questions on a wide array of topics, I could simply postulate the existence of a "question-answering function" which maps all questions on to their answers. Such an explanation would, of course, be completely vacuous. In order to make functionalism function, so to speak, one must put constraints on the sorts of functions that are to be considered reasonable as psychological theories.
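
To see why the unconstrained view is vacuous, consider the following minimal sketch (mine, not Green's; the toy questions and answers are made up purely for illustration). A bare lookup table is, formally, a function from questions to answers, and so satisfies the unconstrained functionalist requirement, yet it explains nothing about how answering is accomplished:

    # A deliberately vacuous "question-answering function" (illustrative only;
    # the table contents are made up). It maps questions to answers, so it
    # meets the bare input-output requirement, but the lookup table merely
    # restates the very facts a psychological theory would have to explain.
    ANSWERS = {
        "What is the capital of France?": "Paris",
        "What is 7 times 8?": "56",
    }

    def answer(question: str) -> str:
        return ANSWERS.get(question, "I don't know.")

    print(answer("What is 7 times 8?"))  # prints "56" - correct output, zero explanation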

2. The computationalist's answer is: allow only those functions that can be implemented on Turing machines. A Turing machine is, of course, not a machine at all, but rather an idealized model of a machine that can only read, write, and erase a finite number of symbol types situated in a specified order on a (potentially infinite) length of tape. It "decides" which symbols are to be read, written or erased according to rules given in a Turing machine table. Such tables are, in effect, (very) low-level computer programs. By restricting one's functions to those implementable on a Turing machine, one rules out the positing of "question-answering functions" and the like, unless one can specify a Turing machine table that would enable a Turing machine actually to accomplish the task at hand. Of course, one could propose other sorts of functionalisms - logical behaviorism, for instance, was just such an alternative sort of functionalism (see Fodor, 1981b, pp. 9-10) - but the computer model seems to have the upper hand these days, primarily because it lends itself to realism about mental states, whereas behaviorism encourages mental instrumentalism, or even eliminativism.
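
To make the notion of a machine table concrete, here is a minimal sketch of a Turing machine simulator (my illustration, not part of the original article). The table shown implements a trivially simple machine that inverts a string of 0s and 1s and then halts; nothing psychological is claimed for it - the point is only what a machine table looks like and how it fixes the machine's behavior:

    # Minimal Turing machine simulator (illustrative sketch only).
    # The machine table maps (state, symbol read) -> (symbol to write, head move, next state).
    TABLE = {
        ("scan", "0"): ("1", +1, "scan"),   # read 0: write 1, move right, keep scanning
        ("scan", "1"): ("0", +1, "scan"),   # read 1: write 0, move right, keep scanning
        ("scan", "_"): ("_", 0, "halt"),    # blank square: halt
    }

    def run(table, tape, state="scan", blank="_", max_steps=10000):
        tape = list(tape)
        head = 0
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = tape[head] if head < len(tape) else blank
            write, move, state = table[(state, symbol)]
            if head < len(tape):
                tape[head] = write
            else:
                tape.append(write)
            head = max(0, head + move)
        return "".join(tape).rstrip(blank)

    print(run(TABLE, "10110"))  # prints "01001"

On the computationalist view, a function counts as admissible only if some such table could in principle be written for it.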

3. If one is a committed CF-ist, it is often argued, then for each cognitive process one wants to investigate one must develop a computer program (or at least show how such a program might in principle be developed) that can replicate the mental process in question. By hitching one's functionalism to one's computer one keeps the bogeyman of vacuous circularity at bay. Because such programs are simply instances of (or, at least, attempts at) AI, AI becomes the obvious method of choice for researchers in cognitive science.

4. Among advocates of computational functionalism, Jerry Fodor is one of the best-known and most influential. In a recent work, however, he has rejected AI as a creditable method of cognitive-scientific research. In response to an accusation by Dennett (1991, p. 91) that he believes the whole enterprise of Good Old Fashioned AI (GOFAI) to be ill-founded, and that this is inconsistent with his advocacy of CF, Fodor writes, "...[I]n a sense I do believe that the 'whole enterprise of GOFAI is ill-founded'. Not because it's got the wrong picture of the mind, however, but because it has a bad methodology for turning that picture into science. I don't think you do the science of complex phenomena by attempting to model gross observable variance. Physics, for example, is not the attempt to construct a machine that would be indistinguishable from the real world for the length of a conversation. We do not think of Disneyland as a major scientific achievement." (Fodor, 1991, p. 279)

5. It is the main aim of this paper to examine Fodor's claim against AI in the light of his advocacy of CF. I will argue that there are, indeed, several interesting parallels of the sort Fodor suggests, especially when AI is looked at in terms of its most widely celebrated criterion for success, the Turing test. More importantly, whereas there has long been consensus on what physics is supposed to explain, and what entities will be included in that explanation, there is little such consensus in psychology and, thus, a successful cognitive Disneyland would probably be even less useful to psychology than a physical Disneyland would be to physics.

6. The first task is to dispel some possible misinterpretations of the quotation above. One might think that because Fodor specifically cites GOFAI he is exempting connectionist AI from his argument. Nothing could be further from the truth. Fodor is a fierce detractor of connectionist models of mind. With Zenon Pylyshyn (1988) he has argued that they do not - or, more precisely, are not inherently constrained to - exhibit the productivity, generativity, and systematicity characteristic of many mental activities, most notably language. In fact, at another point in the very reply to Dennett cited above, Fodor says, with regard to the ongoing debate with his primary connectionist adversary, Paul Smolensky, "As far as I can tell, the argument has gone like this: Fodor and Pylyshyn claimed that you can't produce a connectionist theory of systematicity. Smolensky then replied by not producing a connectionist theory of systematicity. Who could have foreseen so cunning a rejoinder?" (Fodor, 1991, p. 279). Thus, it cannot be claimed that Fodor's critique of GOFAI was intended to let connectionism off the hook. It was a response to a question specifically about GOFAI.

7. A second possible misconstrual is that Fodor is rejecting the "strong" view of computational functionalism. Recall that according to John Searle (1980, 1984), the thesis of "strong" AI is that symbolic manipulation of a syntactic sort JUST IS a cognitive process. Searle is pretty clear about whom he has in mind when he discusses "strong" AI. He lists the giants of the field: Allen ("we have discovered that intelligence is just symbol manipulation") Newell, Herbert ("we already have machines that can literally think") Simon, Marvin ("we'll be lucky if the next generation of computers keep us around as house pets") Minsky, and John ("thermostats have beliefs") McCarthy (all quotes cited in Searle, 1984, pp. 29-30). In personal communication, Searle has said that early Putnam, as well as Fodor and Pylyshyn, are to be included in this group as well. Under the "weak" view of AI, the programs are just simulations of cognitive processes, rather than actual instantiations of them, and they bear exactly the same relation to psychology that computational simulations of weather systems bear to meteorology. No one for a moment believes them to be actual weather systems.

8. There is an interesting terminological ambiguity at work here that is crucial to the present issue. Searle speaks of strong and weak AI, and puts Fodor in the strong camp. Fodor, however, rejects AI, but subscribes to what might be called "strong CF". According to strong CF, or at least according to Fodor, the "right kind" of computational symbol manipulation is thought actually to instantiate a cognitive process, but it is doubtful that Fodor would be so liberal as to attribute true cognitive capacity to just any such computation (as, at least, McCarthy seems to). What counts as the "right kind" of computation, however, is never made very clear, or at least has not been fully worked out as yet. Thus, there is nothing in the quotation given at the beginning of this paper to indicate that Fodor has had a crucial change of heart on the question of the relation between computation and cognition. He is still a fully-fledged computationalist. He just doubts that the tools of program design will settle the important questions that the research program is faced with (e.g., How are the rules and results of cognition represented? What are intentionality and rationality? What, if anything, are consciousness and qualia?).

9. With this detritus cleared out of the way, the primary question that remains is whether or not, as Fodor claims, AI bears the same relation to psychology as Disneyland does to physics. At Disneyland, roughly speaking, various mechanical contrivances are put in place out of view that cause the apparent ground, the apparent river, the apparent animal, the apparent person, etc., to move as one would expect the real McCoy to. Two Disney-ish characteristics are of particular note here. First, there are no real grounds, rivers, animals, or people involved (at least in the parts with which we are here concerned). Second, for the most part, none of them are very convincing, except to the very young. So we have to imagine a sort of "ideal" Disneyland - perhaps something more akin to the Westworld of cinematic fame - in which the features of the ground, rivers, animals, people, etc. are indistinguishable from the real thing. Notice that what counts as indistinguishable depends on an implicit set of constraints about what sorts of investigations one is allowed to pursue. If one digs (deep enough) into the "ground", one finds the machine that causes the simulated earthquake. If one traces the "river" to its source, one finds a (probably very big) faucet. If one cuts open one of the "animals" or "people", one finds (perhaps) a mass of electronic circuitry rather than flesh and organs. So, we are forced to say that within certain constraints of normal action (where normal does not include digging too deep, following the river too far, or doing violence to the local "critters") the ideal Disneyland is indistinguishable from the real world.

10. One might be led by such considerations to ask, "If AI is to psychology as Disneyland is to physics, then what part of the AI-psychology relation corresponds to the constraints placed on the 'investigator' of ideal Disneyland?" The question is pertinent, and leads us to consider Fodor's claim that the hypothetical machine "would be indistinguishable from the real world for the length of a conversation". The implicit reference is clearly to the Turing Test. In support of Fodor's position, Turing puts parallel constraints on the powers of his "interrogator" to decide which of the entities with which he is conversing is human, and which is a computer. The conversation must take place remotely and via teletype, so that the cognitive game being played is not given away by merely physical characteristics.

11. It is nothing short of paradoxical that the cutting edge of cognitivism - AI - would adopt so stringently behaviorist a test for its success. On at least one popular account of what happened in psychology in the 1950s and 1960s, behaviorists such as Skinner said, "There is no point in looking inside the black box. You will find nothing there of value," to which the incipient cognitivists (in large part the "artificial intelligentsia") replied, "Nonsense. To the degree we understand the contents of the black box, we will understand the real determinants of behavior rather than just the statistical abstractions offered by S-R psychology."

12. When the time came to judge whether or not AI researchers had succeeded in producing artificial intelligence, they resorted to the behaviorist tactic of restricting access to the inside of the black box, under the pretext that such access would give up the game. But it is not solely a matter of giving up the game. If I, playing the role of Turing's interrogator, am unable to distinguish between the computer and the human, it may not be because the two are indistinguishable, but because I don't know what questions to ask. Access to the innards of the machine would give up the game only in the sense that I would know how to show, as behaviorally as you might like, the computer to be a fraud. Not only is such access legitimate, it is in the very tradition that allowed cognitivism itself to bring down behaviorism. Cognitivism cannot deny such access, on pain of incoherence.

13. To extend the game paradigm to cover this point, if Turing worries about my integrity - that my own ego won't allow me to fail, given the truth about which of my conversational partners is the human and which is the computer - then let a person other than myself climb into the machine, find out what question would likely trip it up, and tell me to ask that question (but nothing about the reasons for asking it). If, from the answer to the question, I can tell which is the human and which the computer, then the test has been failed by the program, and the "veil of ignorance" behind which Turing wished me to operate has been left intact.

14. There is still a serious discrepancy between Ideal Disneyland and AI. As discussed above, in order to show that Ideal Disneyland is a fraud, I could dig into the ground and expose the device that causes the simulated earthquake. In order to make my discovery a meaningful piece of data, however, I would have to compare it to my prior knowledge of the causes of real earthquakes. If I had no knowledge of the geological determinants of real earthquakes, I might well dig into the Disney ground, find the relevant device, and conclude that all earthquakes are caused by such a device. This has great bearing on the dilemma with which the AI researcher is faced. The fact of the matter is, in large part, that we just don't know what circumstances are necessary and sufficient for the instantiation of cognitive states and processes. Rather than digging down into a ready-built machine, however, we build machines up from scratch, attempting to engineer them so that they will behave in certain ways that we know truly cognitive entities behave.

15. I think the closeted assumption underlying this strategy is one that is also found implicit in coherence theories of semantics: that there is only one way for a system to be if it is to exhibit an indefinitely large, predetermined set of features. As John Haugeland (1985) puts it, if I give you only a couple of short letter strings and ask you to decode them (i.e., translate them into English), you're going to have a very difficult time because of the indeterminacies involved. You may well find two, three, or many more equally plausible alternative decodings, and have no way to choose among them. What you need are more strings in the same code. These will allow you to falsify some of the various hypotheses consistent with the initial set of strings. For instance, imagine that you are given the strings "clb" and "fjb". You would likely surmise that the last letter of both is the same. But exactly which letter it is, you cannot tell. All you know about the other two letters in each string is that none of them are the same. Thus, your decoding hypotheses would include all pairs of three-letter words that end with the same letter, and have no other identical letters among them; quite a large set. Next imagine that the strings "bzb" and "czc" were added to your decoding set. With these two extra examples you would be able to rule out many hypotheses consistent with the first two. You now know that whatever "b" represents can begin a word as well as end it, and that whatever "c" represents can end a word as well as begin it. Your knowledge of English might also lead you to surmise (though not conclusively assert) that "z" represents a vowel, and that probably "l" does too. Given more strings you would be able to narrow down the possibilities to a precious few, and ultimately to a unique one.
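
The pruning of decoding hypotheses that Haugeland describes can be made concrete with a small sketch (mine, not Haugeland's or Green's; the toy word list stands in for "knowledge of English" and is an assumption of the illustration). Each surviving hypothesis is an assignment of dictionary words to the coded strings that admits a single consistent, one-to-one letter substitution; adding the strings "bzb" and "czc" sharply reduces their number:

    from itertools import product

    # Toy stand-in for "knowledge of English" (an assumption of this illustration).
    WORDS = ["god", "bed", "dad", "gag", "mad", "rod", "car", "cab", "jab",
             "rib", "bob", "mom", "pop", "eye", "tin", "sun", "fun", "ran"]

    def consistent_mapping(coded, plain):
        """Return a code-letter -> plain-letter mapping that decodes each coded
        string to its paired word, or None if no consistent, one-to-one
        substitution exists."""
        mapping = {}
        for c_word, p_word in zip(coded, plain):
            for c, p in zip(c_word, p_word):
                if mapping.get(c, p) != p:          # same code letter, two plain letters
                    return None
                mapping[c] = p
        if len(set(mapping.values())) != len(mapping):  # two code letters, one plain letter
            return None
        return mapping

    def surviving_hypotheses(coded):
        """Count assignments of words to the coded strings that remain consistent."""
        return sum(
            consistent_mapping(coded, plain) is not None
            for plain in product(WORDS, repeat=len(coded))
        )

    print(surviving_hypotheses(["clb", "fjb"]))                # many hypotheses survive
    print(surviving_hypotheses(["clb", "fjb", "bzb", "czc"]))  # far fewer survive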

16. A similar process is assumed to guide AI research. There are endless programs that will get a computer to utter a single grammatical English sentence. By parity of argument, there must be far fewer that will enable it to utter 100 grammatical English sentences. And, ultimately, there must be only one that can enable it to utter any of the infinitely many grammatical sentences of the English language. And, if you find that one program, since it is unique, you must have the program that guides our production of English sentences. This is, I think, the AI researcher's secret belief.

17. John Searle (1992) has characterized the assumptions of the Artificial Intelligentsia somewhat similarly in this regard. He writes: "The idea [in computational cognitive science research], typically, is to program a commercial computer so that it simulates some cognitive capacity, such as vision or language. Then if we get a good simulation...we hypothesise that the brain is running the same program as the commercial computer..." (p. 217). As Searle goes on to point out, however, there are some serious difficulties with this approach: "Two things ought to worry us immediately about this project. First, we would never accept this mode of explanation for any function of the brain where we actually understood how it worked at the neurobiological level [e.g. frog vision]. Second, we would not accept it for other sorts of systems that we can simulate computationally [e.g., Can word processors be said to "explain" the functioning of typewriters?]." (p. 217)

18. In effect, the AI researcher just builds Disneyland after Disneyland, attempting to capture more and more of the world's "behavior" with each new model, in the hope that eventually one will capture all of the world. What is more, it is believed that such a Disneyland would somehow rise up out of the realm of the "artificial" and actually be a world. "Isn't this," it might be asked, "just what science does? Aren't AI programs just models or theories of cognition, and isn't physics, for instance, just the attempt to build models or theories that more and more closely represent the behavior of the world?"

19. In a superficial way, yes; but in what I think is a much deeper way, the answer might be no. Computer programs are not theories of cognition in the same way that the laws of physics are theories about the world. The reason is that physics has (idealized) physical entities that its theories operate over in a way that psychology does not. To put it a little crudely, the difference is that physics knows what it is talking about, e.g. pendula and falling bodies, and to the extent that these idealized entities correspond to real entities, physics works. Psychology does not, in this sense, know what it is talking about. The debate still rages over whether the "thought", presumably at least one of the ground-level entities of psychology, is to be regarded as a propositional attitude, whether it belongs in our ontological vocabulary at all, and if so, how it is related to the other entities in our ontology. No one has ever disagreed with the assumption that explanations of falling bodies and the movements of the planets are among the key goals of physics. In psychology there is no similar consensus, in no small measure because the entities of psychology do not seem to be physical in any straightforward way (in spite of repeated efforts to "reduce" them to behavior, neural activity, and the like).

20. To this extent, it is unlikely that any AI program, no matter how clever, no matter how fascinatingly human-like in its behavior, could ever gain consensus as an example of true intelligence. By those who disagreed with its basic psychological building blocks, it would be dismissed as a contrivance: a machine that is very cleverly programmed, but that cannot seriously be countenanced as a plausible model of real psychological functioning.

21. What would Fodor give us instead? Presumably exactly what he has given us already - an extended discussion of and debate on the sorts of phenomena that must be accounted for by any widely acceptable theory of psychology, and of whether and how those phenomena might be, in principle, instantiated in a computational system. Twenty-five years ago Fodor (1968) claimed that there was something "deeply, conceptually wrong" with psychology. Those deep conceptual difficulties remain with us, and, boring as it may seem to some, until they are sorted out I see little hope that a machine is going to come along that will either solve or dissolve them. Each program will be dedicated, either explicitly or implicitly, to a certain set of basic psychological entities, and that choice, more than its behavior, will determine who buys in and who sells off. This is not just ideological prejudice, but a reflection of the fact that, in the final analysis, behavior is not all that we are interested in. Internal structure and function are even more important.

22. So, despite repeated attempts over the last century to declare philosophy irrelevant, archaic, dead, or otherwise unpleasantly aromatic, philosophy is still with us. Machines come and machines go, but the same problems that dogged Skinner, Hull, and Watson, Heidegger, Husserl, and Brentano, Hume, Berkeley and Locke, Kant, Leibniz, and Descartes are with us still. In effect AI is the attempt to simulate something that is not, at present, at all well understood, and I judge the odds of success to be of about the same order as the odds of a toddler rediscovering the architectural principles governing the dome while playing with building blocks.

REFERENCES

Dennett, D. C. (1991). Granny's campaign for safe science. In B. Loewer & G. Rey (Eds.), Meaning in mind: Fodor and his critics (pp. 255-319). Cambridge, MA: Blackwell.

Fodor, J. A. (1968). Psychological explanation: An introduction to the philosophy of psychology. New York: Random House.

Fodor, J. A. (1981a). The mind-body problem. Scientific American, 244, 114-123.

Fodor, J. A. (1981b). Introduction: Something on the state of the art. In Representations: Philosophical essays on the foundations of cognitive science (pp. 1-31). Cambridge, MA: Bradford.

Fodor, J. A. & Pylyshyn, Z. W. (1988). Connectionism and cognitive architecture: A critical analysis. Cognition, 28, 3-71.

Fodor, J. A. (1991). Replies. In B. Loewer & G. Rey (Eds.), Meaning in mind: Fodor and his critics (pp. 255-319). Cambridge, MA: Blackwell.

Haugeland, J. (1985). Artificial intelligence: The very idea. Cambridge, MA: MIT Press.

Searle, J. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3, 417-424. http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.searle2.html

Searle, J. (1984). Minds, brains and science. Cambridge, MA: Harvard University Press.

Searle, J. (1992). The rediscovery of the mind. Cambridge, MA: MIT Press/Bradford.

