Philip David Zelazo (2000) The Nature (and Artifice) of Cognition. Psycoloquy: 11(076) AI Cognitive Science (16)


THE NATURE (AND ARTIFICE) OF COGNITION
Commentary on Green on AI-Cognitive-Science

Philip David Zelazo
Department of Psychology
University of Toronto
Toronto, Ontario M5S 3G3
Canada

zelazo@PSYCH.TORONTO.EDU

Abstract

It ought to be uncontroversial that human cognition cannot be explained merely by constructing a device that behaves outwardly like a human being (i.e., one that maps the same inputs onto the same outputs) without regard for how it accomplishes that end: there are too many means to the same end. An important part of the functional role of a mental state consists in the causal relations between that mental state and other mental states: internal interactions matter. Even "blind" AI has heuristic value. Cognitive science need not wait for consensus concerning the fundamental nature of mind; on the contrary, cognitive science is very likely to help us answer the difficult ontological questions.

Keywords

artificial intelligence, behaviorism, cognitive science, computationalism, Fodor, functionalism, Searle, Turing Machine, Turing Test.

I. INTRODUCTION

1. Green (1993/2000) makes Fodor's claim that AI makes bad cognitive science seem radical, when in fact it is (or ought to be) uncontroversial, at least among those who believe that cognitive science is the study of (a subset of?) the cognitive processes that are actually found in (though not necessarily limited to) human beings. Green's more important mistake, however, is his unduly pessimistic pronouncement concerning the present possibilities for computational (or even cognitive) psychology.

II. FODOR'S UNCONTROVERSIAL CLAIM

2. Regardless of whether one is a computational functionalist (and so believes that human cognitive processes, such as intending to X, properly simulated in a computer, would actually BE further instantiations of those processes qua functions), Artificial Intelligence, defined as an effort to model gross observable variance, is not, on its own, the best method for cognitive science. The difficulty derives from the problem of induction: the same effects can be produced in a variety of ways. AI, as opposed to (even strong) computational psychology, tries to produce intelligence or intelligent behavior per se, and is not typically concerned with the structural or algorithmic relation between computer intelligence (an engineering feat) and human intelligence (part of the proper study of mankind). Green is correct on this point: cognitive scientists are concerned with internal structure and function, not (merely) the replication of behavior by theoretically unconstrained means. As Loewer and Rey (1991) note in their introduction to the book that contains Fodor's quip about Disneyland, "Such replication is neither necessary nor sufficient for a computational understanding of mind" (p. xviii). They also specifically echo Green's charge against the Turing test: "This is an exceptionally narrow behavioristic test that would be anathema to any self-respecting functionalist" (Loewer & Rey, 1991, note 27).

3. This is not to say that AI is altogether useless for cognitive science (qua the study of HUMAN cognition). Consider Searle's (1992, quoted in Green, 1993/2000) description of the role of AI in cognitive science. If AI researchers successfully simulate some complex behavior, they may "hypothesize that the brain computer is running the same program as the commercial computer." So far, so good. The fact that some program behaves overtly like a human being is a perfectly reasonable basis for the mere hypothesis that this particular program actually accounts for human behavior. But it certainly is not proof that the hypothesis is correct, which is why Searle writes that "we would never accept this mode of explanation for any function of the brain where we actually understood how it worked at the neurobiological level..." The fact that some people (who are these people?) may, egregiously, accept successful behavioral simulation as the sole criterion of explanatory adequacy does not undermine the role of AI as an aid to hypothesis generation. The hypothesis so generated will need to be tested by further comparison with human behavior, and if it is disconfirmed, then the simulation will no longer be successful (by definition).

4. However, overt human behavior is not the only means of evaluating the correspondence between the brain's program and the computer's program. Any information (including, for example, neurobiological information) about the brain's program certainly ought to count among the criteria of the success (or lack thereof) of the modelling endeavour, if the goal of modelling is to describe accurately the brain's program at the level of computational function.

5. This raises the question of the appropriate level of functional equivalence: As cognitive scientists, we are interested not just in WHAT computation is accomplished, but in HOW the brain accomplishes that function. As functionalists, we maintain that we can study the brain's function independently of its hardware. This is not to say that we are uninterested in the brain, or that knowledge about the brain cannot provide useful constraints that inform the study of the brain's function; it is just that we need not wait until neuroscience is complete before we can proceed.

III. GREEN'S PESSIMISTIC PRONOUNCEMENT

6. After dismissing AI as THE right method for cognitive science, Green asks, "What would Fodor give us instead?" (para. 21). According to Green, Fodor would give us "...extended discussion of, and debate on, the sorts of phenomena that must be accounted for by any widely acceptable theory of psychology, and of whether and how those phenomena might be, in principle, instantiated in a computational system" (para. 21). I take this to mean that Fodor (and Green, as he indicates elsewhere in the paper) would have us engage in philosophical discourse about some basic ontological distinctions before engaging in theory-building of either the computational or the conceptual sort. "[It] is unlikely that AI will prove to be very psychologically enlightening until after some consensus on ontological issues in psychology is achieved" (p. 2); "Psychology does not...know what it is talking about" (p. 18); "...philosophy is still with us in psychology" (p. 20). I have two replies to this.

7. First, Fodor (1991) in fact tells us what he would give us instead, and it ought to be easy on the ears of cognitive psychologists. Fodor (1991) writes, "We do not think of Disneyland as a major scientific achievement. I think you do the science of complex surface phenomena by trying to pick the complexity to pieces, setting up artificial (i.e., experimental) environments in which the underlying causes can be studied one at a time. This suggests that the science of mind is Psychology, not AI" (pp. 279-280).

8. Second, Green seems to be overly sensitive to the fact that the science of higher cognitive processes ("central processing" in Fodor's terms) has its work cut out for it. Fodor (1991, p. 280) asks, "Why are people so upset to be told that there are deep, unsolved problems about the mind? I thought EVERYBODY's granny knew that." Note that these problems must be faced by cognitive science in general, not just cognitive science via AI. But we need not (as Green would have us believe) achieve consensus about the ontology of psychology before we can proceed to pick the complexity to pieces, before we can begin to solve the problems about the mind. Is there consensus about the ontology of physics? In a discussion about quantum states, Penrose (1989) writes, "What kind of picture of 'physical reality' does this provide us with at the quantum level, where different 'alternative possibilities' open to a system must always be able to coexist, added together with these strange number weightings? Many physicists find themselves despairing of ever finding such a picture" (p. 243). Indeed, rather than put the ontological picture before the (research) program, physics and cognitive science are likely to be useful in answering questions about ontology: our ideas about ontology change as our sciences evolve.

9. An oversimplification of this complex interaction between our basic philosophical commitments and our science is inherent in Green's claim that "[it] is unlikely that ANY AI program--no matter how clever, no matter how fascinatingly human-like in its behavior--could ever gain consensus as an example of true intelligence. By those who disagreed with its basic psychological building blocks, it would be dismissed as a contrivance..." (para. 20). Now, Green is correct that only if one is a functionalist will one believe that the program embodies intelligence (per se). It is another issue whether one agrees that the program captures HUMAN intelligence (i.e., is it the same program?). But even if one is not a functionalist, one may well think that the program is a useful (though incomplete) model of real psychological functioning. And finally, if such a program were to be created, it would be a remarkable achievement: to the extent that one understood the nature of the program, it would be widely recognized to have profound (though not necessarily definitive) implications for the understanding of mind.

REFERENCES

Fodor, J. A. (1991). Replies. In B. Loewer & G. Rey (Eds.), Meaning in Mind: Fodor and His Critics (pp. 255-319). Cambridge, MA: Blackwell.

Green, C. D. (2000). Is AI the Right Method for Cognitive Science? PSYCOLOQUY 11(061). ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/2000.volume.11/psyc.00.11.061.ai-cognitive-science.1.green http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?11.061

Loewer, B. & Rey, G. (Eds.) (1991). Meaning in Mind: Fodor and His Critics. Cambridge, MA: Blackwell.

Penrose, R. (1989). The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics. Oxford: Oxford University Press.

Searle, J. R. (1992). The Rediscovery of the Mind. Cambridge, MA: MIT Press.

