Dan L. Chiappe & Andre Kukla (2000) Artificial Intelligence and Scientific Understanding. Psycoloquy: 11(064) Ai Cognitive Science (4)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).

ARTIFICIAL INTELLIGENCE AND SCIENTIFIC UNDERSTANDING
Commentary on Green on AI-Cognitive-Science

Dan L. Chiappe & Andre Kukla
Department of Psychology
University of Toronto
Toronto, Ontario M5S 3G3
Canada

danilo@toronto.edu kukla@psyc.utoronto.ca

Abstract

Jerry Fodor is in the paradoxical position of being an avid defender of the computational theory of mind, while at the same time being one of the main detractors of AI research. This paper elucidates Fodor's position by showing that his criticisms are methodological in nature. Much AI research proceeds by modeling gross observable regularities. It is argued that this strategy betrays a deep misunderstanding of the nature and purpose of scientific research. Moreover, such an approach to psychology will be unable to answer any deep theoretical questions.

Keywords

artificial intelligence, behaviorism, cognitive science, computationalism, Fodor, functionalism, Searle, Turing Machine, Turing Test.
1. In his target article, Green (1993/2000) discusses why Jerry Fodor, an avid supporter of the computational theory of mind, is also one of the main detractors of artificial intelligence research. As Green points out, Fodor's criticisms of AI stem from his rejection of the modeling of gross observable variance as a way of turning a general ontological picture into a full-blown SCIENCE of the mind. The problem is that modeling gross observable variance by writing computer programs does not answer the basic ontological questions any more than Disneyland helps to elucidate the fundamental structure of the physical world (Fodor, 1991a). The methodology of AI does not enable cognitive scientists to deal with problems about the nature of rationality, intentionality and consciousness. Since Green's analysis strikes us as for the most part correct, we will confine ourselves to some remarks by way of elaboration and clarification.

2. The first thing to point out is that Fodor's criticisms of AI research betray a deeper suspicion of what he takes to be the third dogma of empiricism (Fodor, 1991b). This is the empiricist assumption that the purpose of scientific activity is to predict experiences, or to "save the appearances". For example, Quine (1953) claims that "As an empiricist I continue to think of the conceptual scheme of science as a tool, ultimately, for predicting future experience in the light of past experience" (p. 252). The assumption rests upon the empiricist idea that the notion of "data" can be reconstructed using the psychological categories of "experience" or "observation."

3. This position strikes Fodor as being quite absurd. As Fodor says: "Could the goal of scientific activity really just be to make up stories that accommodate one's sensory promptings? Would that be a way for a grown man to spend his time?" (1991b, p. 202). According to Fodor, it is much more reasonable to suppose that the purpose of science is to increase our knowledge about the world, and that predicting experiences is simply a means of establishing whether our theories are true. After all, Fodor adds, "...if all you want is to be able to predict your experiences, the rational strategy is clear: Don't revise your theories, just arrange to have fewer experiences; close your eyes, put your fingers in your ears, and don't move. Now, why didn't Newton think of that?" (1991b, p. 202).

4. Thus AI has inherited the third dogma of empiricism: that the sole purpose of scientific theories is to save the phenomena. Once the phenomena are saved, science is done. Hence, once you program a computer in a way that passes the Turing test, you have completed the science of psychology. Fodor's point is that this puts the cart before the horse. It is certainly the case that an adequate theory of psychology is one that can make the correct predictions, but making predictions is not the telos of science. The purpose of science is to increase our understanding of how natural phenomena operate; its goal is to grasp the nature of the processes underlying the appearances.

5. To drive the point home, consider the possible world in which some hacker stumbles upon a program that enables a computer to pass the Turing test. Is the creation of such a program ipso facto a scientific breakthrough? The answer, according to Fodor, would be an unequivocal "No!" This is because what we would have is a model that is as complicated as the real thing: a vastly complicated computer program that manages to reproduce human intelligent behavior. Our understanding of psychological processes would not be increased one iota by such a model. Pylyshyn (1984) makes the point in the following way:

    "Even if it were true that a Turing machine can behave in a manner
    indistinguishable from that of a human... such behavioral mimicry
    is a far different matter from providing an explanation of that
    behavior. [Explanation] entails capturing the underlying
    generalizations as perspicuously as possible and relating them to
    certain universal principles" (pp. 53-54, our italics).

Hence, we would still have the task of coming up with the right psychological generalizations, and creating the right conceptual schemes for the purpose of capturing these regularities. The science of psychology would remain undone.

6. What all of this points to is the fact that, for Fodor, proper science involves "carving nature at the joints," what Ian Hacking (1991) has referred to as "Plato's unsavoury rubbish." It involves creating the kind of conceptual scheme that, through gradual development, enables us to grasp the most salient regularities in a particular domain. This is what Hempel (1965) refers to as the systematic import of a conceptual scheme. When a conceptual scheme allows us to formulate nomological principles that lead to successful predictions, we have grounds for believing that we have captured the essential mechanisms in that field of inquiry, that the kinds we employ are NATURAL kinds.

7. Now, merely modelling gross observable variance does not enable scientists to pursue the sort of systematization that is constitutive of good science. In the case of psychology, reproducing intelligent behavior does not ipso facto lay bare the skeletal structure of cognition. It does not get you the laws of psychology any more than building Disneyland gets you the laws of physics. Hence the price the cognitive science community has paid for relying on AI's modelling of gross observable variance is that our understanding of cognitive processes is still well nigh non-existent, save a few empirical details. The nature of psychological mechanisms remains clouded, waiting to be penetrated by some sage's sun-like gaze.

8. All of this is most evident in the case of general reasoning processes, which have proved to be most recalcitrant to computational modelling. The main reason for this is that our general reasoning processes, such as those involved in language comprehension, seem to be intractably holistic. They are sensitive to properties of the entire belief system (Fodor, 1983; Haugeland, 1979). In the case of language comprehension, for example, human beings can comprehend any of an infinite number of sentences in their natural language, which means that they have to be able to supply the relevant context for any of these sentences (Sperber and Wilson, 1986). This means that any item in the knowledge base has to be potentially available at any given time.

9. The problem of explaining the holistic nature of cognition is called the "frame problem", and the discovery of this problem is probably AI's sole contribution to the psychology of thought. Basically, it is the problem of explaining how rational cognitive systems manage to recognize relevance: how they bring the relevant information to bear in the processing of new information without performing an exhaustive search through the entire knowledge base.
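
To fix ideas, here is a minimal sketch in Python (the example is ours, not Fodor's) of the strategy that the frame problem rules out: treating relevance-finding as an exhaustive scan of the knowledge base. The toy relevance test is a deliberate placeholder; the problem is precisely that nobody knows how to replace the scan with a procedure that finds the relevant beliefs without inspecting them all.

    # A deliberately naive model of relevance-finding: test every stored
    # belief against the new input. The cost scales with the entire belief
    # system, which is exactly the holism described above.

    def retrieve_relevant(new_item, knowledge_base, is_relevant):
        """Exhaustive search: every belief in the base is examined."""
        return [b for b in knowledge_base if is_relevant(b, new_item)]

    # A toy stand-in for relevance: shared vocabulary with the input.
    def shares_a_word(belief, new_item):
        return bool(set(belief.lower().split()) & set(new_item.lower().split()))

    knowledge_base = [
        "waiters bring menus",
        "restaurants serve food",
        "tickets are bought at the station",
    ]

    print(retrieve_relevant("what do waiters bring", knowledge_base, shares_a_word))
    # ['waiters bring menus']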

10. Most AI attempts to deal with the holistic nature of cognition have used various heuristics in order to pre-specify the relevance of information. This is essentially the strategy underlying the idea of frames, scripts and schemas (e.g. Schank and Abelson, 1975). Computers are programmed with blocks of information that enable them to formulate expectations in stereotypical situations. These schemas contain all of the information that is allegedly relevant in each of these situations.
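
As a concrete illustration, here is a minimal sketch in Python (our reconstruction, not Schank and Abelson's actual formalism) of what a script amounts to as a data structure: a stereotyped situation is stored as a fixed sequence of expected events, so that "relevance" reduces to looking up one's place in the sequence.

    # A toy "restaurant script" in the spirit of Schank and Abelson (1975).
    # Relevance is pre-specified: the stereotyped event sequence is stored
    # in advance, and comprehension is matching input against it.

    RESTAURANT_SCRIPT = {
        "roles": ["customer", "waiter", "cook"],
        "props": ["table", "menu", "food", "bill"],
        "events": ["enter", "be seated", "order", "eat", "pay", "leave"],
    }

    def expectations_after(script, observed_event):
        """Return the events the script predicts to follow the observed one."""
        events = script["events"]
        if observed_event not in events:
            return None  # non-stereotypical input: the script is silent
        return events[events.index(observed_event) + 1:]

    print(expectations_after(RESTAURANT_SCRIPT, "order"))  # ['eat', 'pay', 'leave']
    print(expectations_after(RESTAURANT_SCRIPT, "fire alarm rings"))  # None

The second call anticipates the objection of the next paragraph: outside the pre-stored sequence, a script generates no expectations at all.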

11. There are various problems with this approach, as a moment of reflection brings to light. First of all, the approach presupposes a solution to the frame problem: presumably we build up our frames and schemas through experience in these situations, and acquiring patterns of relevance already presupposes an ability to attend to relevant information. Moreover, schemas do not explain how people manage to attend to relevant information in non-stereotypical situations. In such situations, human beings have the competence to reason about relevance in a way that a computer (at least so far) cannot.

12. In short, AI research has come up empty as far as our general reasoning processes are concerned. As Fodor says, "...the attempt to develop general models of intelligent problem-solving...has produced surprisingly little insight despite the ingenuity and seriousness with which it has often been pursued" (1983, p. 126). Nor do the strategies that are currently being offered give us any grounds for hope [NOTE 1].

13. According to Fodor (1983), the main problem is that the condition for a successful science does not seem to be met in the case of higher cognitive processes. "The condition for successful science (in physics, by the way, as well as psychology) is that nature should have joints to carve it at: relatively simple subsystems which can be artificially isolated and which behave, in isolation, in something like the way that they behave in situ" (p. 128). Our general reasoning mechanisms, with their holism and contextual sensitivity, do not seem to satisfy this criterion.

14. To conclude, what AI researchers have failed to do is address the fundamental question of whether there can even BE a science of reasoning. To answer it, we need to try to develop a conceptual scheme that enables us to explain how general reasoning processes can have their effects, or else to show that this cannot be done, as Fodor (1983) seems to suggest. More programming is not going to answer this question. As Fodor (1987, p. 148) says, what AI researchers have to do is "down tools and become PHILOSOPHERS ...One feels for them. Just think of the cut in pay!"

NOTES

[1] McDermott (1987), for example, discusses the "sleeping dog strategy," in which only those facts that are pre-specified to change as a result of a particular action are updated. As Fodor (1987) shows, however, this strategy trades on our intuitions about inductive relevance without formalizing them. In addition, the new connectionist approach seems to have the right sort of holistic properties, but unfortunately these systems give us models that are as complex as the phenomena we want to explain.
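
For concreteness, here is a minimal sketch in Python (our reconstruction of the idea, not McDermott's code) of the sleeping dog strategy: each action comes with a pre-specified list of the facts it changes, and every fact not on the list is left alone. The unformalized judgment of inductive relevance is hidden in the hand-written effect lists.

    # Sleeping dog strategy: after an action, update only the facts that
    # the action is pre-specified to change; let all other facts (the
    # "sleeping dogs") lie.

    world = {"door_open": False, "light_on": False, "cat_fed": False}

    # Each action lists exactly the facts it changes. Deciding what belongs
    # on these lists is the intuitive relevance judgment that, on Fodor's
    # view, is never formalized.
    EFFECTS = {
        "open_door": {"door_open": True},
        "flip_switch": {"light_on": True},
    }

    def perform(action, world):
        world.update(EFFECTS[action])  # touch only the pre-specified facts
        return world

    print(perform("open_door", world))
    # {'door_open': True, 'light_on': False, 'cat_fed': False}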

REFERENCES

Fodor, J. A. (1983). Modularity of mind. Cambridge, MA: MIT Press.

Fodor, J. A. (1987). Modules, frames, fridgeons, sleeping dogs, and the music of the spheres. In Z. Pylyshyn (Ed.), The robot's dilemma: The frame problem in artificial intelligence. Norwood, NJ: Ablex Publishing Corporation.

Fodor, J. A. (1991a). Replies. In B. Loewer and G. Rey (Eds.), Meaning in mind: Fodor and his critics (pp. 255-319). Cambridge, MA: Blackwell.

Fodor, J. A. (1991b). The dogma that didn't bark (a fragment of a naturalized epistemology). Mind, 100(398), 201-220.

Green, C. D. (2000). Is AI the right method for cognitive science? PSYCOLOQUY 11(061). ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/2000.volume.11/psyc.00.11.061.ai-cognitive-science.1.green http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?11.061

Hacking, I. (1991). A tradition of natural kinds. Philosophical Studies, 61, 109-126.

Haugeland, J. (1979). Understanding natural language. Journal of Philosophy, 76, 619-632.

Hempel, C. (1965). Fundamentals of taxonomy. In Aspects of scientific explanation and other essays in the philosophy of science (pp. 137-154). New York, NY: The Free Press.

McDermott, D. (1987). We've been framed: Or, why AI is innocent of the frame problem. In Z. Pylyshyn (Ed.), The robot's dilemma: The frame problem in artificial intelligence. Norwood, NJ: Ablex Publishing Corporation.

Pylyshyn, Z. (1984). Computation and cognition: Toward a foundation for cognitive science. Cambridge, MA: MIT Press.

Quine, W. V. O. (1953). Two dogmas of empiricism. In P. Moser and A. Vander Nat (Eds.), Human knowledge: Classical and contemporary approaches (pp. 241-253). New York: Oxford University Press.

Schank, R. & Abelson, R. (1975). Scripts, plans and knowledge. Proceedings of the fourth international joint conference on artificial intelligence, Tbilisi. Re-published in P. Johnson-Laird and P. Wason, (1977) (Eds.), Thinking. Cambridge, England: Cambridge University Press.

Sperber, D. & Wilson, D. (1986). Relevance: Communication and cognition. Cambridge, MA: Harvard University Press.

