Keith Oatley (2000) Fodor's Sneer. Psycoloquy: 11(080) Ai Cognitive Science (20)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).
Psycoloquy 11(080): Fodor's Sneer

Commentary on Green on AI-Cognitive-Science

Keith Oatley
Centre for Applied Cognitive Science
Ontario Institute for Studies in Education
Toronto, Ontario M5S 1V6


ABSTRACT: AI is not THE method for cognitive science. It is something much better. It is A method that we did not have before. In this article I make three points: 1. It is curious that many people who discuss AI have not themselves written AI programs. The experience of programming is to understanding patterns of inference what the experience of working with differential equations is to understanding certain kinds of motion. 2. For Fodor to sneer at AI as "Disneyland" is much as it would have been, four hundred years ago, to sneer at Galileo for playing with balls. 3. With GOFAI and connectionist AI, we understand many deep psychological principles better than we did previously. We can explore synthetically as well as analytically; we can make models that actually work as well as doing experiments and conceptual analyses.


KEYWORDS: artificial intelligence, behaviorism, cognitive science, computationalism, Fodor, functionalism, Searle, Turing Machine, Turing Test.
1. To make my contribution to the debate started by Green (1993/2000) with his target article on whether AI is the right method for cognitive science, I would like to say three things. First, there seems to be a good deal of interest in AI, but mainly as an abstract entity. To illustrate (and I hope a personal note is suitable for this publication): when I came to Toronto some three years ago there was not, as far as I could tell, a hands-on course in AI at the undergraduate or graduate level that would allow students to do some programming and acquire in the process some basic AI concepts. I thought this odd, but decided that such a course was important, and that I had better put one on. The course was full and it went fine, but it later occurred to me that this was not what lots of faculties with interests in cognitive science want. They want to discuss AI generally, but not to use AI methods to do particular things, not to apply its principles, and not to teach such matters to students.

2. In a similar way, in Green's article, and the commentaries, only a minority of the participants refer to specific work in AI, or imply that they actually write programs. It is rather as if, following the introduction of differential equations into physics, many people had found these methods tantalizing but somehow to be held at arm's length. It is as if such people would discuss with great energy whether the formalisms with mysterious symbols such as dx/dt were really any good as theoretical methods, or whether they were a passing fad that would never catch on, or whether they had already suffered their demise, etc.
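To make the analogy concrete, here is a minimal sketch of what "working with" a differential equation involves (the equation dx/dt = -kx and its parameters are my own illustrative choices, not anything from Green's article): one integrates the equation step by step and checks the numerical result against the analytic solution.

```python
import math

# Integrate dx/dt = -k*x numerically (Euler's method) and compare
# with the analytic solution x(t) = x0 * exp(-k*t).
k, x0, dt, steps = 0.5, 1.0, 0.001, 2000  # integrate up to t = 2.0

x = x0
for _ in range(steps):
    x += dt * (-k * x)  # Euler step: x(t+dt) is approximately x(t) + dt*dx/dt

t = steps * dt
exact = x0 * math.exp(-k * t)
print(f"numeric x({t}) = {x:.4f}, analytic = {exact:.4f}")
```

The point of the exercise is the hands-on intuition: the symbols dx/dt stop being mysterious once one has seen the stepwise process they summarize.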

3. AI is not THE method for cognitive science. It is something much better. It is A method that we did not have before. It has rather little to do with making androids that do or don't pass Turing's test, or with modeling gross observable variance. And the issue is not one of legislating for what is and is not good science. Instead, AI is a method that allows us to get a firm theoretical grip on certain problems in vision, reasoning, planning, language, and so on, where before this method existed our grip was much weaker. It is a method that enables us to compare biological systems we don't understand, like humans, with technical systems that we do.

4. Before AI, for psychologists there was only experimentation (well, almost only experimentation). Now, with AI, if we are interested in vision, we can see what principles are involved in making a system that will analyse video input to guide a robot to pick up a glass of milk. If we are interested in action or the understanding of narrative texts, we can investigate the successes and shortcomings of planning programs. If we are interested in deep dyslexia, we can make lesions in neural net models to see what might be involved in the mistakes made by people who have suffered certain kinds of brain damage. For philosophers, before AI there was conceptual analysis. Now with the availability of the experience of programming and the intuitions to be derived from it, there is the possibility not just of analysing concepts, but of adding to our concepts and to the ways they can be used. We no longer have to be content with casting around hopefully among the furniture of the ordinary world for metaphors. We can see what it takes to create metaphors for mind. Because computers can support simulations, programs can be better metaphors for the brain's capacity to simulate the world, than (say) ripples on the surface of a pond, plumbing systems, filing cabinets, and so on.
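As an illustration of the lesioning idea, here is a minimal, self-contained sketch (the pattern vectors and the Hebbian storage rule are toy assumptions of my own, not any published model of dyslexia): a tiny associative network stores four input-output pairs, half of its connections are then zeroed at random, and recall is re-tested.

```python
import random

random.seed(0)

def matvec(W, x):
    """Multiply weight matrix W by input vector x."""
    return [sum(w * xj for w, xj in zip(row, x)) for row in W]

# Four input->output pattern pairs, stored with a Hebbian
# outer-product rule (a toy stand-in for connectionist models
# of reading).
patterns = [([1, -1, 1, -1], [1, -1]),
            ([-1, 1, -1, 1], [-1, 1]),
            ([1, 1, -1, -1], [1, 1]),
            ([-1, -1, 1, 1], [-1, -1])]
n_in, n_out = 4, 2

W = [[0.0] * n_in for _ in range(n_out)]
for x, y in patterns:
    for i in range(n_out):
        for j in range(n_in):
            W[i][j] += y[i] * x[j] / n_in

def accuracy(W):
    """Fraction of stored patterns recalled correctly (by output sign)."""
    correct = 0
    for x, y in patterns:
        pred = [1 if o >= 0 else -1 for o in matvec(W, x)]
        correct += (pred == y)
    return correct / len(patterns)

acc_intact = accuracy(W)

# "Lesion" the network: zero out roughly half of the connections
# at random, then re-test recall.
for i in range(n_out):
    for j in range(n_in):
        if random.random() < 0.5:
            W[i][j] = 0.0

acc_lesioned = accuracy(W)
print(f"intact accuracy: {acc_intact}, lesioned accuracy: {acc_lesioned}")
```

Even at this toy scale, the exercise shows the method's character: one can damage the model in controlled ways and observe what pattern of errors results, something no amount of purely verbal theorizing permits.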

5. Second, let's consider Fodor's sneering metaphor of AI as Disneyland. Of course, if one wants to be a successful rhetorician, it's good to have a striking image. Clearly, "Disneyland" has worked well. Green and several commentators have been struck by it. But Fodor's image is thoughtless. Here is the last part of the quotation from Fodor, as cited by Green: "Physics, for example, is not the attempt to construct a machine that would be indistinguishable from the real world for the length of a conversation. We do not think of Disneyland as a major scientific achievement." Let me transpose this back nearly four hundred years to the time when Galileo was experimenting with rolling balls down inclined surfaces to dilute the force of gravity in order to measure its effects. One can imagine a former-day Fodor: "Physics is the question of how the heavens are disposed, and of the ultimate nature of the movements of the celestial spheres. It is not an attempt to play with balls. We do not think of a game of marbles as a major scientific achievement."

6. Third, since the advent of AI, both GOFAI and the connectionist kind, we have been able to understand many psychological principles much more deeply than previously. Now we can express theories in a formal language of computation; we can explore synthetically as well as analytically; we can make models that actually work, as well as doing experiments and conceptual analyses; we can start to see how knowledge might be represented and used rather than just see how stimuli are related to responses--all these have substantially added to our understanding. And, since Green has inveighed against mere pronouncement, let me comply and make not a pronouncement, but a suggestion of an ocular demonstration. Anyone wanting to compare the extent to which AI has deepened and broadened our understanding in the area of vision (for instance) in a manner that will not just fade like an unfixed photograph, might like to look at two books in the same (Helmholtzian) tradition: Richard Gregory's very good "Eye and brain" from the pre-AI era, and David Marr's "Vision", which is of course based on AI. (I had thought I had better mention Marr, because in the whole of the issue devoted to Green's target article and commentaries, no-one else did.)


Green, C. D. (2000). Is AI the right method for cognitive science? PSYCOLOQUY 11(061).

Green, C. D. (1993). Is AI the right method for cognitive science? Cognoscenti 1: 1-5.

Gregory, R.L. (1966). Eye and brain. London: Weidenfeld & Nicolson.

Marr, D. (1982). Vision. San Francisco: Freeman.
