Stuart Shanker (2000) The Demise of AI. Psycoloquy: 11(072) Ai Cognitive Science (12)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).
Psycoloquy 11(072): The Demise of AI

Commentary on Green on AI-Cognitive-Science

Stuart Shanker
Department of Philosophy
York University
Toronto, Ontario M3J 1P3


Green (1993/2000) draws attention to the importance of the following questions for those who are interested in reassessing and reorienting the cognitive revolution: How could two such dissimilar movements as AI and cognitive psychology ever have joined together? How could the concerns of the cognitive revolution have been so quickly usurped by those of AI? How can cognitive science avoid the risk of sliding back into computational, or some newer version of mechanist, reductionism?


KEYWORDS: artificial intelligence, behaviorism, cognitive science, computationalism, Fodor, functionalism, Searle, Turing Machine, Turing Test.
1. The demise of AI as the paradigm of Cognitive Science has been remarkably swift. As recently as ten years ago, psychology and philosophy journals were filled with debates over the Turing Test and Searle's Chinese Room argument, the nature of consciousness, the success of heuristic programs such as EURISKO and AM, the prospects of Expert Systems, and endless squabbles over the importance of constructing 'toy domains' in pattern-recognition and concept-formation studies. Now, quite suddenly, the most influential figures in AI are abandoning it en masse. Bruner has left it for a form of psychological pluralism (in which transactional and cultural psychology play a leading role); Neisser has become a prominent exponent of ecological psychology; Boden has moved on to connectionism; even Fodor has jumped ship. Soon Simon and Minsky will be left standing alone on the listing deck of AI.

2. Green's (1993/2000) target article could not have come at a more propitious moment. The questions which he raises are both topical and important if we are to come to terms with the AI phenomenon. It is not so much a matter of clarifying the origins of AI; sociologists of knowledge have already made great strides in this area (see Bloomfield 1987). What remains to be established is the psychological significance of AI. Why were the advocates of the cognitive revolution so attracted to the post-computational mechanist revolution that occurred independently? How did the latter usurp the former so quickly? This is an especially puzzling question when one considers Green's point about the behaviorist overtones of AI (see Shanker, in press). What, if any, are the benefits of the countless hours spent on protocol analysis?

3. Significantly, the convergence once displayed by philosophers and psychologists in their attitudes towards the significance of AI is beginning to fragment. The former are still largely concerned with sweeping questions about the nature of psychological explanation. The latter are much more interested in specific issues: e.g., the effect of a computational approach on such problems as how one is able to recognize, in a split second, a face one has not seen in many years, or to come up with the answer to a question after a troubled night's sleep, or how a young child, on being told 'That's a bird', is able to recognize and correctly label a different bird.

4. It is here, in the trenches so to speak, that AI has been downgraded to the status of one of many possible rival theories. What are the reasons for the sudden collapse of AI as THE paradigm of Cognitive Science? Green raises two very different answers to this question. The first is that AI faltered because it was overly ambitious: the goals it set itself were simply beyond its capabilities (rather like Babbage). The second, much more profound answer, is that the foundations of AI are conceptually flawed. Green cites the interesting remark by Fodor that "the 'whole enterprise of GOF AI is ill-founded'. Not because it's got the wrong picture of the mind, however, but because it has a bad methodology for turning that picture into science." But what if, as Green puts it, "Psychology does not, in this sense, know what it is talking about" and, indeed, continues to suffer from "deep conceptual difficulties"?

5. Is AI even the problem here, or is it rather the epistemological framework that led to the miscegenation of cognitive psychology and post-computational mechanism in the first place? Early on in the cognitive revolution, Bruner expressed the hope that "perhaps the new science of programming will help free us from our tendency to force nature to imitate the models we have constructed for her" (Bruner 1960: 23). Was it computationalism that failed here, or was it the prior, 'functional' definition of concepts, which made AI seem so appealing at the time? That is, does the problem lie in the 'limitations' of computational models, or in the fundamental premise that "Good correspondence between a formal model and a process--between theory and observables, for that matter--presupposes that the model will, by appropriate manipulation, yield descriptions (or predictions) of how behavior will occur and will even suggest forms of behavior to look for that have not yet been observed--that are merely possible" (Bruner 1959: 368)?

6. As Green points out, the computer (rightly) disappears from discussions of the significance of soft AI. For, as Ayer once remarked, the most interesting part of AI is not the question of whether computers think but whether thinkers compute. It may well be that, as Green contends, the primary benefit of computational formalization is that it forces the theorist to be utterly explicit about exactly what is being claimed by each premise of the theory. But AI involves so much more than this: it presupposes that the task of Cognitive Science is to discover "a correspondence between a mental operation and some formal model of that operation" (Bruner 1959: 368). One need only think of the work done by Adriaan de Groot or Newell and Simon on the 'pre-conscious processes' taking place 'beneath the threshold of introspection' (de Groot's dubious term) in problem-solving to realize how much more is going on in computational reductionism (see de Groot 1965, Newell and Simon 1961).

7. Thus the real question which Green's paper confronts us with is whether the demise of AI is due to its having overreached itself or to its persisting commitment to Cartesian metaphysics (see Shanker 1992, 1993). Green remarks at one point how "Machines come and machines go but the same problems that dogged Skinner, Hull and Watson, Heidegger, Husserl, and Brentano, Hume, Berkeley and Locke, Kant, Leibniz, and Descartes are with us still."

8. If anything should give us pause before hastening to embrace yet another mechanist paradigm it is this disquieting insight. At the height of their popularity, AI-theorists used to boast of how they would soon resolve all of these long-standing philosophical problems concerning the nature of mind. Now it is time to ponder whether the solution to these philosophical problems lies outside the realm of philosophy, and whether psychology must begin its search for a new foundation (see Shanker, forthcoming).


Bloomfield, B. (1987). Questions in Artificial Intelligence. London: Croom Helm.

Bruner, J. S. (1959). "Inhelder and Piaget's The Growth of Logical Thinking". General Psychology, 50.

Bruner, J. S. (1960). "Individual and Collective Problems in the Study of Thinking". Annals of the New York Academy of Sciences, 91.

De Groot, A. D. (1965). Thought and Choice in Chess. The Hague: Mouton Publishers, 1978.

Green, C. D. (2000). Is AI the Right Method for Cognitive Science? PSYCOLOQUY 11(061).

Newell, A. & Simon, H. A. (1961). "GPS, A Program that Simulates Human Thought". In E. A. Feigenbaum & J. Feldman (Eds.), Computers and Thought. New York, 1963.

Shanker, S. G. (1992). "In Search of Bruner". Language & Communication, 12.

Shanker, S. G. (1993). "Locating Bruner". Language & Communication.

Shanker, S. G. (1994). "Ape Language in a New Light". Language & Communication, 14(1): 59-85.

Shanker, S. G. (1995). "Turing and the Origins of AI". Philosophia Mathematica, 3: 52.
