Christopher D. Green (2000) Did Systems Theory Show Instantiations and Simulations to be the Same? Psycoloquy: 11(075) AI Cognitive Science (15)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).
Psycoloquy 11(075): Did Systems Theory Show Instantiations and Simulations to be the Same?

Reply to Vicente & Burns on Green on AI-Cognitive-Science

Christopher D. Green
Department of Psychology,
York University
Toronto, Ontario M3J 1P3


Vicente & Burns claim to have shown, by the use of systems theory, that instantiations and simulations are the same thing. On the contrary, I think they have shown just how they differ. They also criticize my use of the term "normal action" for its lack of precision. While I agree it is not precise, one should be wary of claiming too much from learning how to explain action only in artificially constrained domains. We seem to agree, however, that AI is only one among many methods to be used by the cognitive scientist.


KEYWORDS: artificial intelligence, behaviorism, cognitive science, computationalism, Fodor, functionalism, Searle, Turing Machine, Turing Test.
1. Vicente & Burns's (2000) commentary gets my personal nod as the most interesting. These commentators have done an outstanding job of bringing the insights of another discipline--systems theory--to bear on the problems of psychology. This is what cognitive science, at its best, is all about. That said, I have some quibbles nonetheless. "Surely," they argue, "no one would claim that an AI program IS a brain..." (para. 3). Surely this is right (I think; it is always hard to tell what the Churchlands and Sejnowski are going to say next). But just as surely, there are those who would claim that an AI program is a MIND, and this is really where the issue lies. Still, if the difference between "weak" AI and "strong" AI can be summed up as the difference between seeking product models and process models, and this allows the distinction to be profitably linked up to systems theory in general, so much the better. I fail to understand, however, why Vicente and Burns take this to show that "instantiation" and "simulation" mean the same thing. It seems to me that they have, on the contrary, provided an elegant explication of the distinction.

2. Another intriguing comment that they make is that "'certain constraints of normal action' are not the same as the constraints that usually guide scientific investigation" (para. 4). I agree, and so much the worse for AI as a science if this is the case. I concede that the Disneyland-physics relation is not a well-defined one. The point of my paper was to investigate that relation, and its relevance to the AI-cognitive science relation. I like to think that I made at least some progress in that direction, though I do not claim it to be of mathematical precision. I appreciate their efforts to clarify the matter further.

3. Finally, though Vicente and Burns arrive at it by a very different route from mine, I am in basic accord with their conclusion that "AI ALONE cannot be the method for cognitive science" (para. 8, italics added). Though perhaps I have overstated my case somewhat in the hopes of getting people's attention, it is not my intention that everyone should roll closed their keyboard drawers and put forehead to fist for the remainder of their days. I simply think that there are some basic problems in psychology that programming is not equipped to solve, and that some--perhaps much--conceptual analysis is needed at the outset.


Green, C.D. (2000) Is AI the Right Method for Cognitive Science? PSYCOLOQUY 11(061)

Vicente, K.J. & Burns, C.M. (2000) Overcoming the conceptual muddle: A little help from systems theory. PSYCOLOQUY 11(074)
