Christopher D. Green (2000) Engineering (Reverse or Otherwise) Is Not All of Science. Psycoloquy: 11(079) AI Cognitive Science (19)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).
Psycoloquy 11(079): Engineering (Reverse or Otherwise) Is Not All of Science

Reply to Harnad on Green on AI-Cognitive-Science

Christopher D. Green
Department of Psychology,
York University
Toronto, Ontario M3J 1P3


Harnad argues that cognitive science is just reverse engineering. I argue for a broader conception of the field: finding out what mind is, not just how to build one kind of mind. I think many of our differences stem from that fundamental one.


artificial intelligence, behaviorism, cognitive science, computationalism, Fodor, functionalism, Searle, Turing Machine, Turing Test.
    REPRINT OF: Green, C. D. (1994). Night of the living Disney II.
    Cognoscenti:  Bulletin of the Toronto Cognitive Science Society,
    no. 2, 40-41.

1. First of all, I'd like to thank all my commentators once again. An extended version of my original article (Green 1993/2000) appeared in 1996 with many revisions and improvements that would not have been realized were it not for the thoughtful work of my critics. Now on to the hard business of responding to Harnad (2000).

2. Harnad claims that cognitive scientists need not do ontology -- that this is the domain of the basic sciences and philosophy -- and that cognitive science is just reverse engineering. I think that, in the final analysis, this comes down to a terminological dispute. There is certainly some reverse engineering done under the auspices of cognitive science, but to say that this is ALL that is done is to reduce cognitive science to cognitive engineering. I think this can only be maintained if one is willing to reduce physics to physical engineering as well, and I take it that no one is. Cognitive SCIENCE just is the discipline in which we try to figure out what minds are (but see the paragraph below), and how they work. If this requires some basic science and philosophy, then so be it. My ontological point was somewhat sociological in nature: it seems unlikely that any AI program will resolve disputes grounded in beliefs about what sort of thing, if any THING at all, the mind is.

3. Harnad also puts an interesting limitation on cognitive science to which I would have objected at the time I wrote the original paper, but that I am now coming to accept. I don't think that this is good news for cognitive science, however. He says "never mind mental states; they're just something we HOPE to capture" (para. 9). Thinking, as I once did, that cognitive science should, at least ultimately, aim to explain the WHOLE of mental function, I am tempted to respond that mental states are the very core of cognitive science; that merely to produce "performance capacities totally indistinguishable from our own" (para. 7), as Harnad puts it, without explicitly figuring out how qualia fit into the equation, is just so much "hacking."

4. I have just written a paper (Green 1996a) in which I argue that the rise of cognitive science was not, contrary to popular opinion, a whole-hog return to mentalism after decades in the desert of behaviorism, but rather an attempt to bring back into psychology only those aspects of the mental that clearly lend themselves to logical analysis, viz., those of which it can be said that they are true or false. This basically includes beliefs alone. Desires were allowed to come along for the ride only because they have "felicity" conditions (as J. L. Austin put it) that allow them to be subjected to a very similar sort of analysis. What was implicitly DISallowed, however, were those aspects of the mental that had caused the problems leading to the behaviorist revolution in the first place: consciousness, qualia, feeling, emotion, etc. Many have since tried to sneak them back in, often "cleansed" of their essentially subjective character, but they remain strangely mysterious and highly resistant to scientific understanding.

5. This does not mean that consciousness etc. are altogether "out of bounds" for the purposes of research. Rather, it means that the agenda of cognitive science is a good deal smaller, and correspondingly less interesting, than we might have once thought. To return to the words of Fodor (1992, p. 5), "Nobody has the slightest idea how anything material could be conscious. Nobody even knows what it would be like to have the slightest idea about how anything material could be conscious." Learning how subjective experience is produced is at least as interesting to me as, say, what sort of computer program is able to replicate the information processing capacities of the visual system. If cognitive science can't help me do the former, so much the worse for cognitive science.

6. To return to Harnad's commentary, I think it is important to notice how many cornerstones of traditional cognitive science he allows to crumble in his effort to protect AI. He explicitly allows that "COMPUTATIONALISM... has been shown to be supremely unlikely" (para. 5). If computationalism falls, then AI, at least as traditionally conceived, would seem to have some serious explaining to do. Among the various aspects of computationalism that he allows to fall is implementation-independence: the idea that minds can be moved around from hardware to hardware. According to Harnad, they are stuck, as it were, in the hardware in which they are "grounded", to use his term. So much for Minsky's highly touted solution to the problem of mortality (viz., "backing up" one's mind for insertion in another body at a later date)!


Fodor, J. A. (1992, July) The big idea: Can there be a science of mind? Times Literary Supplement, pp. 5-7.

Green, C.D. (2000) Is AI the Right Method for Cognitive Science? PSYCOLOQUY 11(061)

Green, C.D. (1996a) Where did the word 'cognitive' come from anyway? Canadian Psychology 37(1): 31-39.

Green, C.D. (1996b) Fodor, functions, physics, and Fantasyland: Is AI a Mickey Mouse discipline? Journal of Experimental and Theoretical Artificial Intelligence 8(1): 95-106.

Harnad, S. (2000) The convergence argument in mind-modelling: Scaling up from Toyland to the Total Turing Test. PSYCOLOQUY 11(078)
