Christopher D. Green (2000) Empirical Science and Conceptual Analysis Go Hand in Hand. Psycoloquy: 11(071) AI Cognitive Science (11)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).

EMPIRICAL SCIENCE AND CONCEPTUAL ANALYSIS GO HAND IN HAND
Reply to Plate on Green on AI-Cognitive-Science

Christopher D. Green
Department of Psychology,
York University
Toronto, Ontario M3J 1P3
Canada
http://www.yorku.ca/faculty/academic/christo/

christo@yorku.ca

Abstract

Plate evinces a "business-as-usual" attitude common among engineers and other scientists. In the case of cognitive science, however, I do not think it will serve him well. As I attempted to show in my target article, we do not yet really understand the foundation of cognition, much less psychology as a whole, and Plate's casual use of phrases such as "essence of intelligent behavior", without providing a hint of what that "essence" might be, shows how far we are from our goal. Although I do not believe that we should stop the empirical side of cognitive science, it seems clear that there is still a great deal of conceptual analysis left to be done before we will be in a position to make any great strides in cognitive science as a whole.

Keywords

artificial intelligence, behaviorism, cognitive science, computationalism, Fodor, functionalism, Searle, Turing Machine, Turing Test.
1. Some of Plate's (2000) comments lead me to believe that he and I are talking at cross-purposes. The distinction that he tries to draw between AI and cognitive science is not entirely clear to me, but his frequent resort to the phrase "intelligent behavior" makes me suspect that he is only trying to point out that some AI-ists are working to make machines do useful things, and have little interest in whether they do them in the ways humans do. If this is the proper explication of his meaning, then we have no disagreement, and I, as a cognitive scientist, have little interest in their work (but note that I take their use of the word "intelligent" to be metaphorical).

2. For instance, MYCIN (Shortliffe, 1976) was designed to help physicians diagnose and treat infectious diseases, but no one (save, perhaps, McCarthy) thinks for a minute that it is actually cognitive, that it does its diagnostic work in anything like the way humans do. In fact, one of its advantages as a consultative aid to physicians is that it doesn't do things the same way and, thus, is unlikely to make the same errors as a human. Of course, one could argue that computer programs might be authentically cognitive without having the same underlying mechanisms as humans, but such a claim would have to be bolstered by a heck of a lot of conceptual analysis showing the program's performance to be authentically cognitive without being like a human's. Such an analysis has not been forthcoming--the Turing Test notwithstanding. In any case, the AI-ists that I am interested in (and Fodor too) are those who have made explicit cognitive claims for their programs. These include McCarthy, Simon, Newell, Minsky, and a host of others.
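
To make the contrast concrete, here is a minimal sketch of the kind of certainty-factor rule matching that MYCIN performed. The rules, findings, and numbers below are invented for illustration and are not drawn from Shortliffe's actual rule base; the point is only that exhaustively matching symptom sets against hand-coded rules, and pooling the evidence with a simple numerical calculus, bears little resemblance to human clinical judgment.

    # A minimal sketch of MYCIN-style rule matching with certainty factors.
    # The rules and numbers are invented for illustration; MYCIN's real rule
    # base was far larger and used backward chaining.

    def combine_cf(old, new):
        """MYCIN's combination rule for two positive certainty factors."""
        return old + new * (1 - old)

    # Each rule: (findings required, organism concluded, certainty factor).
    RULES = [
        ({"gram_negative", "rod_shaped", "anaerobic"}, "bacteroides", 0.6),
        ({"gram_negative", "rod_shaped"}, "e_coli", 0.4),
    ]

    def diagnose(findings):
        """Fire every rule whose premises are all present; pool evidence."""
        belief = {}
        for premises, organism, cf in RULES:
            if premises <= findings:  # subset test: all premises observed
                belief[organism] = combine_cf(belief.get(organism, 0.0), cf)
        return belief

    print(diagnose({"gram_negative", "rod_shaped", "anaerobic"}))
    # -> {'bacteroides': 0.6, 'e_coli': 0.4}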

3. Plate's "business-as-usual" attitude is, I think, typical of AI-ists. While philosophers worry and fuss about "metaphysical" questions, scientists get on with the business at hand, working out problems in the lab that philosophers have quibbled and quarrelled about for decades, even centuries, at a time. It will probably come as no surprise that I, in large part, reject this view. The philosopher's side of the story is that many of those problems that philosophers have traditionally worried and fussed about come back to haunt scientific research program after scientific research program, and that the history of science shows more than a few pseudo-solutions to "metaphysical" problems that just wouldn't go away. I think that the AI-ists' response to the frame problem (the broad one that philosophers like to talk about, not the narrow one that AI-ists like to talk about) is typical of these.

4. People like Minsky and Schank thought that the way to solve the frame problem was to build "frames" (or "scripts" or "schemas" or "prototypes" or what have you) right into the programs. This was a "hack", pure and simple, born of a "businesslike" attitude toward even the deepest conceptual problems. The trouble with the frame problem, however, is precisely that humans are so good at solving it, and in so many circumstances. A set of standardized "frames" built into the system itself is the last thing that will solve the problem unless, of course, you wanted to "solve" it only for certain situations specified in advance, such as identifying dogs or going to restaurants. But this is no solution at all, because it misses the very essence of the problem.
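
To see why, consider a minimal sketch of a Schank-style script (the slot names below are invented; Schank's actual scripts were far richer). Everything the system can "understand" was enumerated by the programmer in advance, so any unanticipated situation simply falls outside the system's competence.

    # A minimal sketch of a hand-built "script" in the style of Schank.
    # Slot names are invented for illustration.

    RESTAURANT_SCRIPT = {
        "roles": ["customer", "waiter", "cook"],
        "props": ["table", "menu", "food", "bill"],
        "scenes": ["enter", "order", "eat", "pay", "leave"],
    }

    SCRIPTS = {"restaurant": RESTAURANT_SCRIPT}

    def interpret(situation):
        """Retrieve the pre-built script for a situation, if one exists."""
        script = SCRIPTS.get(situation)
        if script is None:
            # The frame problem in miniature: situations the programmer
            # did not anticipate cannot be interpreted at all.
            raise KeyError("no script for situation: " + situation)
        return script["scenes"]

    print(interpret("restaurant"))  # works: it was built in by hand
    try:
        interpret("flood in the kitchen")  # unanticipated situation
    except KeyError as err:
        print(err)  # the script approach has nothing to say here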

5. To return more specifically to Plate's comments: unlike him, I do not believe it to be uncontroversial to claim that "a system whose behavior is indistinguishable from that of people must have captured something of the essence of intelligent behavior" (p. 13). First of all, the claim ignores the problem of defining indistinguishability that I took great pains to discuss in the target article. Second, one cannot establish a claim such as Plate's unless one is in a position to explain (and justify) what the "essence of intelligent behavior" is. And one cannot do this without a fair bit of conceptual analysis.

6. It must be clear, however, that I'm not much more enamoured with the Turing Test than Plate is. I brought it up in the target article only because of Fodor's allusion to it. I don't think, though, that rejecting it undercuts Fodor's general argument at all. If there is another, better criterion of intelligence afoot, then bring it forward and we'll have a look. Stevan Harnad (who, by the way, argues very strongly, contra Plate, that the Turing Test is a scientific criterion of intelligence; 1992) has attempted to replace it with a better criterion that he calls the Total Turing Test (TTT). Under the TTT, the computer program has to be indistinguishable from humans not only in its intellectual powers, but also in its (descriptions of its) qualitative (i.e., sensory, perceptual, affective, emotional, etc.) mental states. If Harnad is right, then the tide is turning against Plate's attempt to narrow the scope of interest to just intellectual processes and to turn attention away "from (other) elements that constitute the essence of being human." It is worth noting that these "other elements" are mental nonetheless, and are therefore within the purview of the cognitive scientist. (Compare Plate's plea with, for instance, Furedy's argument that the failure of AI to account for "other elements" falsifies CF outright.)

7. Finally (again), there is the problem of defining (or at least adequately characterizing) psychological entities, such as thoughts. Plate says that we can define many of the ones we need already, but his examples are stimuli, which are not psychological, and behavior, by which I assume he means sheer bodily motion, which is not psychological either. The problem is getting some consensus on what comes in between. In any case, as I have said several times in these replies (no doubt because I was not clear enough in my initial statements), I do not think that we have to "wait until philosophers have it all worked out by themselves." I think that the conceptual problems in cognitive science are everyone's problems, and everyone should work on solving them. But one must recognize them first, and this in itself has not been an easy task.

REFERENCES

Green, C.D. (2000) Is AI the Right Method for Cognitive Science? PSYCOLOQUY 11(061) ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/2000.volume.11/psyc.00.11.061.ai-cognitive-science.1.green http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?11.061

Harnad, S. (1992) The Turing test is not a trick: Turing indistinguishability is a scientific criterion. SIGART Bulletin 3(4): 9-10. http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad92.turing.html

Plate, T. (2000) Caution: Philosophers at Work. PSYCOLOQUY 11(070) ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/2000.volume.11/psyc.00.11.070.ai-cognitive-science.10.plate http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?11.070

Shortliffe, E.H. (1976) Computer-Based Medical Consultations: MYCIN. New York: American Elsevier.

