Christopher D. Green (2000) Computational Theories are Verbal Theories. Psycoloquy: 11(063) AI Cognitive Science (3)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).
Psycoloquy 11(063): Computational Theories are Verbal Theories

COMPUTATIONAL THEORIES ARE VERBAL THEORIES
Reply to Blackburn on Green on AI-Cognitive-Science

Christopher D. Green
Department of Psychology,
York University
Toronto, Ontario M3J 1P3
Canada
http://www.yorku.ca/faculty/academic/christo/

christo@yorku.ca

Abstract

Blackburn (2000) argues that AI is a "superior methodology" relative to traditional verbal theories of psychological processes. I agree that there are many advantages to couching one's theories in computational terms, but they are verbal nonetheless. Writing them as programs may FORCE one to be precise, but this precision could be achieved in the absence of computers just as well.

Keywords

artificial intelligence, behaviorism, cognitive science, computationalism, Fodor, functionalism, Searle, Turing Machine, Turing Test.
    REPRINT OF: Green, C. D. (1993). Ontology rules! (But not
    absolutely). COGNOSCENTI: Bulletin of the Toronto Cognitive Science
    Society 1: 21-28.

I. INTRODUCTION

1. First of all, I'd like to thank all those who took the time and effort to comment on the target article (Green 1993/2000). This sort of cross-talk is precisely what is needed in cognitive science. I found all the commentaries interesting, and even exciting. I can imagine little else that could provoke me to think and re-think the opinions I expressed in the target article, in as many different ways, as the process of open peer review has. You can't dismiss criticisms that come from outside your home discipline, as you might sometimes be wont to do, simply because you don't fully comprehend them or their import. You must come to grips with them as best you can, and try to forge a common vocabulary with each critic. In short, I have found this a very valuable and rewarding process. I can only hope that my critics can say as much about me after reading my replies.

2. The problem of ontology seems to haunt this paper. Almost everyone took an explicit or implicit swipe at the implication I seemed to make near the end of the target article that we must finish our ontology before we start our science. I'll leave the details to my responses to individual critics, but I have one general point to make with regard to this issue. It is, of course, ridiculous to claim that science cannot begin until ontology is finished. The entire history of science shouts out against such a claim. I still believe, however (and this is all I really meant to claim), that there must be some degree of consensus on matters ontological before a more than nominally unified psychology can begin to take shape. Without such consensus, the result would be a proliferation of divergent schools of thought, each pursuing its own goal, similar to the pre-paradigmatic state Kuhn describes in "The Structure of Scientific Revolutions" (1970).

3. Moreover, whereas in, say, the study of electricity, there were only four or five schools of thought, in psychology there are scores. Although, as I write below, I am not entirely in agreement with Fodor's call to AI researchers to "down tools and become philosophers", I feel some sympathy with the underlying motivation, born of a frustration with some cognitive scientists' habit of making grand pronouncements on foundational issues in cognitive science without so much as an argument in support of their position, or one so weak that the breath of a philosophical undergraduate could blow it down (see Minsky, 1993, donning the literary form of Plato, for particularly silly explications of consciousness and free will). If Fodor's call, minus the hyperbole, just means "yell less, think more", then I am on his side.

II. BLACKBURN

4. Blackburn (2000) begins by saying that if I "really accepted the Mind As Computer, [I] would refrain from making ... a distinction between the artificial and the real" (para. 2). First off, I never claimed to accept "strong" AI, or even computational functionalism (CF), for that matter. But it doesn't matter. The argument is about whether one who IS committed to CF as an ontology is also committed to "strong" AI as a methodology. If one were unconscionably strict about the matter, the answer would, trivially, be "No"; no ontology commits one to any methodology in particular, although some methodologies might be ruled out. For instance, if one believes that there are no such things as thoughts, then presumably one is blocked from employing rationalist means of investigation (the Churchlands notwithstanding), though exactly what method should be used is an open matter.

5. More to the point, however, the real question is, is it REASONABLE or even ADVISABLE to use "strong" AI to investigate matters of mind, if one is a computational functionalist? Even if the answer were "Yes", there is no reason to believe that this blocks questions of the cognitive reality of given computational systems. To deny this is to believe, apparently with the likes of John McCarthy, that ALL computations are instances of cognition, including those instantiated in word processors, calculators, thermostats, and (if John Searle (1992) and Hilary Putnam (1988) are to be believed, respectively) even walls and rocks. Although McCarthy is deadly serious, I have always taken this to be a REDUCTIO on his version of "strong" AI. But things are not so dark for "strong" AI-ists in general. A "strong" AI-ist could say, just as most actually do, that only CERTAIN types of computation instantiate cognition, but not all. Those species of computation that can produce cognition-like behavior, but that are not among those that instantiate actual cognition, might be appropriately labelled "artificial", even by "strong" AI-ists. Therefore, I reject Blackburn's claim that the distinction between "real" and "artificial" intelligence is not open to the "strong" AI-ist.

6. In any case, no matter how you cut it, the distinction between the real and the artificial is conceptually independent of that between the simple and the complex. It may well turn out that there are certain cases of simple computation that are perfectly good instances of real cognition, and no one (except perhaps McCarthy) would deny that there are cases of complex computation that are not instances of cognition. Even if it were to turn out that all cases of cognition are computationally complex, this connection would be merely contingent, and would therefore not support Blackburn's claim that the simple-complex distinction can somehow explicate and replace the artificial-real distinction.

7. Blackburn goes on to contrast computer programs, considered as theories of cognition, with "traditional verbal theories" (para. 3). Verbal theories, he says, suffer from "a lack of precision and [from] hidden assumptions," as well as from the sin of "offer[ing] relatively crude predictions" (para. 3). Although I would not for a minute doubt that all manner of ill-conceived, poorly worked-out theories have been offered up to science at one time or another, I reject his distinction. Computer programs, qua theories of mind, just ARE a species of verbal theory. They are nothing but sets of propositions, albeit propositions drawn from a constrained set of allowable forms (in comparison with the set available in natural language). These constraints do, as Blackburn claims, make such theories more precise (not to be confused with being more CORRECT). Similar constraints on mathematical theories (such as Newton's Laws of Motion) make them more precise, but no less "verbal" for all that. Mathematical equations are, after all, just propositions. The particularly interesting thing about computational theories is that we have designed a machine that will automatically generate their implications, a task that has traditionally fallen to the minds (sometimes none too sharp) of theoreticians.
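
To make this point concrete, consider a minimal sketch of my own (it appears in neither the target article nor Blackburn's commentary; every name, number, and functional form in it is a hypothetical assumption, written in Python purely for illustration). The vague verbal claim "rehearsal improves recall" becomes a precise theory only once one must commit to a baseline, a per-rehearsal gain, a ceiling, and a functional form; and once the claim is a program, the machine generates its quantitative implications automatically:

    # Illustrative only: a "verbal" claim -- "rehearsal improves recall" --
    # forced into precision by being written as a program. All parameter
    # values and functional forms are hypothetical, not anyone's actual theory.

    def recall_probability(rehearsals, base=0.2, gain=0.15, ceiling=0.95):
        """Predicted probability of recalling an item after n rehearsals.

        Writing the claim this way forces choices the verbal version hides:
        a baseline, a per-rehearsal gain, a ceiling, and a linear form.
        """
        return min(base + gain * rehearsals, ceiling)

    # The machine, not the theorist, derives the theory's implications.
    for n in range(6):
        print(f"{n} rehearsals -> predicted recall {recall_probability(n):.2f}")

Note that nothing in the program settles whether the theory is CORRECT; it only makes the theory's commitments, and hence its predictions, explicit.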

8. As I wrote in the target article, however, the advantages of computational theories may be outweighed by other considerations. Computers cannot be expected to answer our ontological questions. The very adoption of "strong" AI assumes a particular answer--or at least a particular class of answers--but that does not count as an argument for that view. The justification for adopting CF will have to be extra-computational (as it traditionally has been for people like Putnam and Fodor). Even once we are "inside" CF, so to speak, and trying to decide which computational architectures are the "right" ones for psychology, the simple expedient of seeing which program(s) work(s) best is an inadequate criterion, unless we wish to slip into a very extreme pragmatic theory of truth (viz., whatever "works" is true). Even William James, the pragmatist par excellence, did not go this far (cf., e.g., Robinson, 1993).

9. Blackburn concludes with the claim: "if AI programs continue to be developed which offer better predictions of human behavior, we can be satisfied that cognitive science is progressing despite the foundational problems" (para. 4). For my part, I find this entirely too sanguine a view. The history of science is littered with the bodies of research programs that seemed "progressive" at the time, but turned out to rest on false ontological assumptions. If CF is wrong, I, for one, would like to know as soon as possible. The way to show that it is wrong, like the way to justify it, falls primarily outside the bounds of computation itself. This does not mean, as Blackburn imputes to me, that we should "close down the labs"--a theme to which I will repeatedly return in these comments. It means only that the answers to some of our questions cannot be found in the labs (qua labs, anyway; thinking can be done almost anywhere!).

REFERENCES

Blackburn, A.B. (2000). Computational Functionalism. PSYCOLOQUY 11(062). ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/2000.volume.11/psyc.00.11.062.ai-cognitive-science.2.blackburn http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?11.062

Green, C.D. (2000). Is AI the Right Method for Cognitive Science? PSYCOLOQUY 11(061). ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/2000.volume.11/psyc.00.11.061.ai-cognitive-science.1.green http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?11.061

Kuhn, T. S. (1970). The structure of scientific revolutions (2nd ed.). Chicago: University of Chicago Press.

Minsky, M. (1993, July). Alienable rights. Discover, 24-26.

Putnam, H. (1988). Representation and reality. Cambridge, MA: MIT Press.

Robinson, D. N. (1993). Is there a Jamesian tradition in psychology? American Psychologist, 48, 638-643.

Searle, J. R. (1992). The rediscovery of mind. Cambridge, MA: MIT Press.

