Christopher D. Green (2000) Different Realisms; Different Instrumentalisms. Psycoloquy: 11(067) AI Cognitive Science (7)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).

DIFFERENT REALISMS; DIFFERENT INSTRUMENTALISMS
Reply to Furedy on Green on AI-Cognitive-Science

Christopher D. Green
Department of Psychology,
York University
Toronto, Ontario M3J 1P3
Canada
http://www.yorku.ca/faculty/academic/christo/

christo@yorku.ca

Abstract

Furedy argues that computational cognitive science could not possibly be realistic in its commitments, and that its underlying instrumentalism is the source of its problems as a scientific venture. I reply that he and I are using the terms "realism" and "instrumentalism" in quite different ways. I try to disentangle the misunderstanding, and show the sense in which I think cognitive science can be a science with realistic commitments.

Keywords

artificial intelligence, behaviorism, cognitive science, computationalism, Fodor, functionalism, Searle, Turing Machine, Turing Test.
1. Furedy (2000) takes issue with my claim that a significant point in favour of computational functionalism (CF) is that it lends itself to realism about the mental, whereas behaviorism lends itself to instrumentalism or eliminativism. His argument is summed up in his final sentence, where he claims that "it is not the 'behaviorist', but the instrumentalist tactic which is the root of the problem. And it is not simply AI... but the instrumentalist approach that is not the right method not just for cognitive science,... but also for the science of psychology" (p. 10).

2. I find this critique problematic on a couple of counts. The first is that I think he and I are using the terminology of "realism" and "instrumentalism" in quite different ways. The second is that we have drawn our basic claims from fundamentally different bodies of literature, so there is a double chance of our simply talking past each other. Still, I will give it a try.

3. Furedy begins by asserting that "a realist approach would require a computer theory and not just a model of the mind (in an instrumentalist view, there is no clear distinction between theories and models, since the true/false category has been given up)" (para. 6). The claim seems to be that "theories" entail ontological commitments, whereas mere "models" do not. I do not use the terms this way, but there is no reason not to; the distinction is reasonable enough. But the further claim that truth value goes out the window with the adoption of instrumentalism follows only if one holds a very specific view of what counts as truth. Instrumentalists do not give up on truth per se; they simply have a different criterion of truth: the theory that gives the best predictions is the true one. Realists have another criterion, viz., that in addition the theoretical terms used must refer to real entities. Furedy seems to assume the realist criterion, and then to claim that non-realists have given up truth altogether. Presumably he would also dismiss coherence theories of truth as not being theories of truth at all. Note that the issue here is not whether they are TRUE theories, but whether they are theories at all. This is a tendentious move, and in the final analysis it simply begs the question. We already know that realism and instrumentalism are incompatible; the argument is not advanced by pointing out that, in particular, realists do not accept the instrumentalist criterion for truth.

4. Furedy goes on to make the astounding claim that no realist computational theory of mind "would ever be presented seriously, because it is patently false" (para. 1). He can't mean this to be taken literally. "Strong" AI just IS the realist view of computational psychology, and its falsity is not patent to the tens of thousands of researchers committed to that position today. The evidence Furedy marshals for his view is that "mentation is at least partly influenced by affective and conative factors" and that AI "ignores all cognition that is not computational", but this is not sufficient. To begin with, there are many attempts within the AI literature to give computational accounts of conation--I take at least some of the current research in robotics to be just such attempts--and there are even computational theories of emotion currently on offer (e.g., by Keith Oatley and Philip Johnson-Laird). That such accounts are not currently to everyone's liking is not important here; what matters is that the very things Furedy believes AI leaves out are currently the subjects of intense scrutiny by some members of the computationalist community. His example of the chess program falls short on the same count. It is precisely the project of AI to give a computational account of the perceptual and intuitive aspects of chess playing (or to show, our pre-theoretic beliefs notwithstanding, that such aspects are not needed).

5. Now, of course, there are many who believe (Furedy presumably among them) that the project of giving computational accounts of intuition and affect is doomed to failure. Interestingly, Fodor himself has expressed doubts that computationalism will ever have much to say about the related problems posed by qualia and consciousness (and, as an existence proof contra the earlier claim that computationalism can't be realist, Fodor is just such a realist, par excellence). This does not doom the whole project; it just restricts it to a theory of cognition, rather than one of the whole of psychology. There is no a priori reason to believe that the processes that mediate thought are the same as those that give rise to feelings.

6. The impression that Furedy and I are just talking past each other, however, becomes most acute when he says that "the shift from (S-R) behaviorism to the (S-S) cognitive approach in psychology was not, primarily, in terms of the relevant evidence (which are realist grounds), but rather in terms of theoretical fruitfulness (which are instrumentalist grounds)" (para. 2). I don't accept this distinction as it stands, and I did not intend it when I made the original remark. Both realists and instrumentalists (on my account of what counts as each of these) require both evidence and theoretical fruitfulness. Fruitfulness is just a way of saying that the theory has implications, that it can be used to generate predictions. Evidence is what is used to decide whether such predictions are true. The distinction between realism and instrumentalism that I was putting forward is between those who believe that their theoretical entities (i.e., "intervening variables", in the old behaviorist language) have referents in the real world (i.e., "are true", to put it simply, but not entirely accurately), and those who are agnostic, or negatively inclined, on such matters, but accept them anyway on account of their "usefulness", or some such expedient.

7. Behaviorism can't, by its very definition, be overtly realist about mental states and processes. Some (metaphysical behaviorists) were outright eliminativists--they believed that all there is to behavior is, well, behavior. Others (methodological behaviorists) believed that mental states could be postulated to the degree that they helped make predictions about behavior, so long as they were suitably anchored to "behavioral" or "operational" definitions (see Green, 1992, on the failure of this approach). The uncomfortable resort, among some behaviorists, to terms such as "incorporeal" and "insubstantial", which Furedy cites, demonstrates the bind in which behaviorists found themselves. Did they really mean to claim, their much-lauded materialism in tow, that certain "bodiless" entities were to be included in their ontology? Not in the presence of philosophers, please!

8. Tolman is a particularly interesting case in point: he could not really evade the problem either, try as he might. Early on (1932), he wrote that his mental postulates were not to be considered real, but only "inference tickets" (Ryle's term) on behavior. The problem was that some of these intervening variables (such as "behavior adjustments") did not have the "explicit and univocal linkages" to behavior that the theory required; they were just covert concessions to mentalism. Finally, at the very end of his career, Tolman (1959, p. 94) conceded that his mental postulates had been drawn from his own phenomenology.

9. So, to return to the main issue: "strong" AI not only lends itself to realism about mental states, but comes very close to strictly implying it. If you don't like that particular brand of realism (viz., that mental states and processes are real, and that they are just a species of computation), so be it. Don't be a member of that club.

REFERENCES

Furedy, J.J. (2000) Is instrumentalism the right method for psychological science? PSYCOLOQUY 11(066) ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/2000.volume.11/psyc.00.11.066.ai-cognitive-science.6.furedy http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?11.066

Green, C.D. (2000) Is AI the right method for cognitive science? PSYCOLOQUY 11(061) ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/2000.volume.11/psyc.00.11.061.ai-cognitive-science.1.green http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?11.061

Green, C.D. (1992). Of immortal mythological beasts: Operationism in psychology. Theory and Psychology, 2, 291-320.

Tolman, E. C. (1932). Purposive behavior in animals and men. New York: Appleton-Century.

Tolman, E. C. (1959). Principles of purposive behavior. In S. Koch (Ed.), Psychology: A study of a science (Vol. 2, pp. 92-157). New York: McGraw-Hill.

