A. Boyd Blackburn (2000) Computational Functionalism. Psycoloquy: 11(062) AI Cognitive Science (2)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).
Psycoloquy 11(062): Computational Functionalism

Commentary on Green on AI-Cognitive-Science

A. Boyd Blackburn
Department of Psychology
University of Toronto
Toronto, Ontario M5S 3G3


Green argues that we should reject AI on philosophical grounds, but his arguments are better understood as attacking computational functionalism. Thus, he fails to show how Fodor's support of computational functionalism and rejection of AI are compatible positions to hold. Despite such unresolved foundational issues, AI remains a superior methodology for developing psychological theories because of the limitations of traditional verbal theories.


artificial intelligence, behaviorism, cognitive science, computationalism, Fodor, functionalism, Searle, Turing Machine, Turing Test.
1. The central question Green (1993/2000) addresses is whether you can buy into the Mind As Computer theory of mentalism (as Fodor does with his theory of computational functionalism) and then reject, as Fodor also does, AI as an appropriate methodology for developing psychological theories. I will argue in this response that Green also chooses to reject AI, but for reasons that are best understood as criticisms of computational functionalism. Thus, he has failed to show that Fodor's two positions are compatible. I will give two arguments for why they are incompatible.

2. In rejecting the AI approach to cognitive science, Green makes use of Fodor's Disneyland analogy: Disneyland is an "artificial" simulation of the "real" world in the same way that an AI program is an artificial simulation of real cognition. First, if Green really accepted Mind As Computer he would refrain from drawing such a distinction between the artificial and the real. Both an AI program and cognition as it is accomplished by the mind are instances of the same category: general-purpose information-processing systems. The appropriate distinction is not between the artificial and the real, but between the simple and the complex. Second, when Green later argues that the AI research program is in deep trouble because of a lack of consensus even about what psychology should be talking about, he is saying that there are deep foundational problems in psychology. Granting for now that he is right, such an argument is more appropriately construed as an attack on Mind As Computer than on AI as a method for pursuing Mind As Computer. Thus, Green has failed to show how believing in Mind As Computer and rejecting the AI method are compatible positions to hold.

3. Here are two reasons why Mind As Computer prescribes AI as the method of choice for cognitive science:

    A. In AI, the theory is the program. This contrasts with
    traditional verbal theories. Verbal theorists who have taken Mind
    As Computer to heart have avoided writing programs by adopting
    (usually implicitly, but sometimes explicitly) the idea that they
    will invoke no processes or structures that cannot in principle
    be programmed. Such verbal theories suffer from a lack of
    precision and from hidden assumptions. There is also the
    potential for theorists to run amok with the "programmable in
    principle" idea, filling their theories with hidden homunculi. AI
    programs guard against this. Rejecting AI as the method for
    cognitive science leaves us with these flawed verbal theories.

    B. AI programs are also superior to verbal theories in that they
    provide specific predictions, which allow us to determine
    empirically which theories are better. A program must be modified
    if its predictions do not match the data, and it can then be
    tested anew. Verbal theories, by contrast, offer relatively crude
    predictions, and when the data do not match the predictions,
    theorists most often resort to reinterpreting what they
    originally meant by their theory; the theory itself remains
    substantially intact. As theories become more complex, this issue
    of prediction becomes more salient. No matter how complex an AI
    program is, it will still yield definite predictions; the more
    complex a verbal theory is, the more tenuous its predictions
    become.
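The contrast in point B can be illustrated with a toy example of my own (it appears in neither Blackburn's nor Green's text): a minimal sketch, assuming a hypothetical Sternberg-style memory-scanning task, of a cognitive theory expressed as a program. Because every assumption is written into the code, the theory commits to exact numbers rather than the vague predictions of a verbal theory. The parameter values are illustrative, not empirical estimates.

```python
# Toy illustration (hypothetical, not from the target article): a
# cognitive "theory" expressed as a program. A serial-scan model of
# memory search predicts response time as a fixed encoding/response
# cost plus a per-item comparison cost. Every assumption (serial,
# exhaustive scan; constant costs) is explicit, so the predictions
# are definite and directly falsifiable against reaction-time data.

ENCODING_MS = 400   # assumed fixed cost: encode probe and respond
COMPARE_MS = 38     # assumed cost per memory comparison

def predicted_rt(set_size: int) -> int:
    """Predicted response time (ms) for an exhaustive serial scan."""
    return ENCODING_MS + COMPARE_MS * set_size

# The program yields one exact prediction per condition:
predictions = {n: predicted_rt(n) for n in (1, 2, 4, 6)}
print(predictions)
```

If observed reaction times diverge from these numbers, the program must be changed in some specific way (a different scan rule, different costs) and retested, which is just the modify-and-test cycle point B describes.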

4. In conclusion, there do appear to be deep foundational problems in psychology which will require rational analysis to solve, and perhaps these problems are unsolvable within the Mind As Computer framework. If so, then AI researchers can expect to make little progress toward understanding the mind. But let the evidence of how much progress is made be empirical. If AI programs continue to be developed that offer better predictions of human behavior, we can be satisfied that cognitive science is progressing despite the foundational problems, and there is no need for all of us to close down the labs and become philosophers.


Green, C.D. (2000) Is AI the Right Method for Cognitive Science? PSYCOLOQUY 11(061) ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/2000.volume.11/psyc.00.11.061.ai-cognitive-science.1.green http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?11.061
