Matthias Scheutz (2000) AI as a Method? Psycoloquy: 11(097) AI Cognitive Science (24)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).
Psycoloquy 11(097): AI as a Method?

AI AS A METHOD?
Commentary on Green on AI-Cognitive-Science

Matthias Scheutz
School of Computer Science,
University of Birmingham
Edgbaston
Birmingham, B15 2TT
UK
http://www.cs.bham.ac.uk/~mxs/

mxs@cs.bham.ac.uk

Abstract

In his target article "Is AI the right method for cognitive science?" Green (2000) wants to establish that results in AI have little or no explanatory value for psychology and cognitive science, because AI attempts to "simulate something that is not, at present, at all well understood". While Green is right that the foundations of psychology are still insufficiently worked out, there is no reason for his pessimism, which rests on a misconception of AI. Properly understood, AI can be seen to contribute to the clarification of foundational issues in psychology and cognitive science.

Keywords

artificial intelligence, behaviorism, cognitive science, computationalism, Fodor, functionalism, Searle, Turing Machine, Turing Test.
1. Green's (2000) major qualms with AI can be found in the last two paragraphs of his target article, where he claims that until the "deep conceptual difficulties of psychology" are sorted out there will be "little hope that a machine is going to come along that will either solve or dissolve them". This is so because "each program will be dedicated, either explicitly or implicitly, to a certain set of basic psychological entities, and that choice, more than its behavior, will determine who buys in and who sells off." In the end, it is not the behavior, but what brings it about, that is going to be of explanatory value, and in this respect AI will not be of any help to psychology, according to Green, since AI programs are essentially built on poorly understood psychological entities. Green concludes that the chances of AI eventually succeeding are "of about the same order as the odds of a toddler rediscovering the architectural principles governing the dome while playing with building blocks". I believe that this pessimism is unfounded and rests largely on a distorted conception of AI.

2. To many, AI as a research discipline is a very heterogeneous field, spanning from engineering to cognitive modeling, with the overall goal of understanding and producing artifacts that exhibit intelligent behavior. Since researchers in AI have different interests and subgoals, it is crucial to specify which group in AI one is talking about. Take the engineering side of AI, for example. Its exponents do not care whether their models are biologically and/or cognitively plausible; all that matters to them is that their programs perform the intended tasks according to specification. While it is true that there is a lot of AI research that attempts to "engineer [machines] so that they will behave in certain ways we know truly cognitive entities behave" (Green), it neither follows that these machines are automatically claimed to be truly cognitive, nor does it follow that they are not truly cognitive (e.g., just because they might be physically different from natural cognitive systems), unless one makes additional assumptions such as "weak AI". It is important to realize that AI as such is not committed to constructing (only) machines that implement cognitive functions exactly the way humans do (at the relevant level) -- this is a misunderstanding of AI cooked up and largely perpetuated by philosophers. To say, then, that AI will not be a solution to the problems of psychology, as Green claims, is acceptable only if it is understood at the same time that AI never set out to solve the foundational problems of psychology in the first place!

3. Interestingly, Green seems to assume that programs are theories of some sort, as implied by his comment that "computer programs are not theories of cognition in the same way that the laws of physics are theories about the world." Programs are NOT theories -- what kind of theory would the factorial program be? They are descriptions of particular kinds of processes (computational ones), and as such they are formal, but that is about the only similarity they bear to (physical or psychological) theories. Neither are any of the notions of "provability", "truth", "reference", etc., relevant to program descriptions in the way they are to theories; nor do programs make claims about anything, unless one wants to view the specification of an algorithm as making a claim about what will happen at what time (this is studied by theoretical computer science under the name "semantics of programming languages"). But even then, these are not claims about the entities, and their properties, on which the algorithm operates. While AI researchers might have a particular psychological theory in mind when they write programs, it is not the program that is the theory, although the program might implement some aspects of the theory. In their attempt to understand and build intelligent systems, AI researchers may use established psychological theories, may rely on home-made psychologies, or may not use any theory at all when they write programs that exhibit a particular behavior. At best, a program might inspire the construction of a theory.
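
To make the factorial example concrete, here is a minimal sketch (in Python, chosen purely for illustration; the wording of the comments is mine, not the target article's): it is a complete description of a computational process, yet it asserts nothing about the world, and notions like truth or reference do not apply to it the way they would to a theory.

    def factorial(n: int) -> int:
        # A description of a computational process: repeatedly multiply
        # an accumulator by 2, 3, ..., n.
        result = 1
        for i in range(2, n + 1):
            result *= i
        return result

    print(factorial(5))  # prints 120; the program "claims" nothing beyond this behavior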

4. There seem to be two essentially different ways to evaluate AI programs: one is a behavioral evaluation -- does the program get the behavior right? -- in which case no claims are made about cognitive plausibility or about what kind of entities the virtual machine implemented by the program might be. The other is to look inside the "black box" and see whether the computational/functional states that bring about the changes in output correspond to similar states in natural cognitive systems, i.e., whether the functional architecture they describe and implement is the same as the one realized in natural cognitive systems. The latter is the project of cognitive science and admittedly a very difficult project, as it requires that we have a detailed functional description of the organism we are interested in (mere behavioral evaluation will not be sufficient here).
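
A purely illustrative sketch of this distinction (the Fibonacci routines below are a toy example of my own, not anything from the target article): the two functions are behaviorally equivalent, so a behavioral evaluation cannot tell them apart; but their internal organization -- deeply nested recursive calls versus two running totals -- differs radically, which is exactly what an evaluation of the implemented functional architecture would have to detect.

    def fib_recursive(n: int) -> int:
        # Exponential-time process with a deeply nested call structure.
        return n if n < 2 else fib_recursive(n - 1) + fib_recursive(n - 2)

    def fib_iterative(n: int) -> int:
        # Linear-time process keeping just two running totals.
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    # Behavioral evaluation: identical input-output behavior on every tested input.
    assert all(fib_recursive(k) == fib_iterative(k) for k in range(15))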

5. While behavioral equivalence to human cognitive systems is a prerequisite for saying that something has human mentality, it will not be a sufficient condition for AI researchers interested in cognitive modeling, and hence it is at the very least debatable whether AI has "adopted the Turing test as a means of testing its success" (Green). I agree with Green that the Turing test is insufficient and too restrictive as a test of "real mentality" or "intelligence" (note that Turing, 1950, called it the "imitation game" precisely to avoid the debate about the nature of "true intelligence"). For one, it is too species-specific for a general test of mentality, in that it seems to test only "human mentality" (e.g., see French 1990); it is also inherently insufficient to test mentality as such, since single conversations can at best disprove that a system has mentality, but never establish it -- this is similar to the problem of confirming scientific theories: we cannot prove them correct, yet the more positive instances we have, the more we are convinced that the theory might be on the right track.

6. It seems doubtful that one would know what questions to ask to debunk a computer "passing the Turing test" if one could look inside and inspect its mechanism, as Green claims. Presumably, such a mechanism would be so complex that at the implementation level not much could be derived at all about the virtual machine that is responsible for the intricate behaviors (it is common wisdom among programmers that "disassembling" large computer programs is practically impossible). Furthermore, looking inside does not seem to be a legitimate move in the imitation game in the first place: if any information could be gained about the functioning of the system by looking inside, then it should always be possible to find something particular about one individual (human or computer) that is not shared among the other contestants in the Turing test. If this particular fact is then turned into an appropriate question, it can be used to reveal the identity of the "inspected" individual (e.g., take a human subject with a particular kind of brain damage, which does not allow the subject to recall anything that happened before the age of 10 and then ask specific questions about early childhood).

7. The whole debate about whether or not artificial systems have "real" mentality is, in my view, merely academic as long as it is not clear whether and how "real mentality" could be defined other than by saying, through introspection: "I have it". To get anywhere close to an answer, a systematic study of different cognitive architectures and their functional capacities is required, and AI, as I see it, takes part in this endeavor by defining and implementing such architectures. Whether the strategy of extending architectures by adding behaviors/functions will succeed in getting us to human mentality is still an open question; neither extreme euphoria nor extreme pessimism is justified at this point, in my view.

8. People (such as Searle [1980] and Green) who think this project is doomed to fail often underestimate the intricacies involved in "simply adding behaviors" to an existing system: new behaviors cannot simply be "added" to a given system the way one would add a food item to one's shopping cart, because only a very limited class of extensions will result in a system that does something meaningful beyond the capacities of the original system (if it does anything meaningful at all). To obtain a working system, programmers need to know many details about the functional organization of the overall system, about how to integrate behaviors, how behaviors interact, etc. Furthermore, a set of behaviors imposes severe constraints on the kinds of architectures that can support all of them at the same time. It is this way of carefully studying existing architectures and their possible extensions that will allow us to gain better insights into the space of possible architectures, their subarchitectures, and their functional capacities. By viewing psychological concepts as essentially architecture-based, AI, rather than depending on them, can actually help to clarify and define them!
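
The point can be illustrated with a toy arbitration scheme (again a sketch of my own, in Python, with hypothetical behavior names; it is not meant as a serious agent architecture): a new behavior contributes something meaningful only if the architecture already specifies how it interacts with, and is arbitrated against, the behaviors that are already there.

    from typing import Callable, Dict, List, Tuple

    # A behavior maps a percept to a (priority, action) pair.
    Behavior = Callable[[Dict], Tuple[float, str]]

    def avoid_obstacle(percept: Dict) -> Tuple[float, str]:
        return (1.0, "turn_away") if percept.get("obstacle") else (0.0, "noop")

    def seek_food(percept: Dict) -> Tuple[float, str]:
        return (0.5, "approach_food") if percept.get("food") else (0.0, "noop")

    def arbitrate(behaviors: List[Behavior], percept: Dict) -> str:
        # "Adding" a behavior does something meaningful only because this
        # fixed-priority arbitration already settles how behaviors interact.
        priority, action = max(b(percept) for b in behaviors)
        return action

    print(arbitrate([avoid_obstacle, seek_food], {"obstacle": True, "food": True}))
    # -> "turn_away": the food-seeking behavior is suppressed, not merely appended.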

REFERENCES

Green, C.D. (2000) Is AI the Right Method for Cognitive Science? PSYCOLOQUY 11(061) ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/2000.volume.11/psyc.00.11.061.ai-cognitive-science.1.green http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?11.061

French, R.M. (1990) Subcognition and the Limits of the Turing Test. Mind 99(393): 53-65.

Searle, J. (1980) Minds, Brains, and Programs. Behavioral and Brain Sciences 3: 417-424. http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.searle2.html

Turing, A.M. (1950) Computing Machinery and Intelligence. Mind 59: 433-460. http://cogprints.soton.ac.uk/abs/comp/199807017

