Kim J. Vicente & Catherine M. Burns (2000) Overcoming the Conceptual Muddle: A Little Help from Systems Theory. Psycoloquy: 11(074) AI Cognitive Science (14)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).
Psycoloquy 11(074): Overcoming the Conceptual Muddle: A Little Help from Systems Theory

OVERCOMING THE CONCEPTUAL MUDDLE:
A LITTLE HELP FROM SYSTEMS THEORY
Commentary on Green on AI-Cognitive-Science

Kim J. Vicente & Catherine M. Burns
Department of Industrial Engineering
University of Toronto
Toronto, Ontario M5S 3G3
Canada
http://www.mie.utoronto.ca/labs/cel/overview/director.html

benfica@mie.utoronto.ca

Abstract

Many basic problems and disagreements in cognitive science are due to unresolved yet fundamental conceptual muddles. It is difficult to progress beyond this state unless we work towards the definition of a coherent set of basic terms that can be consistently used by all cognitive scientists. In this commentary, we provide a set of definitions of concepts that are regularly used in systems theory. We then use these concepts to illustrate some of the difficulties that Green points to, and to derive some claims regarding the plausibility of AI as the methodological cornerstone of cognitive science.

Keywords

artificial intelligence, behaviorism, cognitive science, computationalism, Fodor, functionalism, Searle, Turing Machine, Turing Test.

1. We agree wholeheartedly with Green (1993/2000) that many basic problems and disagreements in cognitive science are due to fundamental conceptual muddles that have yet to be resolved. It is very difficult to progress beyond this state unless we try to work towards the definition of a coherent set of basic terms that can be consistently used by cognitive scientists, regardless of their theoretical beliefs. As a step in this direction, in this commentary we will provide a set of definitions of concepts that are regularly used in systems theory. We will then use these concepts to illustrate some of the difficulties that Green points to, and to derive some claims regarding the plausibility of AI as the methodological cornerstone of cognitive science.

2. What follows is an informal set of definitions of basic terms that are fundamental to systems theory (see Golomb, 1968, and especially Miller, 1982, for more details); a brief illustrative sketch follows the list.

    A. Natural System: a system as it occurs in nature, as opposed to a
    description of such a system. Natural systems are high-dimensional
    since they possess an unbounded number of properties.

    B. Equivalence Class: a category or set defined by a specific set
    of criteria (e.g., relations or attributes) for membership.
    Elements belong to a common equivalence class if they are identical
    along the dimension(s) specified by these criteria (e.g., the class
    of objects that are red). Equivalence classes are used in models
    to abstract, and therefore represent, specific attributes of a
    natural system.

    C. Formalism: a mathematical language without semantics or context
    dependence which can be used to construct a model.

    D. Model (formal system): a description or representation of a
    natural system based on some formalism. A model is low-dimensional
    (i.e., it is an abstraction) since it only captures a small subset
    of the attributes which characterize the natural system (those
    attributes defined by the equivalence classes used in the model).
    Thus, it is important not to confuse the model with the natural
    system being modelled (i.e., "don't eat the menu!", cf. Golomb,
    1968).

    E. Product Model: a model which, given the input signal to the
    modelled system, generates the same output signal. This is
    typically known as a "black-box" or "input-output" representation.

    F. Process Model: a model which faithfully captures the
    intermediate signal transformations of the modelled system.
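
To make these definitions a little more concrete, the following sketch (written in Python purely for illustration; it is our hypothetical example, not part of systems theory or of Green's target article) shows a toy "natural system" with many properties, an equivalence class defined by a single membership criterion, and a model built by keeping only the attributes the modeller has chosen.

    natural_system = {            # stand-in for a high-dimensional natural system (def. A)
        "colour": "red",
        "mass_kg": 0.2,
        "location": "Toronto",
        "temperature_C": 21.0,
        # ... the real thing has an unbounded number of further properties
    }

    def in_class_red(thing):
        """Equivalence class 'red objects' (def. B): membership by one criterion."""
        return thing.get("colour") == "red"

    def build_model(system, attributes):
        """A model (def. D): a low-dimensional abstraction that keeps only the
        attributes picked out by the chosen equivalence classes."""
        return {a: system[a] for a in attributes if a in system}

    if __name__ == "__main__":
        model = build_model(natural_system, ["colour", "mass_kg"])
        print(in_class_red(natural_system))   # True
        print(model)                          # {'colour': 'red', 'mass_kg': 0.2}
        # The model omits location, temperature, and so on: don't eat the menu.

The point of the sketch is simply that the model is not the natural system: it represents only those attributes that its equivalence classes were designed to capture.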

3. These definitions help both to identify and clarify some of the conceptual muddles that characterize cognitive science. For instance, Green (para. 6) defines weak AI as a simulation of cognitive processes and strong AI as an instantiation of cognitive processes. But what is the difference between an "instantiation" and a "simulation"? Surely, no one would argue that an AI program is a brain or an example of a brain! If we take this for granted, then "instantiation" can only mean the same as "simulation"--both are models of cognitive processes. The primary difference seems to be that weak AI is concerned with developing product models (no psychological claims regarding process are made), whereas strong AI is concerned with developing process models. If we are clearer on this and other distinctions, then it becomes easier to evaluate how well AI can live up to the claims made on its behalf.

4. Another example of a conceptual muddle is the question: Does AI relate to psychology as Disneyland relates to physics? As it stands, the question is impossible to answer because it is under-specified. There is an unbounded number of possible relations between Disneyland and physics. Thus, one must be explicit about which relation is being referred to before one can decide whether the same relation applies between AI and psychology. To take an extreme example, if one were to pick alphabetical order as the criterion for comparison, then Disneyland would come before physics and AI would come before psychology. Therefore, one would answer the question in the affirmative. But of course this is not what is intended at all. What is the point then? Green states that Disneyland does not have any "real" rivers, animals, and so on. Thus, Disneyland must be a model of the real world in some sense. The next question is: what attributes are being captured by the equivalence class relations defining the model?

5. Green states that "within certain constraints of normal action...the Ideal Disneyland is indistinguishable from the real world" (para. 8). Two points need to be made here. First, "certain constraints of normal action" are not the same as the constraints that usually guide scientific investigation. That is, as scientists, we must try to gain access to the black box. Second, given that we have established that Disneyland is a model of the real world, it does not make sense to say that the Ideal Disneyland is indistinguishable from the real world. There are, by definition, many properties of the world that are not represented in the model (e.g., geographical location).

6. These two examples show that the concepts that are sometimes used by cognitive scientists are ill-defined and internally inconsistent. Until this problem is cleared up, there can be no agreement on questions like the Disneyland issue, simply because the questions being posed are not defined precisely enough to permit a defensible answer. The systems theory concepts defined earlier help to identify and begin to clarify such fuzziness. They can also be used to make some claims regarding the plausibility of AI as the method for cognitive science.

7. To begin with, a Turing machine is a formalism, not a model. An important question is how much of a constraint it provides cognitive scientists. As Green points out, this is an issue that needs to be addressed if one is to avoid making vacuous claims. One important answer is that a Turing machine can model any computable function. Furthermore, an infinite number of different programs can be written to model the same function (contra the implicit belief identified by Green, 1993/2000, para. 9). Clearly, this is a very weak constraint! The power of AI (and of Turing machines) lies in their flexibility: they can be used to model almost anything in a multitude of ways. This may be good news for computer scientists, but it is very bad news for cognitive scientists. In particular, it means that merely developing a program that mimics some behavior does not entitle one to conclude that the program has any psychological content whatsoever.
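
As a purely illustrative sketch of this point (ours, in Python; the particular function is arbitrary), here are three different programs that compute the same function n -> n!. As product models they are indistinguishable, so matching input-output behaviour by itself tells us nothing about which, if any, of the underlying processes is psychologically real.

    from functools import reduce

    def factorial_recursive(n):
        """Process: self-reference and a stack of deferred multiplications."""
        return 1 if n <= 1 else n * factorial_recursive(n - 1)

    def factorial_iterative(n):
        """Process: an explicit loop with a running product."""
        result = 1
        for k in range(2, n + 1):
            result *= k
        return result

    def factorial_fold(n):
        """Process: a fold (reduce) over the sequence 1..n."""
        return reduce(lambda a, b: a * b, range(1, n + 1), 1)

    if __name__ == "__main__":
        for n in range(6):
            outputs = {factorial_recursive(n), factorial_iterative(n), factorial_fold(n)}
            assert len(outputs) == 1          # identical input-output behaviour
        print("Same function, three different processes.")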

8. And what about the Turing test? It is clear that it is a test of product, not process. Thus, once again, no psychological claims can be made, because of the enormous number of degrees of freedom available. The important point here is that other formalisms or AI tricks are not going to provide the added constraint. What we need is evidence to determine whether the processes used by a program are similar to the processes used by people. One example would be to conduct perception experiments to determine what information people are actually picking up; skipping this step can seriously compromise any attempt at modelling cognitive processes (Vicente & Kirlik, 1992). A second class of constraint could be obtained by using process tracing methods (Woods, 1993) to identify the strategies used by people during problem solving. Both types of evidence would constrain psychologically plausible models--in other words, they would allow us to distinguish fancy programming from psychological theorizing. The flexibility of AI formalisms makes it very easy to pass off the former as the latter.
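
The difference between the two kinds of test can be illustrated with another small, hypothetical sketch (ours, not a method proposed by Green or by the authors cited above): a product test compares only outputs, in the spirit of the Turing test, whereas a process test also compares the intermediate steps recorded by each model.

    def model_a(x):
        """Doubles x in a single addition; the trace records one step."""
        trace = [("add", x, x)]
        return x + x, trace

    def model_b(x):
        """Doubles x by repeated incrementing; the trace records every step."""
        result, trace = x, []
        for _ in range(x):
            result += 1
            trace.append(("increment", result))
        return result, trace

    def product_test(m1, m2, inputs):
        """Passes whenever the two models' outputs agree (no process claim)."""
        return all(m1(x)[0] == m2(x)[0] for x in inputs)

    def process_test(m1, m2, inputs):
        """Passes only when the intermediate transformations also agree."""
        return all(m1(x)[1] == m2(x)[1] for x in inputs)

    if __name__ == "__main__":
        xs = [1, 2, 3, 4]
        print(product_test(model_a, model_b, xs))   # True  -- same input-output behaviour
        print(process_test(model_a, model_b, xs))   # False -- different internal processes

Empirical evidence of the kind described above (what information people actually pick up, which strategies their protocols reveal) is what would allow us to say which, if either, trace resembles the human one.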

9. In conclusion, AI may provide a rich source of models and techniques, but until these models are tested against psychological evidence and under realistic psychological constraints, they cannot claim to have any relevance for cognitive scientists. Therefore, AI alone cannot be the method for cognitive science.

REFERENCES

Golomb, S. W. (1968, January). Mathematical models: Uses and limitations. Astronautics & Aeronautics.

Green, C.D. (2000) Is AI the Right Method for Cognitive Science? PSYCOLOQUY 11(061) ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/2000.volume.11/psyc.00.11.061.ai-cognitive-science.1.green http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?11.061

Miller, R. A. (1982). Formal analytical structure for manned systems analysis. Columbus, OH: Department of Industrial and Systems Engineering, The Ohio State University.

Vicente, K. J. & Kirlik, A. (1992). On putting the cart before the horse: Taking perception seriously in unified theories of cognition. Behavioral and Brain Sciences, 15, 461-462.

Woods, D. D. (1993). Process tracing methods for the study of cognition outside of the experimental psychology laboratory. In G. A. Klein, J. Orasanu, R. Calderwood, and C. E. Zsambok (Eds.), Decision making in action: Models and methods (pp. 228-251). Norwood, NJ: Ablex.

