Arthur B. Markman (1998) In Defense of Representation as Mediation. Psycoloquy: 9(48) Representation Mediation (1)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).
Psycoloquy 9(48): In Defense of Representation as Mediation

IN DEFENSE OF REPRESENTATION AS MEDIATION
Target Article by Markman and Dietrich on Representation Mediation

Arthur B. Markman
Department of Psychology
University of Texas
Austin, TX 78712
http://www.psy.utexas.edu/psy/FACULTY/Markman/index.html

Eric Dietrich
PACCS Program in Philosophy
Binghamton University
Binghamton, NY
http://www.binghamton.edu/philosophy/home/faculty/index.htm

markman@psy.utexas.edu dietrich@binghamton.edu

Abstract

Some cognitive scientists have asserted that cognitive processing is not well modeled by classical notions of representation and process that have dominated psychology and artificial intelligence since the cognitive revolution. In response to this claim, the concept of a mediating state is developed. Mediating states are the class of information-carrying internal states used by cognitive systems, and as such are accepted even by those researchers who reject representations. The debate over representation, then, is actually one about what additional properties of mediating states are necessary for explaining cognitive processing. Five properties that can be added to mediating states are examined for their importance in cognitive models.

Keywords

compositionality, computation, connectionism, discrete states, dynamic systems, explanation, information, meaning, mediating states, representation, rules, semantic content, symbols

I. INTRODUCTION

1. Since the cognitive revolution, representations have been a critical explanatory tool in cognitive science. Virtually all theories about cognition are based on hypotheses that posit mental representations as carriers of information about the environment of the organism or agent. Recently, however, researchers have argued that the value of representations in cognitive science has been exaggerated (e.g., Brooks, 1991; Thelen & Smith, 1994; van Gelder & Port, 1995). Many of these researchers have argued that we should eliminate representations from cognitive models and focus instead on the relationship between the cognitive system and the environment or on the sub-representational dynamics of cognitive systems. These views take representation to be at best an emergent entity from more basic dynamics, and at worst a construct that has stunted the growth of cognitive science.

2. Reviewing the literature reveals that objections to representation made by different researchers are aimed at different properties of representations used in specific psychological theories or artificial intelligence (AI) programs. Thus, a defense of representation turns into a defense of particular representations used in various theories, and hence has to be done on a case-by-case basis.

3. The initial goal of this target article was to provide a defense of this type, but it quickly became clear that many of the potentially objectionable properties of representations were not necessary for something to be used as a representation in a cognitive model. In particular, a property that was objectionable as part of a representation in one context was not problematic in another. Indeed, properties that were problematic in some contexts were actually essential in others.

4. This insight suggested two central issues that a defense of representation must deal with. First, a defense of representation must explicate the core notion of representation that seems to be common to most approaches to cognitive processing. This part of the defense must provide the philosophical foundation for the use of representation in cognitive models. Second, a defense of representation must examine how the concept is used in cognitive models in practice. This part of the defense must examine the pragmatic issues inherent in the use of representation. These joint aims are the focus of this target article.

5. We begin by explicating the notion of a "mediating state." Mediating states are internal states of a cognitive system that carry information about the environment external to the system and are used by the system in cognitive processing. Mediating states form a common ground in the study of cognition in that all cognitive theories posit the existence of mediating states. Many theorists who reject the notion of representation embrace mediating states but suggest that the mediating states they use are not actual representations.

6. After introducing the concept, we will suggest five additional properties of mediating states that have been considered to be important parts of representations in some systems: (1) being enduring, (2) being discrete (and therefore composable), (3) having compositional structure, (4) being abstract, and (5) being rule-governed. These properties are central to the pragmatic goal of constructing explanations of cognitive processing. We will consider the consequences of eliminating these properties from cognitive models.

II. THE CONCEPT OF A MEDIATING STATE

7. In this section, we define the core construct of a mediating state. This construct provides a structure for considering additional properties of representations. At the outset, we must be clear that we are not providing a theory of representational CONTENT or how representations come to have content. Instead, we are defending the use of mental representations in psychological theories and computer models. There are a number of difficult and important issues that must be addressed in order to understand how representations come to have content, such as the symbol grounding problem (Harnad, 1990) and the encoding problem (Bickhard and Terveen, 1995), but they are beyond the scope of this paper.

8. We define a "system" as anything that uses information in an attempt to satisfy its goals. Systems, on this definition, have feedback loops (at least negative ones) because they need to determine whether or not the goals are satisfied. The goals of the system might only be implicit, i.e., not explicitly represented anywhere in the system. A thermostat controlling a heater is a good example of our definition of a system. Note that the goals of the thermostat system are not in any sense known to the thermostat. Note further that systems are capable of making errors but not necessarily of correcting them. For example, one can hold a match under a thermostat and get it to behave as if the room temperature were quite high, though the temperature in the room might be below freezing.

9. All systems, on our definition, have and use information. We use Dretske's (1981) concept of information modified for use in psychological explanation. Dretske's concept is, in turn, a modified version of Shannon's (1948) quantitative notion of information. For Shannon, information is measured as the average amount of data arriving at a receiver, generated by a source, and transmitted across a channel from the source to the receiver. The problem with this idea is that it provides no way of considering the informational content of a specific signal. Instead, it considers only the amount of information, averaged over all possible signals. Dretske altered Shannon's definition to permit the informational content of a specific transmitted signal from source to receiver to be considered. He defined information in terms of conditional probabilities: an event, e, at a receiver carries the information that something, s, has property P, P(s), if and only if the conditional probability of P(s) given e is 1. (The same conditional probability can also be used to talk about the signal, r, causing e as the carrier of the information that P(s).) We consider the receiver to be the system being studied and the information channel (the channel over which a signal is transmitted) to be the lawful connection between the energy type(s) the system is sensitive to, the thing giving off (or reflecting) the energy, and the system's sensory apparatus. Because the system has goals, it can and does affect its environment. These effects are further sources for information transmitted to the receiver (e.g., that the system achieved its goal).
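
The conditional-probability criterion can be illustrated with a toy example. In the sketch below (our own construction, written in Python; the joint distribution and its labels are arbitrary assumptions, not part of Dretske's or Shannon's formal apparatus), an event at the receiver carries the information that s is P just in case the conditional probability of P(s) given the event is 1.

    # Toy illustration of Dretske's criterion: an event e at the
    # receiver carries the information that s is P iff P(P(s) | e) = 1.
    # The joint distribution below is an arbitrary assumption.

    joint = {
        ('hot',  'strip_bent'):     0.45,
        ('cold', 'strip_bent'):     0.05,   # a noisy channel for 'bent'
        ('hot',  'strip_straight'): 0.00,
        ('cold', 'strip_straight'): 0.50,
    }

    def carries_information(event, source_state):
        """True iff P(source_state | event) equals 1."""
        p_event = sum(p for (s, e), p in joint.items() if e == event)
        p_both  = sum(p for (s, e), p in joint.items()
                      if e == event and s == source_state)
        return p_event > 0 and p_both / p_event == 1.0

    # 'strip_straight' never occurs unless the room is cold, so it
    # carries the information that the room is cold:
    print(carries_information('strip_straight', 'cold'))  # True
    # 'strip_bent' can also occur when the room is cold
    # (P(hot | bent) = 0.9), so it does not carry the information
    # that the room is hot:
    print(carries_information('strip_bent', 'hot'))       # False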

10. The trouble with Dretske's view of information is that it does not distinguish the informational contents of the mental states of a cognitive system from those of other states such as transducer states (e.g., the output of a retina) or index states (e.g., a sunburn). Dretske's notion of information is too broad to be used alone by psychology, but we want to preserve the core idea of an event or signal carrying the information that s is P. Thus, we assume that this concept of information should be constrained by considering only that information relevant to psychological explanations, however construed (i.e., psychological explanations could be computational, as many cognitive scientists assume, or of some other type, such as dynamic systems). Hence, though information is in some sense prior to explanation metaphysically, we consider explanation to be prior to information epistemologically. So, it is reasonable to let the goals of the psychologist act as a filter for which information in the system is to be considered. (For an example of this worked out in detail for computationalism, see Dietrich, 1990.)

11. On this view, information is always ABOUT something, typically something removed in space and time from the system. For example, in the thermostat-room-heater system, the room temperature covaries with the curvature of the bi-metallic strip. Thus, there is information in the system, which is used to effect changes in the environment external to it. We will call states of information of this type "mediating states."

12. Mediating states are states of a system that carry information (not all system states are information states; some are goal states). We define a mediating state in terms of the following four individually necessary and jointly sufficient conditions; a schematic sketch in code follows the list.

    (i) There is some entity with internal states which include goal
    states; we assume that these states undergo changes.

    (ii) There is an environment external to the system which also
    changes states.

    (iii) There is a set of informational relations between states in
    the environment and the states internal to the system. The
    information must flow both ways, from the environment into the
    system, and from the system out to the environment. (In the
    simplest case, this will be a feedback loop, but more complicated
    loops such as plan-act-detect loops are also possible. Note also
    that in the typical case, these informational relations will be
    realized as causal relations, but what is important is the
    information carried by these causal relations, not the causal
    relations themselves.)

    (iv) The system must have internal processes that act on and are
    influenced by the internal states and their changes, among other
    things.  These processes allow the system to satisfy
    system-dependent goals (though, these goals need not be known
    explicitly by the system).

13. This definition of a mediating state is quite general. It is intended to capture something that all cognitive scientists can agree to, namely, that there is INTERNAL information used by organisms or systems that mediates between environmental information coming in and behavior going out (this is the minimal condition that distinguishes cognitive science from behaviorism). Interestingly, most AI systems to date do not use actual mediating states, because the internal states do not actually bear any correspondence to entities outside the system. One prominent exception is Brooks-style situated robots (Brooks 1991), which have rudimentary mediating states that link them to their environment. Nonetheless, the absence of true mediating states has not prevented AI systems from being useful both as tools and as explanatory models. Finally, it is important to emphasize again that the definition of mediating states is not intended to function as a theory of representational content. Such a theory is very much needed in cognitive science, but there is as yet no consensus on the details of such a theory.

14. The critical aspect of mediating states that this definition provides is that they are carriers of content internal to some system. In other words, mediating states are the general class of content-bearing states. Some researchers might want to restrict the label "representation" to a particular subset of mediating states. For example, one may wish to construe representations as only those mediating states with a particular type of content (e.g., propositional), or as states that get their content in a particular way. We will argue at the end of the paper that these two ways of restricting mediating states are not critical for most cognitive explanations but rather have a role in a theory of content.

15. Because cognitive scientists (even anti-representationalists) agree that cognitive systems have internal states of some sort that carry content (i.e., they agree that there are mediating states), our definition provides a starting point from which to begin our discussion of representation. From the point of view of mediating states, there is more agreement than disagreement among representationalists and anti-representationalists. We will argue that disagreements over whether there are representations are more usefully understood as different researchers focusing on different aspects of cognition and using different kinds of mediating states to explain what they are observing. These different kinds of mediating states are obtained by adding one or more of the five properties to the core notion of a mediating state. From this perspective, one can see that there is an important but underappreciated diversity among research strategies and representational methodologies; it is very much in the interest of cognitive science to encourage this diversity rather than fighting over specific properties of mediating states.

16. On our view, mediating states capture the core of what is important about representations from an explanatory point of view. The five key properties of mediating states introduced above, either singly or jointly, are essential to explaining many kinds of cognitive processes. All these explanations can proceed using mediating states rather than appealing to representation. Thus, the properties of "genuine representations" (assuming there are such things) that distinguish them from mediating states in general are not relevant to cognitive explanations. Instead, we will argue that the five properties of mediating states are the ones that most anti-representationalists and representationalists care about. Some researchers might want to call mediating states with some or all of the five properties added on representations. If so, that is fine. But no theoretical point turns on this. The debate in cognitive science should not be over whether representations are necessary, but rather over which particular properties of mediating states are necessary to explain particular cognitive processes. Thus, rather than being anti-representationalist or seeking the one true representational formalism that will serve as the basis of all cognitive processes, cognitive science should strive for a diversity of research methodologies that will bring to light explanatorily useful properties of mediating states.

III. FIVE PROPERTIES THAT CAN BE ADDED TO MEDIATING STATES

17. This section examines five properties that many proposals about representation add to mediating states. As discussed above, these properties are: (1) being enduring, (2) being discrete, (3) having compositional structure, (4) being abstract, and (5) being rule-governed. They are the ones that anti-representationalists have argued are not necessary for explanations of cognitive processing.

III.1. ARE THERE ENDURING MEDIATING STATES?

18. Many attacks on traditional representational assumptions have focused on the fluidity of cognitive processing (Thelen & Smith, 1994; van Gelder & Port, 1995). They contrast this fluidity with the rigidity of classical cognitive models. In their place, models are suggested that involve dynamic moment-by-moment changes in the internal states of the system.

19. One prominent example of such a system is Watt's steam engine governor (Thelen & Smith, 1994; van Gelder & Port, 1995). The steam engine governor is a remarkable device designed to keep steam engines from exploding. The governor is attached to a steam pipe. As the pressure in the pipe rises, the governor spins faster, causing balls mounted on arms on either side of the governor to rise. The rising balls cause a valve to close because of mechanical connections between the arms and the valve. The restricted valve decreases the amount of steam flowing, and hence the pressure, which causes the governor to spin more slowly. The slower spin causes the balls on the arms to drop, opening the valve. In this way, a relatively constant pressure inside the engine can be maintained.
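
The governor's feedback loop can be caricatured as a discrete-time simulation. The update equations below are our own crude idealization, not Watt's actual dynamics; the point is only that the controlling state (the spin speed) is a function of the current pressure and stores no history.

    # Crude discrete-time idealization of the governor feedback loop.
    # All coefficients are arbitrary; the spin speed tracks the CURRENT
    # pressure and keeps no record of past states.

    pressure = 14.0                          # arbitrary starting pressure
    for _ in range(50):
        spin = 0.5 * pressure                # spin follows current pressure
        valve = max(0.0, 1.0 - 0.1 * spin)   # faster spin closes the valve
        pressure += 2.0 * valve - 1.0        # steam in minus steam consumed
    print(round(pressure, 2))                # converges toward 10.0, the equilibrium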

20. This example has been used to demonstrate that interesting behavior can be carried out without representation. The power of the example rests on the fact that the mediating states of the governor are not enduring. Instead, the speed of the governor at a given moment is a function of the pressure in the engine at that moment. When the pressure changes, the speed of the governor changes. There is no record of past states of the system. It is suggested that, just as the steam engine governor does not need enduring mediating states, cognitive systems do not need them either.

21. Transient mediating states are not limited to simple mechanical objects like governors or thermostats. The patterns of activation on units in distributed connectionist models are also transient states. When a new pattern of activity arises on a set of units (perhaps on the output units as the response to a new input), the old pattern is gone. Similarly, the current state of a dynamic system is transient, and changes to some new state as the system evolves.

22. Although both connectionist and dynamic systems models clearly involve transient changes in the activation of their mediating states, they also require states that endure over longer periods of time. Connectionist models use the weights on the connections between units as a trace of past activity. Of course, these mediating states are highly distributed. No particular weight (or unit) can be identified as a symbol in a representation, although the behavior of a connectionist network can often be examined by looking at properties of the connection matrices, such as their principal component structure. These connection weights are enduring mediating states; without such states, connectionist nets could not learn. In general, dynamic systems must have some enduring energy landscape that determines how a new state can be derived from an existing one. This landscape determines aspects of the behavior of the system, such as the location of attractor states.
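
The distinction is easy to exhibit in even a minimal connectionist sketch. In the toy Hebbian learner below (our own illustration, not a model from the literature), the activation vector is overwritten on every trial, while the weight matrix accumulates an enduring trace of past activity.

    # Toy Hebbian learner: activations are transient mediating states;
    # connection weights are enduring ones. Purely illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    weights = np.zeros((4, 4))       # enduring: persists across trials

    for _ in range(100):
        # Transient: each new pattern replaces the previous one, and no
        # record of any individual past pattern is kept.
        activation = rng.choice([0.0, 1.0], size=4)
        weights += 0.01 * np.outer(activation, activation)  # Hebbian update

    # The weights now reflect which units tended to be co-active in the
    # past, even though no single past activation pattern is stored.
    print(weights.round(2))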

23. There is an appealing insight in the view that cognitive systems do not have enduring mediating states, namely, that not all behaviors involving mediating states require those states to be enduring states of the organism. For example, the classic studies of the gill withdrawal reflex in the sea slug Aplysia have demonstrated that this reflex can be habituated with repeated stimulation. Kandel and his colleagues (Klein, Shapiro, & Kandel, 1980) have found that with repeated stimulation of the gill, the pre-synaptic motor neuron in the circuit involved in the habituation releases less neurotransmitter into the synaptic cleft than it did initially. This decrease in transmitter release appears to be mediated by a blockage of calcium channels in the pre-synaptic neuron. When stimulation of the gill is terminated, the calcium channels become unblocked and the gill withdrawal reflex returns to its original strength. In this system, the amount of transmitter released into the cleft is a mediating state that controls the desired strength of the gill withdrawal reflex, which is translated into an actual strength of the reflex by the post-synaptic motor neuron. This state is not enduring, however. With repeated gill stimulation, more calcium channels become blocked, or, conversely, as the habituation stimulus is extinguished, more channels unblock. In either case, the reflex only has a current level of activity. It does not store past states.

24. This discussion demonstrates that some mediating states are transient. Nonetheless, not all mediating states can be transient. Rather, systems that learn and make use of prior behavior have some enduring states that allow the system to react to new situations on the basis of past experience. A steam engine governor or a sea slug gill may operate on the basis of transient states, but enduring mediating states will be required in models of more complex cognitive behaviors.

III.2. ARE THERE DISCRETE MEDIATING STATES?

25. One common assumption in models of representation is that the elements in a representation are discrete and composable. (Discrete states will be referred to as "entities" to emphasize their discreteness. In the literature, such entities are frequently called "symbols," but this term is not used here because it is question-begging.) Discrete entities are elements in many proposals about cognitive representations, ranging from feature-list representations (Chomsky & Halle, 1968; Tversky, 1977) to semantic networks (Anderson, 1983b; Collins & Loftus, 1975) to structured representations and schemas (Norman & Rumelhart, 1975; Schank & Abelson, 1977). Despite the variety of proposals that cognition involves discrete entities, there have been many arguments that such entities fail to capture key aspects of cognitive processing.

26. It might seem that being a discrete composable entity follows directly from being an enduring representational entity. However, not all enduring mediating states need to make finite, localizable, and precise contributions to larger states. For example, attractor states in dynamic systems and in iterative connectionist models are enduring ones, but they are not discrete entities (or symbols). Attractor states are not clearly separable from each other by distinct boundaries; hence their semantic interpretations are not precise.

27. The idea that there are no discrete entities in cognitive systems reflects, among other things, the important insight that new cognitive states are never (or almost never) exact duplicates of past ones. They may bear some likeness to past states, but they are not identical. In a distributed connectionist model this insight corresponds to the idea that new states are activation vectors that are similar to (i.e., have a high dot-product with) activation vectors that have appeared in past states. In a dynamic systems model, a cognitive system whose behavior is characterized as a point in a state space may often occupy points in a particular region of state space without actually occupying the same point at different times.

28. Smolensky (1988; 1991) made this point explicitly in his defense of connectionist models. His hypothesis about sub-symbolic representations captures the effects of context on cognitive processing. For example, a discrete symbol for cup captures very little information about cups. Rather, the information about cups that is relevant to a cognitive system changes with context. For thinking about a cup full of coffee, insulating properties of cups are important, and examples of cups that have handles may be highly accessible. For thinking about a ceremonial cup, its materials and design may be more important than its insulating properties. Smolensky argues that the high degree of context sensitivity in cognitive processing militates against discrete entities making up cognitive states.

29. Clark (1993) raises a related question about where discrete entities come from. He suggests that connectionist models might actually be better suited to cognition than classical symbolic processes because their sensitivity to statistical regularities in the input may help them develop robust internal states that still have most of the desirable properties discrete entities are supposed to provide.

30. One suggestion for how context sensitive representations might be acquired was discussed by Landauer and Dumais (1997). They describe a model of the lexicon (Latent Semantic Analysis, LSA) that learns words by finding higher order correlations among the occurrences of words across passages and using those correlations to form a high dimensional semantic space. One interesting property of this model is that its performance on vocabulary tests improves both for words seen in the passages presented to it on a given training epoch and for words that were not seen during that training epoch. This improvement on words not seen is due to the general differentiation of the semantic space that occurs as new passages are presented. Despite its excellent performance on vocabulary tests (when trained on encyclopedia articles, LSA performs the TOEFL [Test of English as a Foreign Language] synonyms test at about the level of a foreign speaker of English), it contains no discrete entities corresponding to elements of word meaning.
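
The flavor of this construction can be conveyed in a few lines. The sketch below is a drastically miniaturized LSA-like model (the corpus and the two-dimensional space are our own toy assumptions; the real model uses large corpora and a few hundred dimensions): it builds a word-by-passage count matrix, reduces it with a singular value decomposition, and measures relatedness in the reduced space.

    # Miniature LSA-style semantic space (illustrative only): count
    # matrix -> SVD -> low-dimensional space -> cosine relatedness.

    import numpy as np

    passages = ["the doctor treated the patient",
                "the nurse treated the patient",
                "the pilot flew the plane"]
    vocab = sorted({w for p in passages for w in p.split()})
    counts = np.array([[p.split().count(w) for p in passages]
                       for w in vocab], dtype=float)

    u, s, vt = np.linalg.svd(counts, full_matrices=False)
    space = u[:, :2] * s[:2]        # two-dimensional "semantic space"

    def relatedness(w1, w2):
        a, b = space[vocab.index(w1)], space[vocab.index(w2)]
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # 'doctor' and 'nurse' never co-occur in any passage, yet they come
    # out as closely related because they occur in similar contexts:
    print(relatedness('doctor', 'nurse'))   # close to 1
    print(relatedness('doctor', 'plane'))   # close to 0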

31. An additional line of research that poses a problem for systems with discrete entities focuses on the metacognitive feelings engendered by cognitive processing (Metcalfe & Shimamura, 1994; Reder & Ritter, 1992). For example, we often have a "feeling of knowing." When we are asked a hard question, we might not be able to access the answer to it, but we may be quite accurate at saying whether or not we would recognize the answer if we saw it. This feeling seems to be based on the overall familiarity of the retrieval cue (Metcalfe, 1993; Reder & Ritter, 1992), as well as on partial information retrieved from memory (Koriat, 1994). Neither of these processes seems to involve access to discrete properties of the items being processed.

32. Despite the evidence for continuous mediating states, there are some good reasons why cognizers should also have mediating states with discrete parts. There is evidence that when people make comparisons among concepts, their commonalities and differences become available (Gentner & Markman, 1997; Markman & Gentner, 1993; Tversky, 1977). For example, when comparing a car and a motorcycle, people find it easy to list commonalities (e.g., both have wheels; both have engines) as well as differences (e.g., cars have four wheels, motorcycles have two wheels; cars have bigger engines than motorcycles). In contrast, models that do not have discrete entities will have difficulty accessing the commonalities and differences between a pair. For example, in a distributed connectionist model, the active mediating state for an item consists of a pattern of activity across a set of units. Because this pattern of activity has no discrete parts, the only way the vectors can be compared is through a holistic strategy such as finding the amount of one vector that projects on the other through the dot product operation. A scalar quantity like the dot product loses all information about what aspects of one vector are similar to another, yielding only a degree of similarity. Only when there are discrete entities can there be access to the content of the commonalities and the differences.
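
The contrast can be made concrete with a toy encoding (the feature lists below are our own illustrative assumptions, not a published representation). Discrete feature sets expose the content of a comparison; a dot product over activation vectors yields only a magnitude.

    # Discrete features expose WHAT matched; a dot product over vectors
    # yields only HOW MUCH matched. Feature lists are toy assumptions.

    car        = {'wheels', 'engine', 'four_wheels', 'big_engine'}
    motorcycle = {'wheels', 'engine', 'two_wheels', 'small_engine'}

    print(car & motorcycle)    # commonalities: wheels, engine
    print(car - motorcycle)    # differences: four_wheels, big_engine

    # The same items as activation vectors (one unit per feature):
    units = sorted(car | motorcycle)
    v_car  = [1.0 if f in car        else 0.0 for f in units]
    v_moto = [1.0 if f in motorcycle else 0.0 for f in units]
    print(sum(a * b for a, b in zip(v_car, v_moto)))
    # 2.0 -- a degree of similarity with no record of which aspects
    # matched and which differed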

33. A similar problem arises for Landauer and Dumais's high dimensional lexical system, LSA. As discussed above, it is able to do the synonyms test from the TOEFL by finding words near the target word in semantic space (in this case, by having a high dot product). Its success on this test is offered as evidence of its adequacy as a model of human lexical processing that does not require discrete entities. However, this system would have difficulty with an antonyms test. Antonyms are also words that are highly related to each other, but differ along a salient dimension (e.g., "up" and "down" differ in direction, and "up" and "down" are more similar to each other than either is to "giraffe"). Finding such a salient dimension would require analyzing the parts of the relevant lexical mediating state, and these parts are simply not available in a purely high-dimensional semantic correlation space. Yet they would be available to a system with discrete entities.

34. Another reason discrete entities seem important for cognitive processing comes from studies demonstrating that people can (depending on the circumstance) have a preference for or against exact matches along a dimension. In the study of similarity, Tversky and Gati (1982) found that people tend to give high weight to identity matches (see also Smith, 1989). In opposition to mental space models of mental representation, Tversky and Gati found that pairs of stimuli that could each be described by values on two dimensions were considered more similar when one of the dimensions for each stimulus was an exact match than when both dimensions had similar but not identical values. Interestingly, the opposite result has been found in studies of choice (Kaplan & Medin, 1997; Simonson, 1989). When faced with a choice, people often select an option that is a compromise between extreme values. For example, an ideal diet meal might be one that tastes good and has very few calories. People on a diet given a choice among (1) a meal that tastes good and has many calories, (2) a meal that tastes fair and has a moderate number of calories, and (3) a meal that tastes bad and has very few calories, are likely to select the middle option, (2), because it forms a compromise between the extremes. In this case, the exact match to an ideal is foregone in favor of partially satisfying multiple active goals. In these examples, the objects have a part identity rather than an overall identity. It is not clear how a system without discrete entities would find pairs with some identical aspects to be so compelling.

35. It is an important insight that cognitive processing is context sensitive. Smolensky's sub-symbolic hypothesis suggests that some contextual factors require the ability to make fine distinctions between concepts that are active at different times. The connectionist vectors he proposes are spatial states, so they are drawn from a continuous domain. However, context sensitivity can also be modeled using mediating states that are discrete, provided they have a small grain-size. On such an account, concepts must be represented both at a general level of analysis (e.g., "cup" or "television") and at a fine-grained one (e.g., "insulating" or "decorative"), including perceptual elements that cannot be given good verbal labels. Furthermore, the idea that some cognitive processes give rise to diffuse feelings rather than to specific access to properties of mediating states or structures places constraints on the processes that operate over such structures, but does not require that the structures be non-symbolic and non-discrete.

36. To summarize, not all cognitive processes require mediating states with discrete elements. Dynamic systems and connectionist models that use spatial mediating states are often good models of cognitive behavior. These processes may often be sensitive to context. Nonetheless, the influence of context can also be modeled with discrete mediating states that have a small grain-size. Other processes, such as finding antonyms or making comparisons, seem to require at least some mediating states that are discrete.

III.3. ARE THERE MEDIATING STATES WITH COMPOSITIONAL STRUCTURE?

37. An important observation about cognitive processing is that concepts combine. This ability to form more complex concepts from primitive units is particularly evident in language, where actions are described by the juxtaposition of morphological units that represent objects (typically nouns) with other units that represent relations between those objects (typically verbs). Because we combine concepts freely and easily in this manner, it is often assumed that symbolic representations have a compositional (or "role argument") structure that facilitates combination (e.g., Fodor & McLaughlin, 1990; Fodor & Pylyshyn, 1988).

38. One central problem with mediating states that have a role-argument structure is that they require processes that are sensitive to the bindings between predicates and their arguments. Structure-sensitive processes are often much more complex than processes that operate on non-compositional structures (or states). For example, when a mediating state is spatial, processing involves measuring distance in space (like the dot product in connectionist models). When structures are independent symbols (or sub-symbolic features), sets of features can be compared using elementary set operations (as in Tversky's 1977 contrast model). However, when structures have bindings, a compositional procedure that is sensitive to those bindings must be created. Often, the processes proposed by cognitive scientists have been quite complex.

39. Consider the act of comparing two structures, for example. One popular model of comparison, Gentner's (1983, 1989) structure-mapping theory, suggests that comparisons seek structurally consistent matches, meaning that the match must obey both parallel connectivity and one-to-one mapping. Parallel connectivity requires that, for each pair of matching predicates, the arguments of those predicates also match. One-to-one mapping requires that each element in one structure match at most one element in the other. Thus, the comparison process takes into account the bindings between predicates and their arguments. A number of computational procedures for determining analogical matches have been developed (Falkenhainer, Forbus, & Gentner, 1989; Holyoak & Thagard, 1989; Hummel & Holyoak, 1997; Keane, Ledgeway, & Duff, 1994).
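
To give a feel for what structure sensitivity demands, the sketch below checks a candidate correspondence between two role-argument structures for parallel connectivity and one-to-one mapping. It is a bare-bones illustration of the two constraints, not an implementation of SME or any published matcher; the propositions and mappings are toy assumptions.

    # Bare-bones structural-consistency check: one-to-one mapping plus
    # parallel connectivity. Illustrative only; not an SME implementation.

    base   = [('revolves_around', 'planet', 'sun')]       # (predicate, args...)
    target = [('revolves_around', 'electron', 'nucleus')]

    def structurally_consistent(mapping, base, target):
        # One-to-one: no element maps to, or receives, two counterparts.
        if len(set(mapping.values())) != len(mapping):
            return False
        # Parallel connectivity: when predicates match, their arguments
        # must be placed in correspondence as well.
        for (pred_b, *args_b), (pred_t, *args_t) in zip(base, target):
            if pred_b != pred_t:
                return False
            if [mapping.get(a) for a in args_b] != args_t:
                return False
        return True

    good = {'planet': 'electron', 'sun': 'nucleus'}
    bad  = {'planet': 'nucleus',  'sun': 'nucleus'}  # two-to-one: inconsistent
    print(structurally_consistent(good, base, target))  # True
    print(structurally_consistent(bad,  base, target))  # False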

40. While it may be appropriate to assume that some cognitive processes have such complexity, it has been suggested that structure-sensitive processes are inappropriate as models of cognitive development. Indeed, a central problem that Thelen and Smith (1994) raise with the representational view of mind is that it posits representations and processes that seem far more complex than make sense on a developmental account. As one way to address this point, they discuss explanations for Baillargeon's (1987) classic studies demonstrating that infants have object permanence.

41. In the basic task, infants are habituated to an event in which a screen is lowered, and then a car on a track rolls down a ramp to behind the screen and re-emerges on the other side. This task is presented repeatedly, until the infant's looking time to this event subsides. Then, both possible and impossible test events are presented. In the possible event, a block is shown behind the track, the screen lowers, and the car again rolls down the ramp to behind the screen and emerges on the other side. In the impossible event, a block is shown ON the track, the screen lowers, and the car rolls down the ramp to behind the screen and emerges on the other side. Infants show greater looking time to the impossible event than to the possible one, which has been interpreted as a recognition that the block continues to exist behind the screen and should have stopped the progress of the car.

42. In an explanation of this event involving a compositional symbol system, infants would store specific relationships such as that the block was on the track or the block was beside the track, as well as relations such as that the car was on the track. It is critical to this explanation that the child can make a distinction between the consequences of the block being ON the track and the block being BEHIND the track. The process underlying this behavior might be specific to cars and tracks (or perceptual objects of particular types); or it might be general to moving objects and obstructions. Thelen and Smith suggest that this explanation grants too much knowledge to an infant. In particular, they reason that if infants could form a symbolic representation of the scene, then it is not clear why they should require a sequence of habituation trials in order to form their representation. Moreover, if infants have such elaborate knowledge of objects, it is not clear why they should act as if hidden objects did not exist in traditional Piagetian object permanence tasks. Thus, Thelen and Smith suggest that a symbolic, representational account provides a gross description of infants' behavior, but fails to explain the details of the observed behavior.

43. In place of a symbolic model, Thelen and Smith propose a dynamic systems account. They suggest that the infant reacts to regularities detected by the visual system. The infant visual system is assumed to have systems that specify what objects exist in the world and where those objects are located. These outputs form a state space. The impact of habituation is to form an expected trajectory through the state space. Then, during the test events, Thelen and Smith assume, the child dishabituates to the impossible event because its trajectory starts out similar to that of the habituation event but then diverges from it at some point. In contrast, the trajectory of the possible event does not diverge from that of the habituation event, so no dishabituation is observed.

44. This dynamic systems account is intriguing, but we suggest that it cannot explain the infants' behavior without positing a complex compositional structure -- a mediating state with a role-argument structural description. In the highly impoverished form of the "what" and "where" systems in the example, it is not clear what information is supposed to be captured in the visual array. However, even if a complex array of values were sufficient to model the output of these systems, there is no account of why the trajectory divergence caused by having a block ON the track is more surprising than the trajectory divergence caused by having the block BEHIND the track. That is, Thelen and Smith provide no account of how an undifferentiated notion of trajectories in a state space distinguishes between trajectory differences that matter and those that do not. We suggest that infants' behavior in this case must reflect a recognition of the spatial relationships between objects, and that augmenting the dynamical systems view to account for these data will ultimately require the addition of a capacity for storing discrete compositional spatial relations. That is, mediating states with a role-argument structure will be needed.

45. Brief examination of research in visual object recognition suggests that visual mediating states may be profitably characterized as having some components that encode spatial relations between parts. Kosslyn (1994) marshals behavioral, computational and neuropsychological evidence in favor of the hypothesis that there are two different modes that the visual system can use to describe relationships between elements in images. The right hemisphere system describes the visual world in terms of metric aspects, and the left hemisphere system uses qualitative relations between elements to describe the world. Other behavioral and computational evidence that visual object recognition requires attention to relations between parts in images comes from Biederman (1987; Hummel & Biederman, 1992), and Palmer (1977). For example, Biederman (1987) suggests that mediating states denoting objects consist of primitive shapes connected by spatial relations (see also Marr, 1982). As evidence, he demonstrates that the ability to recognize objects in line drawings is disrupted more by eliminating information at the junctions of line segments (which carries information about relations between parts) than by eliminating an equivalent amount of line information between the joints. This work further suggests that the visual array required by Thelen and Smith's explanation of the object permanence studies is likely to involve some relational elements. This interpretation is reinforced by the observation that spatial prepositions refer to spatial relations that are abstracted away from many specific details of objects (e.g., Herskovits, 1986; Landau & Jackendoff, 1993; Regier, 1996).

46. Finally, as discussed at the beginning of this section, compositional structure seems necessary for models of linguistic competence. Many linguists and philosophers have pointed out that people effortlessly distinguish between sentences like "The Giants beat the Jets." and "The Jets beat the Giants." Even connectionist models of phenomena like this make use of structures (e.g., Chalmers, 1990; Elman, 1990; Pollack, 1990). We are also able to keep track of others' beliefs when they are explicitly stated. Thus, I may believe that the Giants beat the Jets last week, but that you believe the opposite. A "propositional attitude" like belief requires not only that I be able to encode the elements of the original proposition (that the Giants beat the Jets), but also that I be able to encode that you believe the opposite proposition; that is, I must be able to represent the meta-proposition. This sort of processing admittedly requires effort (see Keysar, Ginzel, & Bazerman, 1995, for limits on this ability) and does not develop immediately (Perner, 1991; Wellman, 1990), but it eventually becomes a significant part of human linguistic competence. It seems unlikely that these abilities could be modeled without mediating states that have a role-argument structure.

47. In sum, one insight underlying the proposal that representations do not have compositional structures is that such representations require significant effort to construct, and also significant effort to process. This complexity seems to go beyond what is required to carry out many cognitive tasks. It would be an overgeneralization, however, to conclude that compositional structures are not needed at all. Tasks as basic as those that demonstrate object permanence in infants and processes like object recognition clearly involve at least rudimentary relations between objects in a domain. A model that has no capacity for role-argument binding cannot explain the complexity of such higher-level cognitive and linguistic processing.

III.4. ARE THERE ABSTRACT MEDIATING STATES?

48. It has been assumed since Aristotle that a key factor separating the cognitive capabilities of humans from those of other species is a capacity for abstract thought. At one level, it is trivially true that mediating states are abstract. The world itself does not enter into our brains and affect behavior. Even sense data are the result of neural transformations of physical stimuli that reach our sense organs. Hence, the question being raised is more accurately cast as a search for the level of abstraction that characterizes mediating states. The classical assumption has been that the information we store is extremely abstract, and hence that it applies across domains. Indeed, when a logical statement like P -> Q is written, it is assumed that any thinkable thought can play the role of P or Q. It is this assumption that is being called into question by the anti-representationalists.

49. One source of the attack on highly abstract stored information comes from demonstrations that people's performance on logical reasoning tasks is often quite poor. For example, in the classic Wason selection task (Wason & Johnson-Laird, 1972), people are told to assume that they are looking at a set of four cards that all have a number on one side and a letter on the other, and that they must select the smallest set of cards they would have to turn over in order to test the rule "If there is a vowel on one side of the card, then there is an odd number on the other side." The four cards show an A, 4, 7 and J, respectively. In this task, people appear sensitive to the logical schema called modus ponens (P -> Q, P :: Q), as virtually all people state that the card with the A on it must be turned over. In contrast, people generally seem insensitive to modus tollens (P -> Q, ~Q :: ~P), as few people suggest that the card showing the even number (the 4) must be turned over. Further support for this finding comes from studies of syllogistic reasoning in which people exhibit systematic errors in their ability to identify the valid conclusions that follow from a pair of premises (Johnson-Laird, 1983).
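
The normative analysis of the task can be written out explicitly. The sketch below (our own toy formalization) enumerates what each card's hidden side might show and reports which cards could falsify the rule; the normatively correct selection is the A (modus ponens) and the 4 (modus tollens).

    # Toy formalization of the Wason task: a card should be turned over
    # iff some possible hidden side would falsify "vowel -> odd number".

    cards = ['A', '4', '7', 'J']
    vowels = set('AEIOU')

    def could_falsify(visible):
        # Representative hidden sides: a number if a letter is showing,
        # a letter if a number is showing.
        hidden_options = ['1', '2'] if visible.isalpha() else ['A', 'J']
        for hidden in hidden_options:
            if visible.isalpha():
                letter, number = visible, hidden
            else:
                letter, number = hidden, visible
            if letter in vowels and int(number) % 2 == 0:
                return True       # a vowel paired with an even number
        return False

    print([c for c in cards if could_falsify(c)])   # ['A', '4']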

50. Some researchers have tried to explain these errors by appealing to abstract logical rules that differ in their ease of acquisition (Rips, 1994). However, much work has focused on more content-based structures that might be used to solve logical problems. For example, Johnson-Laird and his colleagues (Johnson-Laird, 1983; Johnson-Laird, Byrne, & Tabossi, 1989; Johnson-Laird & Byrne, 1991) have suggested that people solve logical reasoning problems by constructing mental models that contain familiar objects and use inference rules derived from familiar situations. Consistent with this claim, it has been demonstrated that people's performance on logical reasoning tasks like the Wason selection task is much better when the situation is specific and familiar than when it is abstract (for example, humans perform the Wason selection task rather well when the underlying objective is to catch cheaters; Cosmides, 1989). Other researchers too have argued that people's reasoning processes are not abstract, suggesting that people have reasoning schemas that are tied to frequently encountered social and pragmatic situations (Cheng & Holyoak, 1989; Cosmides, 1989). Although debate continues over the exact nature of people's reasoning processes, there is general agreement that content has a strong influence on this process.

51. The content-bound nature of reasoning has led some researchers to assume that the bulk of human reasoning is inseparable from the world. Robots developed by Brooks (1991) embody this assumption. Brooks's robots do not form extensive structures to describe their environments; they only use information that is immediately available and store only transient information as they navigate the world. In psychology, the study of situated action also takes this approach. For example, Hutchins (1995) performed a far-reaching study of navigators aboard naval ships. He argues that the complex task of plotting a course for a ship involves deep cognitive work by at least some of the participants, but it also requires extensive use of tools and of shared information processing. No individual has an abstract structure of the entire navigation task.

52. Cognitive linguists have also taken the view that mental structures are not entirely abstract. Langacker (1986) suggests that syntactic structures reflect states of the world. The encoding of prepositions like "above" and "below" is assumed to be tied to structures that encode spatial information rather than simply reflecting abstract structures. The linguistic representations are symbolic, but they are assumed to be symbols that are closely tied to perceptual aspects of the world. This contrasts with the amodal verbal symbols often used in linguistic models. Thus, cognitive linguistics assumes a much closer connection between syntax and semantics than does classical linguistics.

53. Mainstream research in cognitive science has also shifted away from the use of abstract logical forms toward more content-based approaches. In the study of categorization, significant progress has been made by assuming that people store specific episodes rather than abstractions of category structure (Barsalou, 1999; Brooks, 1978; Medin & Schaffer, 1978; Nosofsky, 1986). Research on problem solving has demonstrated that people solve new problems by analogy with previously encountered problems rather than on the basis of abstracted solution procedures (Bassok, Chase & Martin, 1998; Novick, 1990; Reed & Bolstad, 1991; Ross, 1984). In AI, the field of case-based reasoning has taken as a fundamental assumption that it is easier to store, retrieve, and tweak existing cases than to form abstract rules, derive procedures for recognizing when they should be used, and then adapt them for application in new situations (Kolodner, 1993; Schank, Kass, & Riesbeck, 1994).

54. These examples demonstrate that there are unlikely to be general-purpose, context-free schemas and processes ready to be deployed wherever they are needed. The fact that cognitive processing generally shows strong effects of content means only that most mediating states contain some information about the context in which they were formed. It does not mean that there is no highly abstract information stored in some mediating state somewhere. The information stored within an individual likely differs in its degree of abstractness. Some types of inference schemas (like modus ponens) seem so obvious and independent of the domain that we may very well store them as abstract rules (see Rips, 1994, for a similar discussion). Other types of inferences seem to rely heavily on the domain. The main question to be answered by cognitive science is how many kinds of mediating states are abstract and how many are concrete, and what level of abstraction is used by different cognitive processes. Currently, the balance seems to favor concreteness for many cognitive processes.

III.5. ARE MEDIATING STATES RULE-GOVERNED?

55. A "rule" is an operator whose antecedent makes reference to mediating states within the system and whose consequent changes the values of mediating states and helps to control effectors that interact with the world. It has been argued that rules should not be part of explanations of cognitive processing. This is an attack on a view of cognitive science that borrows heavily from classical AI, Piagetian stage theory, and Chomskian linguistics, all of which are rooted in the belief that behavior is inherently rule-governed.

56. In classical AI reasoning systems, the rules are generally inference schemas or productions (Anderson, 1983b, 1993; Newell, 1990; Pollack, 1994). The rules may also be statistical procedures that are supposed to capture crucial elements of expert reasoning behavior. In AI research on problem solving, a problem is cast as a discrepancy between a beginning state and an end state, and problem solvers are assumed to have an array of rules (or operators) that can be applied to reduce this discrepancy (Newell & Simon, 1963). On this view, problem solving is a search through a problem space generated by the application of rules to the current state.
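
A minimal worked example may help fix the idea. The sketch below casts a toy water-jug problem as a start state, a goal, and a handful of operators, and searches the problem space they generate. It is a schematic illustration of the search view, not a reconstruction of GPS or any particular production system.

    # Schematic problem-space search: states, rules (operators), and
    # breadth-first search through the space the rules generate.
    # Toy problem: measure 2 units with a 4-unit and a 3-unit jug.

    from collections import deque

    def rules(state):
        a, b = state                                  # jug contents
        yield (4, b); yield (a, 3)                    # fill either jug
        yield (0, b); yield (a, 0)                    # empty either jug
        t = min(a, 3 - b); yield (a - t, b + t)       # pour jug a into jug b
        t = min(b, 4 - a); yield (a + t, b - t)       # pour jug b into jug a

    def solve(start=(0, 0), goal=2):
        frontier, seen = deque([[start]]), {start}
        while frontier:
            path = frontier.popleft()
            if goal in path[-1]:
                return path                           # states from start to goal
            for nxt in rules(path[-1]):               # apply every operator
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(path + [nxt])

    print(solve())   # [(0, 0), (0, 3), (3, 0), (3, 3), (4, 2)]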

57. A rule-governed approach is also evident in developmental psychology. As Smith and Sera (1992) point out, many developmental theories begin with the adult behavior as the expected end-state and then develop a theory that leads inexorably from some (often chaotic) beginning state to a more ordered approximation of adult competence. Adult behavior is often described in terms of a system of rules, and children are then monitored until they seem to be sensitive to the proper set of adult rules. For example, in Piagetian studies of the balance beam, children are given long blocks of various shapes and encouraged to try to balance them on a fulcrum. They are monitored for the development of the correct rule that the downward force of a weight is a function of both the weight and its distance from the fulcrum. In this task, children's behavior is often described as the development of intermediate (and incorrect) rules like "the fulcrum must always be in the center." On this view, developmental milestones consist of the acquisition of particular rules.

58. In many ways, these models of development resemble linguistic models. A central tenet of modern linguistics is that syntactic structure is guided by a highly abstract and universal set of rules determining which sentences are grammatical in a given language. Linguistics is concerned primarily with linguistic competence -- an accurate description of the grammar of a given language. Psychologists who have adopted this framework (and have studied linguistic performance) have assumed that there is some mental representation of these syntactic structures. On this view, sentences of a language are constructed through the application of grammatical rules. Psycholinguistic models posit processes in which rules are applied to linguistic input that allow the structure of the sentence to be determined from its surface form.

59. The use of rules in cognitive models may have a long history, but it is also the source of many anti-representationalist sentiments. A central argument by Thelen and Smith (1994) is that cognitive development does not involve the acquisition of rules. They use the development of locomotor ability as an example. As they point out, very young infants, if their weight is supported externally, exhibit a stepping motion when their feet are stimulated. This ability later seems to disappear, only to re-emerge still later in development. Many theories of locomotion used this description of the behavior of the average child as the basis of theories of motor development. These theories often posit maturational changes that permit the observed behaviors to occur.

60. The actual pattern of individual children's development, however, is more complex. Children supported in water exhibit the same stepping behavior as younger infants supported out of water, leading to the conclusion that increases in the weight of the legs may be causing the observed cessation of stepping behavior. Support for this comes from studies in which leg weights are attached to very young infants, which causes the stepping movements to stop. The fine details of locomotor behavior suggest that children's development is guided not by the acquisition of a small set of rules, but rather by the interaction of multiple physical and neural constraints. Behavior is guided in part by the maturation of brain and tissue. It is also guided by a child's interaction with the outside world. A variety of factors must come together to shape development.

61. These examples provide compelling evidence that rules are not needed in explanations of many cognitive processes. Many systems can be described by rules, but that is not the same thing as using rules to carry out a process. For example, the steam-engine governor has a mediating state (the speed with which the governor spins), and a mechanism that makes use of that state (a combination of arms and levers that closes the valve as the height of the arms increases). The system is not checking the state of memory in order to determine the appropriateness of a rule. Thus, the system is not actually using a rule to carry out its behavior.

62. Although we agree that many cognitive processes do not need rules, it does not follow that rules are not a part of any cognitive system. Indeed, some cognitive processes seem like good candidates for being rule-based systems. For example, there have been no convincing accounts to date that the statistical structure of a child's linguistic input is sufficient to lead the child to acquire a grammatical system consistent with the language. Furthermore, there have been some impressive demonstrations of rule-use. For example, Kim, Pinker, Prince, and Prasada (1991) demonstrated that verbs derived from nouns are given a regular past-tense form, even when the noun is the same as a verb that takes an irregular past-tense form. So in describing a baseball game an announcer will say "Johnson flied out to center field his first time up" rather than "Johnson flew out..." Furthermore, Marcus, Brinkmann, Clahsen, Wiese, and Pinker (1995) have demonstrated that the way a verb is given its past tense form or a noun its plural form need not be a function of the frequency of that morphological ending or the similarity of the verb or noun to known verbs and nouns in the language. These findings support the view that grammar is mediated by rule-governed processes.
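
The denominal-verb finding can be rendered as a small rule-plus-exception sketch (our own illustration; the two-entry lexicon and the spelling rule are crude assumptions, not a published model). Stored irregular forms normally pre-empt the regular rule, but a verb derived from a noun has no verb root to match a stored exception, so the rule applies.

    # Rule-plus-exception past-tense sketch. The lexicon and the
    # orthographic rule are toy assumptions, not a published model.

    irregular_past = {'fly': 'flew', 'go': 'went'}   # stored exceptions

    def regular(verb):
        # The rule-governed default, with a crude y -> ied spelling rule.
        if verb.endswith('y') and verb[-2] not in 'aeiou':
            return verb[:-1] + 'ied'
        return verb + 'ed'

    def past_tense(verb, derived_from_noun=False):
        # A denominal verb bypasses the stored exception: its root is a
        # noun, so the irregular verb entry is never consulted.
        if not derived_from_noun and verb in irregular_past:
            return irregular_past[verb]
        return regular(verb)

    print(past_tense('fly'))                          # 'flew' (irregular root)
    print(past_tense('fly', derived_from_noun=True))  # 'flied' (as in "flied out")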

63. We close this section with one technical point about rule-based systems and one methodological point related to it. If the computational hypothesis about the nature of cognition is correct (and it is a hypothesis, not a loose metaphor (Dietrich, 1990; 1994, p. 15)), then it MUST be possible in principle to model cognition using rules, because it is a theorem in computability theory that a rule-based machine can do everything a Turing machine can do. Put another way, if cognition involves the execution of algorithms then, at least in principle, we can model all those algorithms using rule execution.

64. Even if the computational hypothesis is wrong, and cognition is carried out in some non-computational way, it is still reasonable to use rules in cognitive models when rules provide a descriptive language that is both explanatorily adequate and easy to use. This use of rules is akin to a programmer's use of a high-level programming language like C or Pascal rather than assembly language. At present, rule-based systems should not be abandoned as a technique for cognitive explanation when all that has been demonstrated so far is that SOME cognitive processes are not well characterized as rule-based and that cognitive science often uses rules that are too coarse-grained.

IV. GENERAL DISCUSSION

65. Arguments about the need for representations typically have the form: "Some cognitive process C does not require internal states with property X; therefore, no cognitive process requires internal states with property X. Furthermore, because X is constitutive of representations in general, it follows that representations as such are not needed." The five properties of mediating states examined in part C have all been the subject of arguments of this type.

66. This argument form is a hasty generalization: each of the five properties discussed in part C is clearly not needed to explain certain cognitive processes, but each is required to explain others. The term "hasty" is used advisedly here, because a central complaint of anti-representationalists is that cognitive science has not made enough progress within the representational paradigm to warrant the continued use of representations; instead, they argue, cognitive science consists of a collection of nearly independent micro-theories, each supported by its own body of data.

67. The arguments in this target article directly address the issue of progress. We suggest that none of the five properties discussed is itself constitutive of representation, since in each case the property can be removed and yet the central condition on representation remains. That insight was the basis of the definition of a mediating state. Indeed, it appears that there is nothing more to being a representation than being a mediating state. Mediating states not only constitute the general class to which more specific kinds of representations belong; they capture the essence of representation. Instead of debating whether representations exist, or what the one true representational formalism is, cognitive science will make more progress by studying which properties of mediating states (i.e., representations) are needed to explain particular classes of cognitive processes. This target article is a defense of representation because it suggests that all cognitive scientists already accept the core properties of representation: debates over representation in cognitive science are actually debates about what additional properties of representations are necessary to understand cognitive processing. Where the debate goes awry is in assuming that there is only ONE set of properties that will suffice for all cognitive processes.

68. We hasten to add, however, that crucial questions about representation remain. Chief among them is how representations get their content. We have not suggested an answer to this deep puzzle; however, progress can be made in much of cognitive science without answering it. Progress on the content question will be made only by using true mediating states in cognitive models. Most existing models do not use actual mediating states, because there is no specific world outside the model to which the states are related; instead, the developer puts labels on the states that are meant to invoke concepts in the minds of readers. It is possible to learn a lot about the computational characteristics of a class of data structures in this way, but not about how mediating states come to have content. By leaving aside debates over the existence of representations, cognitive science can focus on the crucial issues of which kinds of representations are used by different cognitive processes and how those representations come to have their content.

REFERENCES

Anderson, J. R. (1983a). The Architecture of Cognition. Cambridge, MA: Harvard University Press.

Anderson, J. R. (1983b). A spreading activation theory of memory. Journal of Verbal Learning and Verbal Behavior, 22, 261-295.

Anderson, J. R. (1993). Rules of the Mind. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.

Baillargeon, R. (1987). Object permanence in 3.5- and 4.5-month-old infants. Developmental Psychology, 23, 655-664.

Barsalou, L. W. (1999). Perceptual symbol systems. Behavioral and Brain Sciences, 22 (in press). http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.depue.html

Bassok, M., Chase, V. M., & Martin, S. A. (1998). Adding apples and oranges: Semantic constraints on application of formal rules. Cognitive Psychology, 35(2), 99-134.

Bickhard, M. & Terveen, L. (1995). Foundational Issues in Artificial Intelligence and Cognitive Science. Amsterdam, The Netherlands: North-Holland Elsevier.

Biederman, I. (1987). Recognition-by-components: A theory of human image understanding. Psychological Review, 94(2), 115-147.

Brooks, L. (1978). Non-analytic concept formation and memory for instances. In E. Rosch & B. B. Lloyd (Eds.), Cognition and Categorization (pp. 169-211). Hillsdale, NJ: Lawrence Erlbaum Associates.

Brooks, R. A. (1991). Intelligence without representation. Artificial Intelligence, 47, 139-159.

Chalmers, D. J. (1990). Why Fodor and Pylyshyn were wrong: The simplest refutation. In Proceedings of the Twelfth Annual Conference of the Cognitive Science Society, Cambridge, MA.

Cheng, P. W., & Holyoak, K. J. (1989). On the natural selection of reasoning theories. Cognition, 33, 285-313.

Chomsky, N., & Halle, M. (1991). The sound pattern of English. Cambridge, MA: The MIT Press.

Clark, A. (1993). Associative Engines: Connectionism, concepts, and representational change. Cambridge, MA: The MIT Press.

Collins, A. M., & Loftus, E. F. (1975). A spreading-activation theory of semantic processing. Psychological Review, 82(6), 407-428.

Cosmides, L. (1989). The logic of social exchange: Has natural selection shaped how humans reason? Studies with the Wason selection task. Cognition, 31(3), 187-276.

Dietrich, E. (1990). Computationalism. Social Epistemology, 4(2), 135-154.

Dietrich, E. (1994). Thinking Computers and the Problem of Intentionality. In E. Dietrich, ed., Thinking computers and virtual persons. San Diego: Academic Press.

Dretske, F. (1981). Knowledge and the Flow of Information. Cambridge, MA: MIT/Bradford.

Elman, J. L. (1990). Finding structure in time. Cognitive Science, 14(2), 179-212.

Falkenhainer, B., Forbus, K. D., & Gentner, D. (1989). The structure-mapping engine: Algorithm and examples. Artificial Intelligence, 41(1), 1-63.

Fodor, J., & McLaughlin, B. (1990). Connectionism and the problem of systematicity: Why Smolensky's solution doesn't work. Cognition, 35, 183-204.

Fodor, J. A., & Pylyshyn, Z. W. (1988). Connectionism and cognitive architecture: A critical analysis. Cognition, 28, 3-71.

Gentner, D. (1983). Structure-mapping: A theoretical framework for analogy. Cognitive Science, 7, 155-170.

Gentner, D. (1989). The mechanisms of analogical learning. In S. Vosniadou & A. Ortony (Eds.), Similarity and Analogical Reasoning (pp. 199-241). New York: Cambridge University Press.

Gentner, D., & Markman, A. B. (1997). Structural alignment in analogy and similarity. American Psychologist, 52(1), 45-56.

Harnad, S. (1990). The symbol grounding problem. Physica D, 42, 335-346.

Herskovits, A. (1986). Language and spatial cognition: An interdisciplinary study of the prepositions in English. New York: Cambridge University Press.

Holyoak, K. J., & Thagard, P. (1989). Analogical mapping by constraint satisfaction. Cognitive Science, 13(3), 295-355.

Hummel, J. E., & Biederman, I. (1992). Dynamic binding in a neural network for shape recognition. Psychological Review, 99(3), 480-517.

Hummel, J. E., & Holyoak, K. J. (1997). Distributed representations of structure: A theory of analogical access and mapping. Psychological Review, 104(3), 427-466.

Hutchins, E. (1995). Cognition in the wild. Cambridge, MA: The MIT Press.

Johnson-Laird, P. N. (1983). Mental Models. New York: Cambridge University Press.

Johnson-Laird, P. N., Byrne, R. M., & Tabossi, P. (1989). Reasoning by model: The case of multiple quantification. Psychological Review, 96(4), 658-673.

Johnson-Laird, P. N., & Byrne, R. M. J. (1991). Deduction. Hillsdale, NJ: Lawrence Erlbaum Associates.

Kaplan, A. S., & Medin, D. L. (1997). The coincidence effect in similarity and choice. Memory and Cognition, 25(4), 570-576.

Keane, M. T., Ledgeway, T., & Duff, S. (1994). Constraints on analogical mapping: A comparison of three models. Cognitive Science, 18, 387-438.

Keysar, B., & Bly, B. (1995). Intuitions of the transparency of idioms: Can one keep a secret by spilling the beans? Journal of Memory and Language, 34(1), 89-109.

Kim, J. J., Pinker, S., Prince, A., & Prasada, S. (1991). Why no mere mortal has ever flown out to center field. Cognitive Science, 15(2), 173-218.

Klein, M., Shapiro, E., & Kandel, E. R. (1980). Synaptic plasticity and the modulation of the Ca++ current. Journal of Experimental Biology, 89, 117-157.

Kolodner, J. (1993). Case-based Reasoning. San Mateo, CA: Morgan Kaufmann Publishers, Inc.

Koriat, A. (1994). Memory's knowledge of its own knowledge: The accessibility account of the feeling of knowing. In J. Metcalfe & A. P. Shimamura (Eds.), Metacognition: Knowing about knowing. Cambridge, MA: The MIT Press.

Kosslyn, S. M. (1994). Image and Brain. Cambridge, MA: The MIT Press.

Landau, B., & Jackendoff, R. (1993). "What" and "where" in spatial language and spatial cognition. Behavioral and Brain Sciences, 16(2), 217-266.

Landauer, T. K., & Dumais, S. T. (1997). A solution to Plato's problem: The Latent Semantic Analysis theory of acquisition, induction, and representation of knowledge. Psychological Review, 104(2), 211-240.

Langacker, R. W. (1986). An introduction to cognitive grammar. Cognitive Science, 10(1), 1-40.

Marcus, G. F., Brinkmann, U., Clahsen, H., Wiese, R., & Pinker, S. (1995). German inflection: The exception that proves the rule. Cognitive Psychology, 29, 189-256.

Markman, A. B., & Gentner, D. (1993). Structural alignment during similarity comparisons. Cognitive Psychology, 25(4), 431-467.

Medin, D. L., & Schaffer, M. M. (1978). Context theory of classification learning. Psychological Review, 85(3), 207-238.

Metcalfe, J., & Shimamura, A. P. (Eds.). (1994). Metacognition: Knowing about knowing. Cambridge, MA: The MIT Press.

Newell, A. (1990). Unified Theories of Cognition. Cambridge, MA: Harvard University Press.

Newell, A., & Simon, H. A. (1963). GPS: A program that simulates human thought. In E. A. Feigenbaum & J. Feldman (Eds.), Computers and Thought. R. Oldenbourg KG.

Norman, D. A., & Rumelhart, D. E. (1975). Explorations in Cognition. San Francisco: W.H. Freeman.

Nosofsky, R. M. (1986). Attention, similarity and the identification-categorization relationship. Journal of Experimental Psychology: General, 115(1), 39-57.

Novick, L. R. (1990). Representational transfer in problem solving. Psychological Science, 1(2), 128-132.

Palmer, S. E. (1977). Hierarchical structure in perceptual representations. Cognitive Psychology, 9, 441-474.

Perner, J. F. (1991). Understanding the representational mind. Cambridge, MA: The MIT Press.

Pollack, J. B. (1990). Recursive distributed representations. Artificial Intelligence, 46(1-2), 77-106.

Pollock, J. L. (1994). Justification and defeat. Artificial Intelligence, 67, 377-407.

Reder, L. M., & Ritter, F. E. (1992). What determines initial feeling of knowing? Familiarity with question terms, not with the answer. Journal of Experimental Psychology: Learning, Memory, and Cognition, 18(3), 435-451.

Reed, S. K., & Bolstad, C. A. (1991). Use of examples and procedures in problem solving. Journal of Experimental Psychology: Learning, Memory, and Cognition, 17(4), 753-766.

Regier, T. (1996). The human semantic potential. Cambridge, MA: The MIT Press.

Rips, L. J. (1994). The Psychology of Proof: Deductive reasoning in human thinking. Cambridge, MA: The MIT Press.

Ross, B. H. (1984). Remindings and their effects in learning a cognitive skill. Cognitive Psychology, 16, 371-416.

Schank, R. C., & Abelson, R. (1977). Scripts, Plans, Goals and Understanding. Hillsdale, NJ: Lawrence Erlbaum Associates.

Schank, R. C., Kass, A., & Riesbeck, C. K. (1994). Inside case-based explanation. Hillsdale, NJ: Lawrence Erlbaum Associates.

Shannon, C. (1949). The Mathematical Theory of Communication. Urbana, IL: Univ. of Illinois Press.

Simonson, I. (1989). Choice based on reasons: The case of attraction and compromise effects. Journal of Consumer Research, 16, 158-174.

Smith, L. B. (1989). From global similarities to kinds of similarities: The construction of dimensions in development. In S. Vosniadou & A. Ortony (Eds.), Similarity and Analogical Reasoning (pp. 146-178). New York: Cambridge University Press.

Smith, L. B., & Sera, M. D. (1992). A developmental analysis of the polar structure of dimensions. Cognitive Psychology, 24(1), 99-142.

Smolensky, P. (1988). On the proper treatment of connectionism. Behavioral and Brain Sciences, 11(1), 1-74.

Smolensky, P. (1991). Connectionism, constituency, and the language of thought. In B. Loewer & G. Rey (Eds.), Meaning in Mind: Fodor and his critics. Cambridge, MA: Blackwell.

Thelen, E., & Smith, L. B. (1994). A Dynamic Systems Approach to the Development of Cognition and Action. Cambridge, MA: The MIT Press.

Tversky, A. (1977). Features of similarity. Psychological Review, 84(4), 327-352.

Tversky, A., & Gati, I. (1982). Similarity, separability and the triangle inequality. Psychological Review, 89(2), 123-154.

van Gelder, T., & Port, R. F. (1995). It's about time: An overview of the dynamical approach to cognition. In R. F. Port & T. van Gelder (Eds.), Mind as Motion. Cambridge, MA: The MIT Press.

Wason, P. C., & Johnson-Laird, P. N. (1972). Psychology of Reasoning: Structure and Content. London: Routledge.

Wellman, H. M. (1990). The Child's Theory of Mind. Cambridge, MA: The MIT Press.
