David J. Bryant (1992) A Spatial Representation System in Humans. Psycoloquy: 3(16) Space (1)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).

Target Article on Space

David J. Bryant
Department of Psychology 125 NI
Boston, MA 02115



ABSTRACT: This target article reviews evidence for the functional equivalence of spatial representations of observed environments and environments described in discourse. It is argued that people possess a spatial representation system that constructs mental spatial models on the basis of perceptual and linguistic information. Evidence for a distinct spatial system is reviewed.


KEYWORDS: spatial representation, spatial models, cognitive maps, linguistic structure.


1.1 Space can be understood through perception and language, but are the mental representations of space the same in both cases? In this target article, I will argue that they are. Evidence for this position comes from a number of areas, including mental imagery (see Finke & Shepard, 1986), but I will concentrate on studies of spatial knowledge, where there is ample evidence that people can construct accurate spatial representations of environments conveyed by verbal descriptions (Ehrlich & Johnson-Laird, 1982; Foos, 1980; Franklin, 1991; Franklin & Tversky, 1990; Mani & Johnson-Laird, 1982). Moreover, such representations appear to be equivalent in form and operation to representations of observed environments.

1.2 A number of empirical effects observed in spatial learning studies can be obtained when subjects do not study a map or physical route but instead read a description of an environment. For example, Denis and Cocude (1989) found that mental scanning functions for described maps are the same as those for observed maps. People's representation of distance in a route they have walked (Sadalla & Magel, 1980; Sadalla & Staplin, 1980) or studied in a map (Thorndyke, 1981) is also influenced by the number of turns and points along the path. Franklin (1991) found this to be true when subjects read narratives describing a person travelling a route. When asked to verify route statements (e.g., "From A to B involves going by way of C"), subjects took longer to respond for routes where there was a greater distance and more turns and intervening locations between test locations. This finding implies that described routes are represented in terms of turns and the number of intervening objects as well as distance.

1.3 McNamara (1986) reported that the proximity of objects in an environment influences the extent to which object names prime one another on a verbal recognition test. An item facilitated recognition of another if the two had been close together in the environment, relative to an object that had been further away. Recently, Denis and Zimmer (in press) have observed spatial priming effects in recognition of objects in described maps. Recognition decisions were primed when the preceding test item was an object that was physically close to the target in the described island, but not when it was a far object, replicating McNamara's (1986) findings. The propositional structure of descriptions did not influence recognition response times.

1.4 People's spatial representations of descriptions can be seen to interact with perceptual spatial systems. Easton and Bentzen (1987) compared the performance of sighted and congenitally blind subjects on a finger maze task as subjects verified either spatial or nonspatial statements (control groups performed only the tracing task). Both sighted and blind subjects made more errors and took longer on the tracing task when verifying spatial statements. The performance of the nonspatial and control groups did not differ. Similarly, performing a visuospatial tracking task interferes with a person's ability to form a coherent spatial model from a verbal description (Oakhill & Johnson-Laird, 1984). In both cases, interpreting verbal spatial directions and performing a spatial task appeared to compete for the resources of the same cognitive system, indicating that people represent verbally presented route information in a spatial format.

1.5 Further evidence that spatial descriptions are represented in a spatial format comes from the study of mental models. People generally represent texts in mental models rather than by retaining the linguistic structure of the text (Glenberg, Meyer & Lindem, 1987; Johnson-Laird, 1983; Morrow, Greenspan & Bower, 1987). Mental models preserve physical properties of space such as relative position (Bryant, Tversky & Franklin, 1992; Franklin & Tversky, 1990; Mani & Johnson-Laird, 1982) and relative distance (Glenberg et al., 1987; Morrow et al., 1987). More important, spatial features organize information within the models and determine the accessibility of information from mental models. In one study, Glenberg et al. (1987) found that objects mentioned in a narrative remain foregrounded and available as a function of their spatial proximity to the character of the story. Morrow et al. (1987) replicated this effect, finding that objects earlier described as being in the same room as the character were more accessible to readers than objects in another room. Moreover, subjects' reaction times increased with the distance between the protagonist and the object.

1.6 Franklin and Tversky (1990) observed that people use a kind of spatial model (a "spatial framework") to represent verbal descriptions of a person inside an array of objects. A spatial framework organizes objects and their locations within the frame of reference created by the observer's three body axes (head/feet, front/back, left/right). In addition, the spatial framework renders certain axes more accessible at retrieval, depending on features such as the axis's perceptual and physical asymmetry or relation to gravity. In the general case, objects located toward the head and feet are accessed faster than objects in the front, which are accessed faster than objects in the back, which are accessed faster than objects on the left and right. Similar spatial frameworks are used for descriptions with an external perspective, where the observer views objects in front of them (Bryant et al., 1992).

1.7 Although most research on mental models has focused on text comprehension, researchers generally believe that mental models are perceptually based (e.g., Glenberg et al., 1987; Johnson-Laird, 1983). Indeed, people have been found to use spatial frameworks like those created for texts to retrieve spatial information about observed scenes (Bryant, 1991). Thus, people create the same sorts of spatial memory representations whether they read about an environment or see it themselves.


2.1 The evidence reviewed suggests that perceptual and linguistic spatial information is represented by the same cognitive system. In this section, I would like to argue that there is a distinct spatial representational system (SRS) that is linked to both perceptual and linguistic systems, but which represents space in a format that is unlike that of either of these systems. The idea that there must be a common representation of perceptual and linguistic inputs is not new. A number of theorists have suggested this in one way or another (e.g., Clark, 1973; Miller & Johnson-Laird, 1976; Jackendoff, 1987; Jackendoff & Landau, 1991; Talmy, 1983). I will first describe my proposal and then discuss how it is similar to and different from these earlier ones.

2.2 I assume that visual and linguistic inputs (as well as other forms of perceptual information not discussed here) are initially analyzed by separate systems that are dedicated to representing a particular form of input and that extract spatial information available in one modality. The visual perceptual system detects objects and their relative direction and distance and determines spatial relations between objects and the observer's body. The language system operates during comprehension to analyze linguistic inputs and extract the meaning of sentences. Following Johnson-Laird (1983), I assume that language comprehension involves at least two stages. One builds a propositional representation of the discourse, perhaps through a bottom-up process involving levels of local and global analysis (e.g., van Dijk & Kintsch, 1983). The second stage involves the construction of a nonpropositional mental model that is based on both the discourse representation and general knowledge that guides inference making. In this discussion, the focus will be on the issue of how the SRS creates spatial models from discourse representations. The lexical characteristics of language must certainly be important because they determine how the discourse representation is formed. Differences in the lexical coding of spatial concepts in language presumably affect the nature of the discourse representation that is built up. However, it is the discourse representation in conjunction with general knowledge and contextual cues that guides the construction of mental spatial models by the SRS.

2.3 The results of perceptual and linguistic analyses provide the necessary information for the spatial representation system but they do not themselves constitute spatial models because information is represented in forms specific to the type of input. Moreover, the goal of the SRS is not to represent strictly what is seen or heard in discourse, but to represent an environment that has structure beyond what can be immediately perceived or described. To represent the locations of objects in the environment, the SRS uses three frames of reference. These reference frames are coordinate systems in which locations can be specified along three dimensions. One is the egocentric frame of reference, defined by the three body axes (head/feet, front/back, left/right). The allocentric frame is composed of orthogonal axes set outside the observer. The axes may be centered on some prominent landmark in the environment or aligned to global features (e.g., cardinal compass directions). Finally, I assume an external reference frame based on the external spatial framework analysis (see Bryant et al., 1992), which is composed of axes based on the body but projected forward in the field of view. Another possible frame of reference is object-centered (see Shepard & Hurwitz, 1984), as is often proposed to play a role in object recognition (e.g., Marr, 1982). The object-centered reference frame is not included here because it is unclear what role this system plays in representing the environment. For example, Farah, Brunn, Wong, Wallace, and Carpenter (1990) found that people do not allocate visual attention to spatial location with respect to object-centered frames of reference.

2.4 Representing the location of an object is accomplished by selecting a frame of reference and determining the object's position along each dimension in the coordinate system. Perceptually, an individual needs to estimate an object's distance and direction relative to the body axes to compute its egocentric coordinates. The object's allocentric location can be determined from its egocentric coordinates (provided the observer knows his or her own position in the allocentric frame) by relatively simple geometric computations involving the rotation and translation of the egocentric space. These calculations are described by Gallistel (1990, pp. 106-110), who also discusses mechanisms of position estimation in various animals.
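The rotation-and-translation computation is simple enough to sketch concretely. The following toy example covers only the two-dimensional case, and its axis conventions (egocentric x = the observer's right, egocentric y = the observer's front, heading measured counterclockwise from the allocentric +x axis) are assumptions chosen for illustration, not a claim about the actual computation Gallistel describes:

```python
import math

def ego_to_allo(ego_xy, observer_xy, heading):
    """Convert an object's egocentric coordinates to allocentric ones.

    The egocentric axes are rotated into allocentric orientation (using
    the observer's heading) and the result is translated by the
    observer's allocentric position.
    """
    ex, ey = ego_xy
    ox, oy = observer_xy
    # Allocentric directions of the observer's front and right axes.
    front = (math.cos(heading), math.sin(heading))
    right = (math.sin(heading), -math.cos(heading))
    # Rotation: express the egocentric offset in allocentric axes,
    # then translation: add the observer's allocentric position.
    ax = ox + ex * right[0] + ey * front[0]
    ay = oy + ex * right[1] + ey * front[1]
    return (ax, ay)
```

For an observer standing at allocentric (2, 3) and facing the allocentric +y direction, an object one unit straight ahead comes out at (2, 4), and an object one unit to the right at (3, 3); extending this to three dimensions adds a vertical axis but the same rotate-then-translate logic applies.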

2.5 Constructing a mental model from discourse requires the same distance and direction information to compute location in a coordinate system. However, discourse rarely specifies space as completely or precisely as perception. Therefore, in constructing a mental model of a described scene one must rely on general knowledge to make inferences and assumptions. For example, descriptions often do not explicitly state distances between objects. However, if one were to read that the book was on the shelf across the room, one could infer a likely (if somewhat arbitrary) distance between the book and observer. Creating a mental model involves two processes -- applying rules to translate spatial information in the propositional discourse representation into coordinate information, and inferring spatial information not available in the discourse representation. A comprehensive theory of procedural semantics is beyond the scope of this paper, but Johnson-Laird (1983, pp. 243-258) outlines general rules and describes a specific model for converting spatial descriptions into mental models. Johnson-Laird's model creates two-dimensional models of descriptions written from an external perspective but it could easily be extended to three dimensions and other perspectives.
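As a rough illustration of what such translation rules might look like, the sketch below places objects in a two-dimensional coordinate model from simple locative propositions, supplying a default unit distance where the discourse specifies none. The relation names, offsets, and unit-distance default are assumptions for this example, not Johnson-Laird's actual procedure:

```python
# Assumed translation rules: each relation maps to a unit offset from
# the ground object (a stand-in for inferred, "somewhat arbitrary"
# distances the discourse leaves unstated).
OFFSETS = {"left-of": (-1, 0), "right-of": (1, 0),
           "in-front-of": (0, -1), "behind": (0, 1)}

def build_model(propositions):
    """Build a coordinate model from (relation, figure, ground) triples,
    anchoring the first-mentioned ground object at the origin."""
    coords = {}
    for rel, fig, ground in propositions:
        if ground not in coords:
            coords[ground] = (0, 0)
        gx, gy = coords[ground]
        dx, dy = OFFSETS[rel]
        coords[fig] = (gx + dx, gy + dy)
    return coords

def left_of(coords, a, b):
    """Any pairwise relation is implicit in the coordinates and can be
    derived by comparison, even if never stated in the discourse."""
    return coords[a][0] < coords[b][0]
```

Note that once objects are placed, relations never stated in the discourse (e.g., whether A lies left of C) fall out of the coordinates for free, a property taken up again in the discussion of propositional versus coordinate representations below.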

2.6 The SRS produces mental spatial models that are memory representations of specific observed or described environments. The locations of objects in space are represented by their location in the coordinate space of the mental model. The model is based on a particular coordinate system, although one can maintain multiple models under some circumstances (Franklin, Tversky & Coon, in press). If multiple models are constructed, only one can be consulted at any one time because of constraints on the capacity of working memory (Mani & Johnson-Laird, 1982). The choice of reference frame is presumably made on the basis of task requirements. An egocentric frame is suited for maintaining a representation of space around oneself. An allocentric frame is useful for representing a fixed environment and for encoding object to object relations. An external frame is useful when one is concerned primarily with a set of objects in the visual field. A spatial representation must contain information about the identity of objects in space, but this information need not be detailed (see Jackendoff & Landau, 1991), and a pointer to object representations may be sufficient.
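A minimal sketch of such a representation follows; the class and field names are illustrative assumptions, not part of the SRS proposal. Each model pairs one frame of reference with object locations in that frame's coordinate space, and objects are stored only as pointers (keys) into a separate identity store, since the SRS need not hold detailed identity information itself:

```python
from dataclasses import dataclass, field

@dataclass
class SpatialModel:
    """One mental spatial model: a single frame of reference plus
    object locations in that frame's three-dimensional coordinate
    space. Identity details live elsewhere; only pointers are kept."""
    frame: str  # "egocentric", "allocentric", or "external"
    locations: dict = field(default_factory=dict)  # object id -> (x, y, z)

    def place(self, obj_id, coords):
        """Encode an object's location in this model's coordinates."""
        self.locations[obj_id] = coords

    def lookup(self, obj_id):
        """Retrieve a location, or None if the object is not in the model."""
        return self.locations.get(obj_id)
```

On this picture, the constraint that only one model can be consulted at a time amounts to working memory holding a single active SpatialModel instance, with the task determining which frame that instance uses.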

2.7 Spatial framework effects and related findings (e.g., Hintzman, O'Dell & Arndt, 1981) suggest that the SRS incorporates differential accessibility of objects in one's spatial representation. Within a particular frame of reference, the accessibility of objects is determined not just by spatial properties of the coordinate system, but also by factors that influence the salience of spatial dimensions and how readily one can interact with objects at particular locations. For example, in a spatial framework, one's front is highly salient relative to one's back because body asymmetries direct attention and action in that direction (Bryant et al., 1992). A spatial framework incorporates this aspect of the environment through somewhat faster access to objects to the front. Preliminary evidence also suggests that judgment is more accurate for objects in front than for objects in back or to the side (Franklin, personal communication). In general, the SRS organizes spatial models to make directions with some special perceptual or behavioral status more accessible from memory. For egocentric and external frameworks, perceptual and physical asymmetries of the observer's body axes determine the accessibility of locations (Bryant et al., 1992; Franklin & Tversky, 1990). For the allocentric frame of reference, distance and relative direction have been found to determine accessibility (e.g., McNamara, 1986; Morrow et al., 1987).

2.8 This account, though speculative, makes a number of testable predictions. First, differential access to certain dimensions is a feature of the SRS rather than perceptual or linguistic systems. This implies that such effects should be observed in both perceptual and verbal tasks. Previous research has established differential access in verbal tasks (Bryant et al., 1992; Franklin & Tversky, 1990; Morrow et al., 1987), but the evidence in perceptual tasks is incomplete and somewhat inconsistent. Bryant (1991) found that subjects used spatial frameworks for observed scenes when they accessed objects from memory. However, spatial frameworks were not observed when subjects responded to a currently visible scene. This could reflect the use of lower-level visual information to access objects in a perceptual display, or a real difference in the representation of perceived versus described space. On the other hand, Hintzman et al. (1981) observed differential access to front/back versus left/right when subjects responded to a visible array of objects. The SRS account also assumes that one can readily integrate different sources of information in a single mental model. This suggests that people should be prone to something like source amnesia for spatial information because the SRS does not record the medium in which location was learned. Suggestive of this is Intraub and Hoffman's (1992) finding that people frequently report memory for pictured scenes they have not viewed but have merely read a description of, even after relatively short retention intervals.


3.1 Several theorists have previously made the general argument that there must be a common representational format for perceptual and linguistic knowledge (e.g., Clark, 1973; Levelt, 1984). For example, Miller and Johnson-Laird (1976) developed a theory in which spatial knowledge can be represented in propositional networks specifying the spatial relations among objects. Jackendoff (1987) and Jackendoff and Landau (1991) have likewise made the same general argument presented here: that perceptual and linguistic inputs are initially analyzed by separate systems through various levels of representation, then translated into a common representation that is modality independent. In particular, Jackendoff (1987) notes similarities between the levels of representation in Marr's (1982) theory of vision and the levels that exist in language comprehension. According to his theory, spatial information is represented at two interacting levels. There is a geometric level, corresponding to 3D visual models (in Marr's terminology), that encodes metric information, and an algebraic level, corresponding to conceptual structures, that encodes categorical relationships. Conceptual structures can be based on categories of spatial relations. Thus, a perceived or described array of objects is represented by 3D visual models that are linked in conceptual structures.

3.2 Although the SRS proposal has much in common with these earlier theories, there is at least one major difference. Miller and Johnson-Laird (1976, pp. 57-58) distinguish between relative spatial representation, in which space is defined by specific relations between objects, and absolute spatial representation, in which space is defined by a coordinate system that is independent of any objects that might be in the space. They argued that people use relative spatial representations, primarily because language relies on relativistic concepts to communicate about space. In Jackendoff's (1987) theory too, space is represented by the particular relations between objects, stored in conceptual structures like propositions, rather than by reference to an explicitly defined coordinate system. The SRS, on the other hand, encodes locations of objects in up to three coordinate systems, depending on the observer's perspective and task. It should be noted that the concept of absolute space is not limited to the allocentric frame of reference; egocentric space can also be absolute. Even though positions of objects do not remain fixed as a person moves or rotates, locations within the egocentric coordinate system remain constant relative to the three axes. Thus, one can represent absolute position in egocentric space independent of any objects that might be in that space.

3.3 The claim that space is represented in absolute coordinate systems is based, in part, on arguments of O'Keefe and Nadel (1978). They present physiological evidence that the mammalian hippocampus is specialized for coding place in absolute allocentric space. Recently, Feigenbaum and Rolls (1991) have recorded individual hippocampal neurons that respond to allocentric position. However, the SRS account also proposes that the body axes can serve as a frame of reference as allocentric coordinates do. Some evidence for this comes from studies by Farah et al. (1990) showing that attention to locations in space is allocated with respect to both allocentric and egocentric frames of reference. In addition, Kesner, Farnsworth, and DiMattia (1989) report evidence that areas in the mammalian frontal cortex are specialized for organizing egocentric cognitive maps. Tamura, Ono, Fukuda, and Nakamura (1990) have found hippocampal neurons in mammals that respond to locations in the egocentric frame of reference.

3.4 The structure of language also reflects the use of egocentric and allocentric reference frames. Miller and Johnson-Laird (1976) and Levelt (1984), among others, have distinguished between the deictic system of spatial reference, which interprets spatial terms relative to one's own perspective, and the intrinsic system, which interprets spatial terms with respect to the orientation of an external referent. These systems map onto egocentric and allocentric coordinate frames respectively. Levelt (1984) discusses limitations on the use of each system in language that could reflect translation rules between a linguistic representation and an underlying spatial representation.

3.5 Miller and Johnson-Laird (1976) and Jackendoff (1987) propose a largely (but not entirely) propositional representation of space. In this format, spatial relations between objects must be explicitly encoded in conceptual structures that describe those relationships. In a coordinate system, objects are encoded with respect to the three spatial axes so that relations between objects are not explicitly represented but are implicit in the structure of the coordinate system and can be derived from it. The distinction between spatial coordinate (often simply referred to as "spatial") and propositional representations is common in the mental-model literature, but the studies described in the first section do not provide unique support for an absolute representation. For example, propositional accounts can be devised to explain the effects of turns and intersections on route knowledge.

3.6 There are several sources of evidence against propositional representation of space. We can rule out the possibility that readers simply encode the verbatim propositional structure of texts because most studies use materials in which the text structure does not match the spatial organization of the described scenes. Also, the propositional structures of texts do not predict retrieval and priming effects in spatial texts (Denis & Zimmer, in press). There are, in addition, reasons to reject abstract nonverbatim propositional representations. First, Ehrlich and Johnson-Laird (1982) argued that during comprehension it is difficult to link new propositions in a propositional structure unless they refer to entities in the immediately preceding proposition. They found, however, that readers could easily encode locations of new objects in a spatial model even when locative sentences did not refer to objects in the immediately preceding sentence. This is consistent with the idea that subjects located objects in a coordinate frame rather than a propositional structure (see also Denis & Denhiere, 1990). Second, Mani and Johnson-Laird (1982) pointed out that indeterminate spatial relations can easily be encoded in propositions (e.g., if A is left of B and C is left of B, the relation of A to C is indeterminate). In a spatial format, by contrast, this example requires two models: one in which A is left of C and another in which C is left of A. Mani and Johnson-Laird (1982) predicted and found that memory is worse for spatial descriptions containing indeterminacies than for completely determinate descriptions. This is inconsistent with a propositional representation, because indeterminacy should not impose greater demands on a propositional memory.
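The memory cost of indeterminacy can be made concrete by enumerating the spatial models a description licenses. In this hypothetical sketch, each left-to-right ordering consistent with the stated "left of" relations counts as one model; the indeterminate description above yields two models where a determinate one yields a single model:

```python
from itertools import permutations

def consistent_orders(objects, left_of_pairs):
    """Enumerate the left-to-right orderings consistent with a set of
    (a, b) pairs meaning 'a is left of b'. Each surviving ordering
    corresponds to one spatial model of the description."""
    models = []
    for order in permutations(objects):
        pos = {obj: i for i, obj in enumerate(order)}
        if all(pos[a] < pos[b] for a, b in left_of_pairs):
            models.append(order)
    return models
```

The indeterminate description "A is left of B; C is left of B" licenses both (A, C, B) and (C, A, B), while the determinate "A is left of B; B is left of C" licenses only (A, B, C); on a model-based account it is this multiplicity, absent in a propositional encoding of the same two premises, that burdens memory.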

3.7 Finally, people generally have the same degree of access to inferred spatial relations as to explicitly described or observed ones (e.g., Bryant et al., 1992, Experiment 1; Taylor & Tversky, 1992). This is consistent with a coordinate representation because all spatial relations are implicitly encoded in the spatial structure of the coordinate system itself. These findings do not rule out a propositional representation but they require the assumption that all inferences are explicitly encoded, during either comprehension or retrieval.

3.8 It should be noted that a number of theories propose that linguistic information is represented in a nonpropositional form (e.g., Lakoff, 1987; Langacker, 1987). One theory of spatial representation similar to the SRS account is that of Talmy (1983), who proposes that linguistic descriptions of space are represented in terms of spatial schemas. Schemas are abstract spatial concepts embodied by individual spatial expressions, such as prepositions. They represent basic classes of spatial relations, including geometric relations and perspective. These schemas are nonpropositional; they are themselves composed of perceptual representations of basic elements such as points and planes. Discourse contacts particular schemas and forms abstract spatial representations that are generalized from the way the perceptual system structures space. In many respects, Talmy's theory is similar to the SRS account. It proposes a common spatial representation for perceived and described environments. Also, we must assume that the SRS uses some sort of schematization procedure to deal with incomplete spatial information in discourse. However, Talmy's theory, like others, relies on relativistic concepts of space, whereas the SRS represents space in terms of location in a coordinate system.


4.1 According to the SRS account, the representation of space is distinct from other forms of conceptual and perceptual knowledge. Distinct spatial memory systems have previously been proposed for animals (Gallistel, 1990) and humans (Pezdek, Roman & Sobolik, 1986; Salthouse, 1974). In the animal literature, Gallistel (1990), among others, has identified specific behavioral mechanisms for locating objects, for dead reckoning, and for constructing cognitive maps in many species. Such spatial mechanisms, so prevalent in mammalian species, probably form the basis of human spatial cognition.

4.2 The argument for a separate spatial representation system depends on evidence that spatial information is processed differently and separately from other forms of information. One example is the claim that location is automatically encoded in memory, whereas other forms of information are not. Evidence comes from experiments in which location memory is found to be uninfluenced by instructions to learn location or by developmental changes with age. In one study, Mandler, Seegmiller & Day (1977) found that subjects recalled locations of objects in a matrix equally well under incidental and intentional learning conditions, suggesting that locations were encoded automatically. Also, sixth graders and adults performed at the same level on the location recall test. These effects have been replicated by others (see Naveh-Benjamin, 1987, for a review).

4.3 It has been argued, however, that these studies did not use "true" incidental learning conditions. Naveh-Benjamin (1987) found that location recall was better under intentional than incidental study conditions when subjects expected no memory test of any kind. The nature of the stimulus materials also seems to influence the automaticity of spatial encoding. For example, Park and Mason (1982) found that memory for locations of pictures is equally good under incidental and intentional instructions, and better than memory for locations of words under incidental learning conditions. On the other hand, when instructed to learn the locations of words, subjects are as good at recalling locations of words as of pictures. Recently, Ellis (1990) has suggested that the location memory tasks used in previous studies involved some effortful subtasks, such as discriminating spatial relations between items, alongside other automatic subtasks. In a task that involved only learning item locations, Ellis (1990) found no advantage of intentional over incidental learning, suggesting that encoding of item location itself is automatic even when other subcomponents of spatial memory tasks are not.

4.4 Another concern in the interpretation of these results is potential ceiling effects in spatial tasks. Subjects' spatial memory performance level in Park and Mason (1982) was indeed very high (ranging from .83 to .90 in some conditions). A ceiling effect could mask effects of intention, age, etc. However, the mean scores of Mandler et al. (1977) were within the midrange in all conditions and the results of Park and Mason (1982) have been confirmed by Pezdek (1983) in an experiment free of ceiling effects. The data of Pezdek et al. (1986) discussed below also do not approach the ceiling in any condition.

4.5 Pezdek et al. (1986) have documented several other factors that have qualitatively different effects on verbal and spatial memory. Instructions to form visual images when studying words improved recall performance but not location memory. Also, recall of objects and words decreased with longer retention intervals but location memory was largely unaffected by retention interval (for words, but not for object names). The fact that these factors had different effects on identity versus location memory provides further evidence that the two memory tasks were performed by separate systems.

4.6 The findings reviewed above are consistent with the hypothesis that separate systems govern spatial and verbal memory; yet they do not demand this interpretation, because single process models can be devised that also account for functional dissociations (Dunn & Kirsner, 1988). However, such single process models require that performance on one task be a monotonic function of performance on the other. The conclusion that spatial memory and identity memory rely on different systems is strengthened by the fact that in at least two of the studies reviewed (Mandler et al., 1977, Experiment 2; Pezdek et al., 1986) such a monotonic relationship is not present. In addition, evidence has been presented that distinct regions of the brain are involved in coding location and identity (Farah, Hammond, Levine & Calvanio, 1988; O'Keefe & Nadel, 1978).

4.7 Dissociations between spatial and verbal memory do not contradict the earlier evidence that spatial information conveyed in verbal materials can be extracted by the SRS and encoded in a spatial form. The form of stimulus material is not critical; rather, the nature of the information to be retained determines how information is represented. Thus, in Easton and Bentzen's (1987) study, subjects' performance on a spatial maze task was impaired by verbal questions dealing with spatial relations because the questions required processing by the spatial as well as the verbal system. Salthouse (1974) has also found that concurrent verbal and spatial tasks do not interfere with one another when the verbal task does not involve processing spatial information.


5.1 People create the same sorts of cognitive maps and mental spatial models from verbal descriptions and direct observations. This suggests that people have a distinct spatial representation system that creates spatial models from disparate sources of input and is independent of memory systems for other domains of knowledge. The primary role of the SRS is to organize spatial information in a general form that can be accessed by either perceptual or linguistic mechanisms. The SRS provides the coordinate frameworks in which to locate objects, thus creating a model of a perceived or described environment. The advantage of a coordinate representation is that it is directly analogous to the structure of real space and captures all possible relations between objects encoded in the coordinate space. These frameworks also reflect differences in the salience of objects and locations in accord with properties of the environment (e.g., distance or gravity) and the ways in which people interact with it (perspective or posture). Thus, the SRS creates representations that are models of the physical and functional aspects of the environment.


Bryant, D. J. (1991). Perceptual characteristics of mental spatial models. Unpublished doctoral dissertation, Stanford University.

Bryant, D. J., Tversky, B. & Franklin, N. (1992). Internal and external spatial frameworks for representing described scenes. Journal of Memory and Language 31: 74-98.

Clark, H. H. (1973). Space, time, semantics and the child. In T. E. Moore (Ed.), Cognitive development and the acquisition of language. New York: Academic.

Denis, M. & Cocude, M. (1989). Scanning visual images generated from verbal descriptions. European Journal of Cognitive Psychology 1: 293-307.

Denis, M. & Denhiere, G. (1990). Comprehension and recall of spatial descriptions. European Bulletin of Cognitive Psychology 10: 115-143.

Denis, M. & Zimmer, H. D. (in press). Analog properties of cognitive maps constructed from verbal descriptions. Psychological Research.

Dunn, J. C. & Kirsner, K. (1988). Discovering functionally independent mental processes: The principle of reversed association. Psychological Review 95: 91-101.

Easton, R. D. & Bentzen, B. L. (1987). Memory for verbally presented routes: A comparison of strategies used by blind and sighted people. Journal of Visual Impairment and Blindness 81: 100-105.

Ehrlich, K. & Johnson-Laird, P. N. (1982). Spatial descriptions and referential continuity. Journal of Verbal Learning and Verbal Behavior 21: 296-306.

Ellis, N. R. (1990). Is memory for spatial location automatically encoded? Memory & Cognition 18: 584-592.

Farah, M. J., Brunn, J. L., Wong, A. B., Wallace, M. A. & Carpenter, P. A. (1990). Frames of reference for allocating attention to space: Evidence from the neglect syndrome. Neuropsychologia 28: 335-347.

Farah, M. J., Hammond, K. M., Levine, D. L. & Calvanio, R. (1988). Visual and spatial mental imagery: Dissociable systems of representation. Cognitive Psychology 20: 439-462.

Feigenbaum, J. D. & Rolls, E. T. (1991). Allocentric and egocentric spatial information processing in the hippocampal formation of the behaving primate. Psychobiology 19: 21-40.

Finke, R. A. & Shepard, R. N. (1986). Visual functions of mental imagery. In K. R. Boff, L. Kaufman & J. P. Thomas (Eds.), Handbook of perception and performance: Volume II, cognitive processes and performance. New York: Wiley.

Foos, P. W. (1980). Constructing cognitive maps from sentences. Journal of Experimental Psychology: Human Learning and Memory 6: 25-38.

Franklin, N. (1991). Representation of spatial information in described routes: Distance, turns, and objects. Unpublished manuscript, State University of New York, Stony Brook.

Franklin, N. & Tversky, B. (1990). Searching imagined environments. Journal of Experimental Psychology: General 119: 63-76.

Franklin, N., Tversky, B. & Coon, V. (in press). Switching points of view in spatial mental models acquired from text. Memory & Cognition.

Gallistel, C. R. (1990). The organization of learning. Cambridge, MA: MIT Press.

Glenberg, A. M., Meyer, M. & Lindem, K. (1987). Mental models contribute to foregrounding during text comprehension. Journal of Memory and Language 26: 69-83.

Hintzman, D. L., O'Dell, C. S. & Arndt, D. R. (1981). Orientation in cognitive maps. Cognitive Psychology 13: 149-206.

Intraub, H. & Hoffman, J. E. (1992). Reading and visual memory: Remembering scenes that were never seen. American Journal of Psychology 105: 101-114.

Jackendoff, R. (1987). Consciousness and the computational mind. Cambridge, MA: MIT Press.

Jackendoff, R. & Landau, B. (1991). Spatial language and spatial cognition. In D. J. Napoli & J. A. Kegl (Eds.), Bridges between psychology and linguistics: A Swarthmore festschrift for Lila Gleitman. Hillsdale, NJ: Lawrence Erlbaum Associates.

Johnson-Laird, P. N. (1983). Mental models: Towards a cognitive science of language, inference, and consciousness. Cambridge, MA: Harvard University Press.

Kesner, R. P., Farnsworth, G. & DiMattia, B. V. (1989). Double dissociation of egocentric and allocentric space following medial prefrontal and parietal cortex lesions in the rat. Behavioral Neuroscience 103: 956-961.

Lakoff, G. (1987). Women, fire, and dangerous things. Chicago: University of Chicago Press.

Langacker, R. W. (1987). Foundations of cognitive grammar. Stanford: Stanford University Press.

Levelt, W. J. M. (1984). Some perceptual limitations on talking about space. In A. J. van Doorn, W. A. van de Grind & J. J. Koenderink (Eds.), Limits in perception. Utrecht, The Netherlands.

Mandler, J. M., Seegmiller, D. & Day, J. (1977). On the coding of spatial information. Memory & Cognition 5: 10-16.

Mani, K. & Johnson-Laird, P. N. (1982). The mental representation of spatial descriptions. Memory & Cognition 10: 181-187.

Marr, D. (1982). Vision: A computational investigation into the human representation and processing of visual information. San Francisco: Freeman.

McNamara, T. P. (1986). Mental representations of spatial relations. Cognitive Psychology 18: 87-121.

Miller, G. A. & Johnson-Laird, P. N. (1976). Language and perception. Cambridge, MA: Harvard University Press.

Morrow, D. G., Greenspan, S. L. & Bower, G. H. (1987). Accessibility and situation models in narrative comprehension. Journal of Memory and Language 26: 165-187.

Naveh-Benjamin, M. (1987). Coding of spatial location information: An automatic process? Journal of Experimental Psychology: Learning, Memory, and Cognition 13: 595-605.

Oakhill, J. V. & Johnson-Laird, P. N. (1984). Representation of spatial descriptions in working memory. Current Psychological Research & Reviews 3: 52-62.

O'Keefe, J. & Nadel, L. (1978). The hippocampus as a cognitive map. Oxford: Clarendon Press.

Park, D. C. & Mason, D. (1982). Is there evidence for automatic processing of spatial and color attributes present in matched pictures and words? Memory & Cognition 10: 76-81.

Pezdek, K. (1983). Memory for items and their spatial locations by young and elderly adults. Developmental Psychology 19: 895-900.

Pezdek, K., Roman, Z. & Sobolik, K. G. (1986). Spatial memory for objects and words. Journal of Experimental Psychology: Learning, Memory, and Cognition 12: 530-537.

Sadalla, E. K. & Magel, S. G. (1980). The perception of traversed distance. Environment and Behavior 12: 65-79.

Sadalla, E. K. & Staplin, L. G. (1980). The perception of traversed distance: Intersections. Environment and Behavior 12: 167-182.

Salthouse, T. A. (1974). Using selective interference to investigate spatial memory representations. Memory & Cognition 2: 749-757.

Shepard, R. N. & Hurwitz, S. (1984). Upward direction, mental rotation, and discrimination of left and right turns in maps. Cognition 18: 161-193.

Talmy, L. (1983). How language structures space. In H. L. J. Pick & L. P. Acredolo (Eds.), Spatial orientation: Theory, research, and applications. New York: Plenum Press.

Tamura, R., Ono, T., Fukuda, M. & Nakamura, K. (1990). Recognition of egocentric and allocentric visual and auditory space by neurons in the hippocampus of monkeys. Neuroscience Letters 109: 293-298.

Taylor, H. A. & Tversky, B. (1992). Spatial mental models derived from survey and route descriptions. Journal of Memory and Language 31: 261-292.

Thorndyke, P. W. (1981). Distance estimation from cognitive maps. Cognitive Psychology 13: 526-550.

van Dijk, T. A. & Kintsch, W. (1983). Strategies of discourse comprehension. New York: Academic Press.
