I agree with Zwaan and Graesser (1993) that "there is no empirical evidence that some inferences are automatically or partially encoded in text comprehension." Because they believe I disagree with them on the second point, about partial encoding, I clarify my stance.
1.1 I agree with Zwaan and Graesser (1993: title) that "there is no empirical evidence that some inferences are automatically or partially encoded in text comprehension." They recognize that I agree with them on the first point, about automaticity, but they believe I disagree on the second, about partial encoding.
2.1 Zwaan and Graesser point out that my arguments against automaticity (Garnham, 1992: 2.3) "could be strengthened by taking a more theoretical perspective on automaticity" (1993: 1.1). Indeed, it was only lack of space, and the knowledge that Glenberg and Mathew (1992) had commented extensively on the notion of automaticity, that prevented me from developing this point. My comment that if McKoon & Ratcliff (M&R) (1992) were right about automaticity, nonminimal inferences "should be distinguishable from minimal inferences using the normal criteria for differentiating automatic and strategic processing" (1992: 2.3) was intended as an allusion to just the kind of theoretical perspective that Zwaan and Graesser have in mind (e.g., that of Shiffrin & Schneider, 1977). I also agree with Zwaan and Graesser (1993: 2.3) that the question of whether inferences are drawn automatically is, in principle, an empirical one, although I am less sanguine than they are that the issue is easily resolvable using the kind of dual task methodology they hint at.
3.1 On the notion of partly encoded inferences, Zwaan and Graesser have overinterpreted my endorsement (Garnham, 1992: 6.2) of M&R's notion of partly encoded inferences and, hence, overestimated my agreement with M&R. Mine was intended only as a general endorsement of the attempt to "[break] away from an oversimplistic view of inference making"; I was not endorsing the particular way M&R have tried to do so. In fact, I do not have a notion of partly encoded inferences but, rather, one in which the processes that contribute to an inference occur at different times. M&R's notion of inferences being encoded at different strengths rests, in my view, on their failing to make a clear distinction between a methodology (priming) and the phenomenon it is used to investigate (inference making). More generally, it rests on the failure, which I was only able to hint at in my target article (Garnham, 1992: 6.5), to draw a distinction between a computational theory of text comprehension and an account of the representations and processes used in inference making.
3.2.0 COMPONENTS OF INFERENCE MAKING
3.2.1 I will attempt to clarify my own position before turning to problems with the notion of partly encoded inferences. Consider the standard mental-models account of a spatial inference from, say, "the spider is to the left of the caterpillar" and "the caterpillar is to the left of the ant" to "the spider is to the left of the ant." The theory claims that, at least under some circumstances, the first two sentences will be encoded into a mental analogue of the three-item "array" the sentences describe. Does the setting up of the array constitute the making of the inference about the spatial relation between the spider and the ant? That relation is, in a sense, encoded into the array. But if the information about the spider being to the left of the ant is to guide behavior, it must be read out of the array. The processes that do this reading out are by no means trivial. So it could be argued that the inference is not completed until this reading out occurs. The question is not about when inferences are first partially encoded and when they are encoded more strongly (Zwaan & Graesser, 1993: 3.2), but about when the component processes of inference making occur. Because there can be several component processes, and because they may be invoked at different times, it can be misleading to say that the inference is made at a particular time.
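The separation of encoding from read-out described above can be made concrete with a small sketch. This is a toy illustration, not a claim about any actual implementation of mental-models theory: the premises are encoded into a single left-to-right array, and the inferred relation between the spider and the ant only becomes explicit when a distinct read-out step inspects that array.

```python
# Toy sketch of the mental-models account of spatial inference.
# "Encoding" builds one spatial array from the premises; the inferred
# relation is made explicit only by a separate "read-out" process.

def encode(premises):
    """Build a left-to-right array from ("X", "Y") premises
    meaning 'X is to the left of Y'."""
    array = []
    for left, right in premises:
        if left in array and right in array:
            continue  # relation already represented in the array
        if left in array:
            array.insert(array.index(left) + 1, right)
        elif right in array:
            array.insert(array.index(right), left)
        else:
            array.extend([left, right])
    return array

def read_out(array, a, b):
    """Read the relation 'a is to the left of b' out of the array.
    This nontrivial step is where the inference is completed."""
    return array.index(a) < array.index(b)

model = encode([("spider", "caterpillar"), ("caterpillar", "ant")])
print(model)                             # ['spider', 'caterpillar', 'ant']
print(read_out(model, "spider", "ant"))  # True
```

The point of the sketch is that the spider-ant relation is implicitly present as soon as the array exists, yet no explicit proposition about it is available until `read_out` runs, so "when the inference is made" has no single answer.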
3.2.2 It should be obvious from this characterization that I by no means want "to throw the baby out with the bathwater" (Zwaan & Graesser, 1993: 1.2, see also 3.1) and dispense with online studies of inference making. Indeed, the identification of component processes of inference making increases the complexity of the questions that have to be addressed in online studies. Which processes occur immediately? When do the others occur? Which, if any, are automatic, in the technical sense? How do we find evidence for particular processes having taken place?
3.3.0 INFERENCES ENCODED WITH DIFFERENT STRENGTHS
3.3.1 M&R's notion of a partially encoded inference is quite different from the one I have just sketched. The idea is not based on an analysis of inference making into component parts. It focuses on the final product of the inference making process, conceived as an inferred proposition, or set of propositions (e.g. "the spider is to the left of the ant"), and it assumes that the different strengths at which it can be encoded reflect different degrees to which the inference has been made.
3.3.2 M&R's notion of degree of encoding has, in fact, two components: specificity of information inferred and strength of memory trace (see e.g. 1990: 316-320). However, it is misleading to refer to a less specific inference as a partial encoding of a more specific one. Someone reading "the container held the cola" may infer that the container is either a glass or a bottle or a can or..., rather than simply inferring that it is a bottle (see also Gumenik, 1979, and cf. Anderson & Ortony, 1975). However, in this case, the more specific inference is not weakly encoded; it is simply not warranted, and, on Gumenik's evidence, it is not encoded at all. M&R are right to distinguish specificity from strength and to note that inferences vary in their specificity, but they are wrong to characterize less specific inferences as weak encodings of more specific ones.
3.3.3 M&R support their claims about strength of encoding primarily with data from priming experiments. In the simplest case, speeded response (or false positive response in a probe task) to "spoon" after reading "John stirred his coffee" would be taken as evidence for the inference "John used a spoon to stir his coffee," and no speeding of the response (or no increase in false positives), relative to a control condition, would be taken as evidence that the inference had not been made. Under different circumstances, the response to "spoon" might be speeded (or the false positives increased) by different amounts. This pattern of findings might suggest that the inference is encoded at different strengths.
3.3.4 The idea that information in memory is more or less available is a truism. However, there are problems in interpreting data from priming experiments in the way M&R do. First, although it is not always apparent in an experimental situation, inferences from real texts lead to the encoding of information about particular people, particular things, particular places, particular times, and so on. To a subject in an experiment, a sentence such as "John stirred his coffee" has little significance, but if it were used to describe a situation in the world, it would be about a particular person called John and a particular coffee he had at a particular time. Very often, such a sentence would be the reader's only source of information about the event it described. So if information about the (probable) use of a spoon in this particular event is encoded, it requires the addition of information into the mental model of the event by constructive processes. And although this part of the model is constructed from elements that already exist in memory -- information about spoons in general, for example, and their use in stirring coffee -- activation of these elements does not, in itself, count as partial encoding of the inference. The inference has not been encoded at all until at least a start has been made on constructing the appropriate part of the mental model of the specific situation (John's use of the spoon). Once this part of the model has been constructed, it may be more or less readily available, but even if it is comparatively unavailable, because its memory trace is weak, the inference has still been made. More generally, speeded or false positive response to an inference-related word in a priming experiment does not necessarily indicate that the inference has been made. Neither need the making of an inference be reflected in the "priming" of particular words.
3.3.5 The simplest case in which priming must not be confused with inference making is when the probe word is associatively or semantically related to a word in the text that supports the inference. This problem is widely recognized, however, and almost invariably controlled for. Another possibility, which I have hinted at above, is that explicit information about a particular person (John) engaging in a particular act of coffee stirring makes available to the reader general knowledge about coffee drinking. This knowledge includes information about things (sugar and milk or cream) being added to coffee and the subsequent need for the coffee to be stirred. It also makes available information about spoons being the typical instruments for stirring coffee. So information about spoons could become active, and produce priming for "spoon," before or without the encoding of the particular information that, in the particular act of coffee stirring being described, a spoon was used. There is indeed some evidence that the activation of knowledge structures can prime words linked with them (e.g. Sharkey & Mitchell, 1985). Amount of priming could, therefore, be a reflection of how strongly the relevant background knowledge is activated, and not of the encoding of an inference.
3.3.6 Conversely, making an inference need not result in the "priming" of a probe word. Priming presumably arises from the activation of either an entry in the mental lexicon or a concept. The inference that John used a spoon to stir his coffee results in a representation of a particular object (the spoon in question) and its role in a particular event. The relation between the representation of an object in a mental model and the activation of the lexical entries of words that could be used to describe that object, or of general concepts under which the object falls, is not well understood. It is quite possible, therefore, that any such activation would go undetected in a priming experiment.
Anderson, R.C. & Ortony, A. (1975). On putting apples into bottles: A problem of polysemy. Cognitive Psychology, 7, 167-180.
Garnham, A. (1992). Minimalism versus constructionism: A false dichotomy in theories of inference in reading. PSYCOLOQUY 3(63) reading-inference-1.1.
Glenberg, A.M. & Mathew, S. (1992). When minimalism is not enough: Mental models in reading comprehension. PSYCOLOQUY 3(64) reading-inference-2.1.
Gumenik, W.E. (1979). The advantage of specific terms over general terms as cues for sentence recall: Instantiation or retrieval? Memory and Cognition, 7, 240-244.
McKoon, G. & Ratcliff, R. (1990). Dimensions of inference. In A.C. Graesser & G.H. Bower (Eds.), Inference and text comprehension (The psychology of learning and motivation, Vol. 25, pp. 313-328). San Diego: Academic Press.
McKoon, G. & Ratcliff, R. (1992). Inference during reading. Psychological Review, 99, 440-466.
Shiffrin, R.M. & Schneider, W. (1977). Controlled and automatic human information processing: II. Perceptual learning, automatic attending, and a general theory. Psychological Review, 84, 127-190.
Sharkey, N.E. & Mitchell, D.C. (1985). Word recognition in a functional context: The use of scripts in reading. Journal of Memory and Language, 24, 253-270.
Zwaan, R.A. & Graesser, A.C. (1993). There is no empirical evidence that some inferences are automatically or partially encoded in text comprehension. PSYCOLOQUY 4(5) reading-inference.6.
AUTHOR NOTE: My work on mental models has been supported by ESRC grant C 0023 2439 "Mental models and the interpretation of anaphora." Thanks to Jane Oakhill for comments on an earlier draft.