Arthur B. Markman (1998) Information is not Semantic Content. Psycoloquy: 9(78) Representation Mediation (13)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).
Psycoloquy 9(78): Information is not Semantic Content

Reply to Clapin on Representation-Mediation

Arthur B. Markman
Department of Psychology
University of Texas
Austin, TX 78712

Eric Dietrich
PACCS Program
Binghamton University
Binghamton, NY


Clapin writes that our definition of mediating states requires that their semantic (representational) contents be their informational contents. This is not correct. We discuss the distinction that makes semantic (representational) content more than informational content. However, we do have a problem Clapin attributes to us, namely, the problem that the informational source of mediating states must exist. We discuss some solutions to this problem, and tentatively endorse the one that leaves the probabilities equal to one but changes what the probabilities are about.


compositionality, computation, connectionism, discrete states, dynamic systems, explanation, information, meaning, mediating states, representation, rules, semantic content, symbols
1. Clapin (1998b) is quite right when he says "Markman & Dietrich's [1998b] reply to my commentary (Clapin 1998a) makes it clear that they are presupposing a certain kind of resolution to the content debate, contrary to their protestations." We ARE supposing that a theory with certain kinds of properties will one day adequately explain representation and representational content. In large outline, we do think that information has to play a role in a final theory. But we don't know, nor does anyone else, the details of that theory. Exactly what role information is to play is yet to be determined. Furthermore, nothing we said entails the view Clapin attributes to us: that information plays the role of fixing semantic content.

2. Contra Clapin, it is NOT part of the definition of mediating states that what they mean is the same thing as what they carry information about. We stress again that in our target article (1998a) we offered no theory of representational or semantic content; nor do we endorse Dretske's theory (1981). We sharply distinguish between informational content (which we do define) and semantic content (which we don't). Our definition of informational content is Dretske's, but adopting his definition of informational content is not tantamount to adopting his theory of semantic content. We explicitly left out of our definition of mediating states the technical notion of "nesting" which Dretske needs to get his theory of informational content to be a theory of semantic content (see Dretske, 1981, pp. 70ff. & ch. 7).

3. So, on our view, the semantic content of some perceptual judgment that s has property P does not come from the fact that s is in fact P out in the environment. To say otherwise would be to endorse the idea that semantic content IS informational content, something we definitely do not endorse. So the formula Prob[P(s)|J(P(s))] = 1, while entailing that P(s) (assuming that J(P(s))), does not entail, on our view, that the judgment that P(s) has the content it does BECAUSE s is P. Other things have to occur. This gets us out of the problem Clapin attributed to us, namely, that the semantic content of mediating states is their informational content. Of course, given that we haven't said how the judgment (or any mediating state) gets its content, all we have really done is dodge the problem Clapin sets for us. Later in this reply, however, we will return to this issue, because it is central to a point of Clapin's that really is a problem for the view we espoused.

4. At some point, everyone developing a theory of representational content has to address the problem of representational error. Our approach is no exception to this rule. When we develop a theory of semantic content for mediating states, we will have to explain representational error. The central point of our target article, however, concerned something different. We focused on the generality of mediating states, which allows us to see that there is a diversity of representational agendas in cognitive science, and that these agendas can all use mediating states, albeit mediating states with different sets of properties. For example, if a particular researcher does not need rules to explain data on the development of motor behavior, then rules should not be used. This researcher's decision should not dictate whether another researcher uses rules to explain a different data set (say, on the structure of syntax). If cognitive science adopts our view of mediating states, then there won't be any wasted discussion between researchers about whether there really are rules in the head, or whether rules are the only or best representational structures. Instead, different kinds of mediating states can be used at different levels of cognitive explanation. This result is important, because it allows us to preserve the crucial explanatory idea of internal mental states standing in for part of the system's environment without having to say that those states must always have certain properties such as being enduring, or discrete, or composed out of simpler structures, or abstract, or rule-governed.

5. Though the target article concerned methodology, we do have a problem Clapin attributes to us, namely, that the source of the information has to exist. Our definition of informational content relies on the fact that the conditional probability that something, s, has property P is one given that some cognitive event, e.g., a perceptual judgment that P(s), has occurred. In symbols: Prob[P(s)|J(P(s))] = 1. Clapin points out that our definition entails that given that the judgment that P(s) occurred, s is definitely P. This is a problem because what if s isn't P? For example, suppose you wake up in the middle of the night and see what looks like your dog in your room, and indeed form the perceptual judgment that your dog is lying there. But suppose in the morning you discover that what you took to be your dog is a pile of rumpled clothes. If the conditional probability is one that there is a dog lying there given that you've judged that that object is your dog, then the dog has to be in your room. This seems too strong.
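Clapin's worry can be made concrete with a small simulation (our own illustrative sketch, not part of the target article; the scenario and numbers are invented): the empirical conditional probability Prob[P(s)|J(P(s))] only reaches one when the judge is infallible, so stipulating that it equals one rules out misjudging the clothes pile by fiat.

```python
import random

random.seed(0)

def run(trials, error_rate):
    """Empirical Prob[dog present | judged 'dog'] for a judge who
    mistakes a clothes pile for a dog at the given error rate."""
    judged, correct = 0, 0
    for _ in range(trials):
        dog_present = random.random() < 0.5          # the world: dog or clothes
        if dog_present:
            judgment = True                          # real dogs are always recognized
        else:
            judgment = random.random() < error_rate  # clothes sometimes fool us
        if judgment:
            judged += 1
            correct += dog_present
    return correct / judged

# A fallible judge: the conditional probability drops below one (~0.91 here).
print(run(100_000, error_rate=0.1))
# Only an infallible judge yields a conditional probability of exactly one.
print(run(100_000, error_rate=0.0))   # 1.0
```

The point of the sketch is just that requiring the probability to equal one is equivalent to requiring that the judge never errs.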

6. We have this problem even if the judgment is defeated in certain ways. Suppose that your rumpled pile of clothes looks like your dog because he crawled under them during the night and they took on his general shape (a sock might even have got caught on one of his ears). In this case, because of your ability to recognize dogs and, specifically, your dog (because, that is, you have stored certain recognitional features and concepts, contra Fodor [1998], that allow you to recognize dogs in different lighting and from different perspectives), given the dog-induced shape of your rumpled clothes pile, you couldn't fail to see a dog where the pile is. Hence the conditional probability that there is a dog in the room given that you made the judgment is still one, but your judgment is defeated in a certain way: you came to the right conclusion for the wrong reasons. (Your clothes pile might have accidentally resembled a dog when you tossed them on the floor, quite without your dog crawling under them. Hence your dog wouldn't have to be in the room. Moreover, your clothes pile might not have resembled your dog at all. Lots of rumpled shapes are possible. Your clothes pile only resembles your dog at this time because he is under it.) But even in this case of defeat, it is still required that the dog be in your room (he has to be under your clothes pile). Hence even in this case we have the problem that if the conditional probability is one, the environment has to be the way you judge it to be (even though this is accidental), and hence, there is still no possibility of error (though there is a possibility of being right for the wrong reasons).

7. So, Clapin is right: we do have the problem that whatever mediating states refer to must actually be out in the subject's environment. And this doesn't allow for what we all know to be the case, namely, that we can, and often do, mistakenly judge that s is P even though s isn't P.

8. There are a couple of ways out of this problem. First, we could give up the idea that information is important to the content of a mediating state. This solution seems wrong, though, because the idea that information is important to content is really just the idea that there needs to be a causal link between the organism (or subject) and its environment. But as near as we can tell (and given the lack of success of any purely causal account of content in the literature), the best way to satisfy the intuition that causation is important to content is to go with the idea that information is important to content.

9. A second possibility is that we can relax the requirement that the conditional probability must be one (and given that we can make mistakes, this pretty much has to be the case). This is something we can do but Dretske cannot (Dretske, 1981, chs. 2 & 3), because we, unlike Dretske, are not interested in making information the sole basis of mediating-state content. For example, in our target article, we have a fourth condition on mediating states which we are interested in exploiting in a theory of content: the system must have internal processes that act on and are influenced by mediating states and their changes, among other things; these processes allow the system to satisfy system-dependent goals (though these goals need not be known explicitly by the system).

10. What we are really after is the idea that information is a basic ingredient in mediating-state (and representational) content, but that more is needed. As discussed in our reply to Clapin's first commentary, we are inclined to think that an interactive approach might work. On this view, the subject collects information from many different sources and compares it all in an effort to confirm or deny its expectations. These expectations might have the form "given that I have received X as input, if I do Y, I should then receive Z." On this view, information is actively sought, but also on this view, all that matters is that the information cohere with itself in a certain way relative to internal processes. Hence all that matters is that the information be reliable. Information can still be reliable if the conditional probabilities are less than (but near) one, and that is all we need.
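The expectation-checking idea can be sketched as a toy agent (again our own illustration; the percept names, actions, and noise level are invented): the agent stores expectations of the form "given input X, if I do Y, I should then receive Z," probes the world, and treats the information as reliable when its expectations are confirmed often enough, even though the confirmation rate is below one.

```python
import random

random.seed(1)

# Expectations of the form: given input X, if I do Y, I expect to receive Z.
EXPECTATIONS = {("dog-shape", "look-closer"): "dog-texture",
                ("dog-shape", "turn-on-light"): "dog-color"}

def probe(x, y, noise=0.05):
    """Hypothetical world: usually returns the expected percept,
    occasionally something else (a rumpled-clothes surprise)."""
    expected = EXPECTATIONS[(x, y)]
    return expected if random.random() > noise else "clothes-texture"

def reliability(x, trials=10_000):
    """Fraction of probes whose outcome coheres with expectations."""
    hits = 0
    for _ in range(trials):
        y = random.choice(["look-closer", "turn-on-light"])
        hits += probe(x, y) == EXPECTATIONS[(x, y)]
    return hits / trials

# Reliable but not infallible (~0.95): near one is all the account needs.
print(reliability("dog-shape"))
```

The design point is that nothing in the loop requires a conditional probability of one; coherence among actively gathered inputs does the work.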

11. Note that on this second solution we automatically get the result that the contents of mediating states are not identical to the information they carry. To say, e.g., that .6 < Prob[P(s)|J(P(s))] < 1 is to say that the signal is NOT carrying the information that s is P (see Dretske, 1981, pp. 57-67, esp. p. 66). Hence if the mediating state has the content that s is P, it had to get it some other way. We don't know what that way is, other than the speculation we offered in paragraph 10 above.

12. The reasoning in the preceding paragraph suggests that we really do want the conditional probability to be less than but near one. This move would allow us to say that information is a part of but not all of the story of mediating state content. However, there is a third strategy. We can have the conditional probability equal one so long as we are careful what the probability is about. This way of putting things allows us to strengthen our notion of mediating states by incorporating different levels of content.

13. Consider what really goes on when someone mistakes a pile of clothes for a dog. Over time (e.g. while growing up), the person has learned a set of perceptual features that reliably indicate the presence of a dog. Enough of these features got activated by looking at the pile of clothes that the person said "There's my dog". Some of the features that people learned are quite specific and tied to the perceptual environment (e.g., the presence of particular edges, as detected by patterns of light on the retina). Other features are more abstract, and may arise as a function of many different possible patterns of sensory stimulation (e.g., ears or snout). The more abstract features are crucial for object recognition, because they allow a person to recognize dogs they have never seen before (see Biederman, 1987, for a similar discussion). On this view, these abstract features are reliable indicators of the presence of a dog. However, barring a view of concepts in which there are necessary and sufficient features that identify an object as a member of a category, the probability that an object is a member of a category given a set of features is less than one even if the probability that those features are present in the environment given a particular pattern of sensory stimulation is exactly one. Thus, the nature of the conceptual system introduces the possibility of error in category judgments in order to allow it to recognize novel instances in the world.

14. Here is our proposal in more detail. Recognitional features are mediating states, but these features are NOT the perceptual judgment that there is a dog present; rather, they are the low-level building blocks of such perceptual judgments. So our proposal is that the informational conditional probability should be associated with the activation of these features (these mediating states), rather than with the high-level perceptual judgment itself. Now we can say that the conditional probability is one, thereby tying mediating state content to information without having to identify that content with the information. Here is how this works. Let F* = {FD1, FD2, ..., FDn} be a set of (mental) features active in some subject and sufficient for recognizing a dog. These are only a subset of all the features the subject can use in recognizing a dog, but they tend to work very reliably. Let E* = {EC1, EC2, ..., ECn} be environmental conditions, whatever those are, that cause the dog features to become activated. Then statement (1) is true:

    (1) Prob[E* | F*] = 1.

It IS true that there are conditions in the subject's environment that are sufficient for activating the subject's dog features because, in fact, those features got activated. They must have been activated some way. And the only way to do it is via some environmental conditions. Normally, when things are working well, activating those features leads, via the construction of the right sorts of higher mediating states, to the right conclusion that there is a dog present. But those features can be falsely activated by a rumpled pile of clothes in dim light. In the latter case, the high-level perceptual judgment that there is a dog present is wrong, but the features nevertheless got activated for the right reasons, namely, there were the right low-level features in the environment. This gives us a multi-tier approach to representational error: the low-level mediating states are not in error, but higher-level states may introduce error because of the process of making categorical judgments. Thus, additional levels of mediating states may provide a road toward a solution to the problem of misrepresentation.
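The multi-tier proposal can be seen at work in a toy simulation (our own illustration; the feature names and probabilities are invented): low-level features fire only when their environmental conditions are actually present, so Prob[E* | F*] = 1 holds at the feature tier by construction, yet the high-level category judgment built from those features can still misclassify a dog-shaped clothes pile.

```python
import random

random.seed(2)

DOG_FEATURES = {"snout-edge", "ear-shape", "torso-curve"}

def environment():
    """Either a dog or a clothes pile; the pile sometimes takes on
    dog-like contours (e.g., the dog crawled under it earlier)."""
    if random.random() < 0.5:
        return "dog", DOG_FEATURES
    pile = {"torso-curve"}
    if random.random() < 0.2:                  # a dog-shaped pile
        pile |= {"snout-edge", "ear-shape"}
    return "clothes", pile

def activate(conditions):
    """Low-level tier: a feature fires iff its environmental condition
    is present, so Prob[E* | F*] = 1 by construction."""
    return set(conditions)

def judge(features):
    """High-level tier: a fallible category judgment over features."""
    return "dog" if len(features & DOG_FEATURES) >= 3 else "clothes"

low_level_ok, judged_dog, really_dog = 0, 0, 0
for _ in range(100_000):
    truth, conditions = environment()
    features = activate(conditions)
    low_level_ok += features <= conditions     # features never fire falsely
    if judge(features) == "dog":
        judged_dog += 1
        really_dog += truth == "dog"

print(low_level_ok / 100_000)      # 1.0: the low-level tier carries information
print(really_dog / judged_dog)     # below 1: the high-level judgment can err
```

Error enters only at the judgment tier, which is the division of labor the proposal requires: information fixes the low-level tier, while the category judgment gets its content in some further way.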

15. Now we get the results we want: because information is involved in the content of low-level mediating states, information is part of the story of how high-level mediating states get their content. Information is not the whole story about content, however, because high-level mediating states, in conjunction with the processes that make category judgments, can be in error; hence they must get their (high-level) content, in part, from their interactions with other high-level mediating states and from the cognitive processes that operate on them. So now our view seems to have the kinds of components that look promising for developing a theory of the content of mediating states. On the one hand, we have the informational dimension of our mediating states, which provides content in a bottom-up fashion. This aspect of mediating states supplies the causal component. In addition, we have an aspect of content supplied by the cognitive processes operating on the internal states. This aspect supplies the conceptual-role, or functional-role, component. On this view of mediating states, content is a multi-tiered affair. This conclusion is compatible with our view that mediating states exist at many different levels of the cognitive system. It is quite plausible and reasonable that the contents of mediating states at lower levels contribute to the contents of mediating states at higher (more abstract) levels. Finally, it may be interesting to explore a combination of this third answer to Clapin's objection with the second one (that the conditional probability is less than one), which may lead to a theory of content at different levels. We thank Clapin for raising this point; it has given us an opportunity to work through these difficult issues.


Biederman, I. (1987). Recognition-by-components: A theory of human image understanding. Psychological Review, 94(2), 115-147.

Clapin (1998a). Information is not representation. PSYCOLOQUY 9(64).

Clapin (1998b). Mediation, information, and error. PSYCOLOQUY 9(74).

Dretske, F. (1981). Knowledge and the Flow of Information. Cambridge, Massachusetts: MIT Press.

Fodor, J. (1998). Concepts: Where cognitive science went wrong. Oxford, UK: Oxford Univ. Press.

Markman, A.B. & Dietrich, E. (1998a). In Defense of Representation as Mediation. PSYCOLOQUY 9(48).

Markman, A.B. & Dietrich, E. (1998b). Mediating States, Information, and Representation: Reply to Clapin on Representation-Mediation. PSYCOLOQUY 9(66).
