Pat Hayes (1993) Effective Descriptions Need not be Complete. Psycoloquy: 4(21) Frame Problem (5)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).
Psycoloquy 4(21): Effective Descriptions Need not be Complete

Reply to Van Brakel on Ford & Hayes on the Frame Problem

Pat Hayes
Beckman Institute
University of Illinois
Urbana IL 61801

Ken Ford
Institute for Human & Machine Cognition
University of West Florida
Pensacola FL 32514


One can approach van Brakel's (1992) review of Ford & Hayes (1991) in two different ways: as a scholarly critique of the frame problem (FP) in general, or as an argument that traditional AI is inadequate to handle a new fundamental problem that he dubs the "problem of complete description" (PCD). We respond on both of these levels.


KEYWORDS: Frame-problem, artificial intelligence, temporal logics, independent persistence, attention, Hume, dynamic frames, qualification problem.


1.1 One can approach van Brakel's (1992) review of Ford & Hayes (1991; henceforth F&H) in two different ways: as a scholarly survey and critique of the frame problem (FP) in general (and our book in particular), or as a sketchy argument that traditional AI is inadequate to handle a new fundamental problem that he calls the "problem of complete description" (PCD). These require rather different kinds of response.


2.1 The FP was defined over twenty years ago (McCarthy and Hayes 1969). It has been described very clearly several times (Hayes 1987; Haugh: F&H, p. 106), so we will not describe it again here. The subsequent literature has often conflated this problem with others, to the point that the original meaning of the term has frequently been lost, as Stein (F&H: p. 219) laments. Van Brakel confuses it with something new.

2.2 Van Brakel begins with a precise but faulty definition of the FP. Let us call the problem that van Brakel defines the VBFP (Van Brakel Frame Problem). The VBFP includes the problem of giving "necessary and sufficient conditions for an event to occur." The VBFP and the FP differ because the general enterprise of formalizing common sense knowledge, within which the FP arises, does not attempt to give such necessary and sufficient conditions for success. Such conditions probably cannot be found in general, as van Brakel notes (and as others have emphasized). Thus, the VBFP is subsumed under the PCD precisely where it differs from the FP.

2.3 Even if we had the PCD solved, the FP would still be a problem. Suppose we could give necessary and sufficient conditions for an action to be performed successfully. Then, presumably, given a sufficiently rich description of a world-state and an action, we could say with confidence when the action would succeed. In what does this success consist? Presumably, some changes to the state of the world are wrought by the action. However, in the common sense reasoning situations that we all encounter every day, most of our actions change the relevant aspects of the world only slightly, and most actions change most of it not at all. How does our representation of this successful action compactly reflect this apparent invariance that we find to be a pervasive fact about our everyday world? This is the FP, and we can have as much confidence as we like in the preconditions of the action without making much headway with it. (One might argue that in order to establish the confidence in the preconditions one would have to know the effects, so a solution to the PCD would entail a solution to the FP. But this would reduce the PCD to the FP.)
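To make the representational point concrete, here is a minimal sketch in the situation-calculus style of McCarthy and Hayes (1969). The particular fluents and actions (Color, Position, paint, move) are our own illustrative inventions, not drawn from van Brakel's text or from F&H:

```latex
% Effect axiom: painting block b makes it yellow (sufficient
% preconditions assumed established, per the PCD supposition above):
\forall b, s.\; Poss(paint(b), s) \rightarrow Color(b, yellow, do(paint(b), s))

% Nothing above licenses the conclusion that b is still WHERE it was.
% To capture the pervasive invariance of the everyday world, one seems
% forced to add a separate "frame axiom" for each unaffected fluent:
\forall b, l, a, s.\; Position(b, l, s) \wedge a \neq move(b)
    \rightarrow Position(b, l, do(a, s))

% With m actions and n fluents this threatens roughly m x n such
% axioms; finding a compact replacement for them is the original FP.
```

Note that confidence in the precondition Poss(paint(b), s) does nothing to reduce the number of frame axioms required, which is the point of the paragraph above.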

2.4 One could leave van Brakel's discussion of the FP here, but it deserves a bit more analysis, lest it add more confusion to an already confused discussion. In F&H (1991: p. ix), we distinguished the original meaning of the term FP from its use to refer to a complicated family of related but distinct problems. Unfortunately, van Brakel [1.2] introduces an even larger collection of such problems. We will do a brief survey here: The terms "temporal projection" and "inertia" refer to ideas put forward to solve the FP, not other problems. The planning, ramification, relevance and extended prediction problems are varieties of the general task of making relevant inferences from a set of axioms; they are not problems of representation. The qualification problem is this: almost any belief has exceptions, so how can a representational system escape the apparent rigidity of logic in insisting that any exception is a contradiction? (Much useful progress has been made on this one, by the way.) All of these are different from the original FP. (The terms "installation problem" and "holism problem" are new to us, and we don't believe they appear in the AI literature.) The danger in this kind of confusion can be seen in the next paragraph, where various reasons are cited for the difficulty of the FP. But at one point Hayes's chapter (F&H: p. 72) is referring to the FP, at another (p. 73) to the qualification problem; Nutter's chapter (F&H: p. 176) is referring to the inference problem, and Dennett (1987) to something else again -- what might be called the perception problem or the updating problem.
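To keep the qualification problem distinct from the FP, here is the standard didactic illustration of it from the nonmonotonic-reasoning literature, using a McCarthy-style "abnormality" predicate; the predicates are the usual textbook inventions, not examples taken from van Brakel or F&H:

```latex
% A belief with exceptions, stated so that an exception need not be a
% contradiction: birds fly unless "abnormal" in some respect.
\forall x.\; Bird(x) \wedge \neg Ab(x) \rightarrow Flies(x)

% A known exception simply makes the abnormality explicit:
\forall x.\; Penguin(x) \rightarrow Ab(x)

% Minimizing the extension of Ab then licenses the default conclusion
% Flies(tweety) from Bird(tweety) alone, while adding Penguin(tweety)
% retracts it without producing any contradiction.
```

This is about the rigidity of beliefs with exceptions; it says nothing about which fluents an action leaves unchanged, which is why it is a different problem from the FP.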

2.5 Van Brakel, like other philosophers who have ventured into this territory (Fodor 1987, Fetzer: F&H, p. 55), believes he can detect evidence of a single deep problem all over the place. This deeper problem is, he concedes, perhaps partially understood in an "implicit or `intuitive' way" by the AI technicians, but they cannot see it with the clear vision of a philosopher. In fact, they often adopt an "ostrich" approach to it, putting the real work off into the future and playing with minor matters. Fodor similarly talks of a "music of the spheres" which cannot be heard by those too close to it and requires a philosophical ear to detect.

2.6 We are rather sceptical about such an attitude when it is accompanied by such a thorough lack of technical understanding. Van Brakel misstates the FP, even acknowledging that the crucial mistake (his condition [A1]) is not found in the technically competent literature; he then derives a false assertion, and goes on to claim to find evidence for this "lurking" everywhere, justifying this claim by a series of brief quotes. But this habit of snappy quotation takes phrases out of context in dangerously misleading ways. For example, the "implicit assumption that the agent doing the reasoning has complete knowledge of the relevant facts" (pointed out in their chapters in F&H by Haugh, Morgenstern, Tenenberg and Weld) is indeed a problem; but the problem is not that we cannot give such "complete knowledge" -- the PCD -- but rather that this assumption of complete knowledge is in fact wrong, and we don't know how to represent that. This is almost the inverse of the PCD. Again, the old idea of a "situation" is indeed one involving a complete state of the universe at a moment of time; but the situation calculus does not aspire to achieve a complete description of a situation; quite the contrary: McCarthy (1963) refers to "rich objects" which can never be fully described, and he utilizes this in the properties of his calculus. Situations do not suffer from a PCD; they rejoice in the impossibility of CD.


3.1 Still, is the PCD another central problem for AI? Perhaps van Brakel has discovered a new problem that we have to solve. But he has not, for the PCD is irrelevant to AI.

3.2 It seems clear that in general, complete characterizations of things, in the sense of necessary and sufficient conditions for them to exist, or correct biconditional definitions of the concepts, cannot be given. CD is impossible. But achieving such complete descriptions is not (and never has been) a goal of most AI formalization; nor, it would seem, should it be (since it is impossible). We fail to see any problem here. If van Brakel regards this as a policy of denial, then we exult in denial; but he seems to be simultaneously asserting that CD is impossible and that AI must face up to the need to achieve it. You can't have it both ways: It is not reasonable to expect AI programs to aspire to omniscience and then complain that the omniscience problem is too hard.

3.3 Perhaps it should be emphasized that the strategy of building an axiomatic description of an environment does not imply that one must achieve or seek "complete" descriptions. Axiomatic formulations often (indeed, usually) admit several alternative interpretations. Nevertheless, useful inferences can often be drawn from them. There is no need to have definitions of any of the terms in the formalization. The sentences form a web of constraints on possible meanings of the terms, but they need not (and often, provably, cannot) uniquely pin down these meanings. The idea that they should do so, or must in order to achieve some kind of adequacy, is a legacy from the use of logical formalisms to establish the foundations of mathematics, a very different kind of enterprise.
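The point about partial axiomatizations can be illustrated with a deliberately toy theory; the predicates here are our own inventions for the example:

```latex
% Two axioms, deliberately partial:
\forall x.\; Goldfish(x) \rightarrow Fish(x)
\forall x.\; Fish(x) \rightarrow LivesInWater(x)

% These do not define "Goldfish"; structures with wildly different
% extensions for it satisfy both axioms. Yet a useful inference is
% licensed in every such model:
Goldfish(g) \vdash LivesInWater(g)
```

The axioms constrain the possible meanings of the terms without uniquely pinning them down, and the inference goes through all the same.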

3.4 It seems likely that most of human conceptual knowledge is of this character. Someone might know a great deal about, say, goldfish, or copper, or how to whittle a whistle, without being said to have complete knowledge of this, or to be able to give definitions of the words, or even knowing how such definitions might be constructed. A demonstration may help. Perhaps you, the reader, have never come across the term "bulbs of percussion." The Appendix [6.1] explains it. (It is amusing to make some guesses first.) After reading the Appendix you will have a fairly good idea of what this means and will be able to explain it to others. Your understanding will be linked to other concepts you already have, and it will in turn depend for its richness and depth on how rich and deep these are: Some of you will know more about that area than others do. But you won't have a complete description of it, whatever that would be. You probably won't be able to answer all possible questions about bulbs of percussion (how big are they? what color?), and you may not be able to recognize one if it were presented to you. Most of our knowledge is like this, it seems: partial, more or less sketchy, but all connected together, and adequate to support the inferential and pragmatic tasks that confront us from day to day. That is what AI is trying to capture in its representational games.

3.5 So it seems that "complete descriptions" are not used by human thinkers, cannot be given in general, and have no particular connection to the goals of AI. No problem.


4.1 We could forget the FP, however, and read van Brakel more sympathetically (and therefore less carefully). There is a larger issue in his critique, and it is one that other philosophers have raised. It has also been called the problem of meaning (Searle 1983) or "grounding": How can we capture the meanings of concepts in representations? Can a collection of, say, sentences in a logic ever be said to somehow encode or correspond to real, living beliefs, beliefs with intentional meaning, beliefs which are about something?

4.2 We believe that this is what is really worrying van Brakel, and why he regards the PCD as so important. Although he does not state it quite explicitly, his argument goes something like this.

    [a] Any formalization in the tradition of logicism must use symbols.
    [b] The meaning of these symbols must somehow be specified completely.
    [c] But this cannot be done without using other symbols, since...
    [d] ...definitions of terms themselves use symbols.
    [e] Therefore, logistic (linguistic) representations must be inadequate.

The mistake, as we have observed, is [b]. But the general worry is valid: how can we ever attach "formal" symbols to the actual world? This is what Harnad (1990) calls the "symbol grounding problem."

4.3 Van Brakel's implied solution is to move away from knowledge representation. He waves us in the direction of a more physicalist approach to AI, citing Clancey (1991), Dreyfus & Dreyfus (1990) and Brooks (1991) with approval. Why is this the right approach? Here is his argument in its entirety:

    "During human evolution, 'nature' has hit on a practical solution
    to the frame problem. We have to find the PHYSICAL description of
    how humans do it."

Van Brakel seems to assume that any account of how cognition evolved can only be given in physical terms. This is quite unjustified, however. In fact, all the obvious evidence suggests that the differences between humans and other mammals at almost all physical levels of description, from chemistry up to detailed neuroanatomy, are relatively minor. Right now, the best places to look for insights into the content of mental structure seem to be cognitive and developmental psychology. Neuroscience is advancing rapidly and in exciting ways, but any such "physical" insight can only relate to a theory of mental structure via some mapping between mental structure and architecture, and these all involve some notion of representation (Shastri 1993). Brooks's "mobots" model primitive insects and have no mental life whatever. (Van Brakel points out, "But these mobots really exist in their world." True: in fact, anything that exists, really exists in its world. So what?)


5.1 Like other critics, notably John Searle (1980), van Brakel assumes a highly oversimplified model of AI representations, typically based on Schank's scripts (Schank & Abelson 1977). This seems to lead him to several unjustified assumptions, including that logistic representations cannot represent time-varying information, unconscious or "moral" beliefs, and so forth. Van Brakel's only argument is the bald assertion that the kind of "relational" knowledge of, for example, chairs, which a child obtains from acquaintance with them "has little to do with the descriptive knowledge in a 'chair-script'." We challenge those who make such "arguments" to explain carefully why any kind of knowledge should be excluded from the range of symbolic representations.

5.2 Finally, to return to nitpicking, there are many other confusions in van Brakel's discussion. He talks of using the "world as its own model." This phrase is used in parts of AI as a slogan for the observation that an intelligent agent can often rely on its environment to provide it with certain information in its own future, which it therefore does not need to remember or encode internally. It would be more accurate to call this idea "the world as a notebook." The question of the extent to which this is a useful strategy is interesting, but used in this way as a methodological slogan it is misplaced, because the position it opposes is a fantasy. Nobody has suggested that an intelligent agent, machine or otherwise, must carry the burden of total observation of the environment. That would be to fall into the clutches of the PCD.


6.1 In the Stone Age, craftsmen made flint tools by skillfully striking one stone on another. The first hard blow, designed to split a new stone in a certain way, typically produced a series of concentric shock marks radiating out from the point of impact on the smooth surface of the split flint. Archeologists call this pattern of marks, still visible on many stones, the "bulb of percussion." (Thanks to Jim Doran, University of Essex, for this example.)


Boden, M.A. (ed.) (1990) The Philosophy of Artificial Intelligence. Oxford: Oxford University Press.

Brooks, R.A. (1991) Intelligence without representation. Artificial Intelligence 47: 139-159.

Clancey, W.J. (1991) The frame of reference problem in the design of intelligent machines. In Architectures for Intelligence: The Twenty-Second Carnegie Symposium on Cognition (K. vanLehn, ed.). Hillsdale NJ: Lawrence Erlbaum, pp. 357-424.

Dennett, D. (1987) The Intentional Stance. MIT Press.

Dreyfus, H.L. & Dreyfus, S.E. (1990) Making a mind versus modelling the brain: Artificial intelligence back at a branch-point. In Boden (1990), pp. 309-333.

Fodor, J. (1987) Modules, frames, frigeons, sleeping dogs and the music of the spheres. In Pylyshyn (1987).

Ford, K. & Hayes, P. (eds.) (1991) Reasoning Agents in a Dynamic World: The Frame Problem. JAI Press.

Harnad, S. (1990) The Symbol Grounding Problem. Physica D 42: 335-346.

Hayes, P. (1987) What the frame problem is and isn't. In Pylyshyn (1987).

Hayes, P. J. (1992) Summary of: K. Ford & P. Hayes (1991): Reasoning Agents in a Dynamic World: The Frame Problem. PSYCOLOQUY 3(59) frame-problem.1.

McCarthy, J. (1963) Situations, actions and causal laws. Stanford Artificial Intelligence Project, Memo 2.

McCarthy, J. & Hayes, P. (1969) Some philosophical problems from the standpoint of artificial intelligence. In B. Meltzer & D. Michie (eds.) Machine Intelligence 4. Elsevier.

Pylyshyn, Z. (ed.) (1987) The Robot's Dilemma. Ablex.

Schank, R. C. & Abelson, R. P. (1977) Scripts, Plans, Goals, and Understanding: An Inquiry into Human Knowledge Structures. Erlbaum.

Searle, J. R. (1983) Intentionality: An Essay in the Philosophy of Mind. Cambridge University Press.

Searle, J. R. (1980) Minds, brains and programs. Behavioral and Brain Sciences 3: 417-457.

Shastri, L. & Ajjanagadde, V. (1993) From simple associations to systematic reasoning. Behavioral and Brain Sciences (to appear, with commentaries).

Van Brakel, J. (1992) The complete description of the frame problem. PSYCOLOQUY 3(60) frame-problem.2.
