Patrick J. Hayes (1993) Modeling our Adaptive Intelligence, not God's. Psycoloquy: 4(42) Frame Problem (12)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).

MODELING OUR ADAPTIVE INTELLIGENCE, NOT GOD'S
Reply to Fetzer on Ford & Hayes on Frame Problem

Patrick J. Hayes
Beckman Institute
University of Illinois
Urbana, IL 61801

Kenneth M. Ford
Institute for Human & Machine Cognition
University of West Florida
Pensacola, FL 32514

phayes@herodotus.cs.uiuc.edu kford@trivia.coginst.uwf.edu

Abstract

Fetzer misunderstands our use of the term "frame problem": contrary to the original definition (McCarthy, 1963), he takes the term to refer to a larger problem of change, which is an instance of the classical problem of induction.

Keywords

Frame-problem, artificial intelligence, temporal logics, independent persistence, attention, Hume, dynamic frames, qualification problem.

1. Fetzer (1993) states his views on the frame problem, induction and common sense more clearly than in his chapter in our edited volume on the frame problem (Fetzer, 1991; Ford & Hayes, 1991). As a result, we think one can see some of the mutual misunderstandings that have given rise to many of the disagreements between us. This short Reply attempts to clarify matters further.

2. Whenever two sane people each find the other's view so obviously wrong that it is hard to see how any rational person could hold it, one should suspect a failure of communication. Just such a failure seems to be at work here. First, the term "frame problem" is being used in different senses. Second, "common sense" is being taken to serve different roles.

3. We apologize for having to reiterate this elementary point of scholarship, but the term "frame problem," as introduced by John McCarthy in 1963, refers to an apparently simple but annoyingly stubborn technical problem in describing change in logic. That this particular problem is one of representation is simply a matter of definition. This "narrow, technical" frame problem is what the term has been taken to mean throughout the AI literature on the subject for the past 20 years.
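
To make the narrow problem concrete: in the situation calculus of McCarthy and Hayes (1969), an axiom describing a painting action gives its effect on color, but one must also write a separate "frame axiom" for every property the action leaves untouched; for instance (the predicate, action, and constant names here are ours, purely for illustration),

    \forall x, y, s .\; On(x, y, s) \rightarrow On(x, y, result(paint(A, Red), s))

which says that painting block A red moves nothing. One such axiom is needed for nearly every action-property pair, and the narrow frame problem is to find a representation that avoids having to write them all out.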

4. Fetzer evidently takes the term to refer to something else: a broader problem which is not merely a problem of representation. Unfortunately, Fetzer never adequately defines this larger problem, but we take it that his position can be summarized thus: the larger problem is the "problem of change," which is an instance of the classical problem of induction, and the narrow problem has the larger one so inextricably bound up in it that it is impossible to solve the narrow problem without (first) solving the larger one. This might all be true given Fetzer's (mis)understanding of the frame problem; however, as noted above, the term "frame problem" already has a definition, conveniently provided by those who first described the problem. Fetzer is unfortunately not alone in his penchant for redefining the frame problem as equivalent to, or a special case of, other problems. But as with the frame problem, those problems also already have labels and meanings attached to them: the problems of induction, relevance, and symbol grounding, to name but a few. It would be a boon to communication and understanding if we could agree on terminology.

5. We have much sympathy with the second of these ideas, if we understand it correctly. It seems similar to the methodological point made by McCarthy and Hayes (1969): first decide what beliefs are to be represented, then decide how to represent them. And in this case, the frame problem has arisen in efforts to design logic-based representational schemes for expressing an agent's beliefs about the simple dynamics of the everyday world. The problem is how to represent and organize just what it is that we know when we believe, say, that pouring milk into a saucer will make things on a table change their positions only under very unusual circumstances, or that the shape of a penny means that it is likely to lie flat if left alone on a hard surface, or that color is changed only by a limited collection of rather drastic operations such as painting.

6. Notice that the problem is not that of determining whether such statements are true. Rather, it is: what exactly is their content? And this, we think, is where Fetzer parts company with the AI tradition. Fetzer is concerned with the classical scientific problem of how to describe processes of change in the actual world. He is after the facts. Our aim is more like that of psychology: its focus is not the world, but people's everyday beliefs about the world.

7. In writing this response we found ourselves trying to be charitable as we wondered what could account for Fetzer's confusion here, and indeed, to be fair, AI writers are sometimes not quite careful enough in making the relevant distinctions. For example, Hayes (1987) stated that the frame problem "is to find a better way of expressing the stability in which we live and on which the success of our everyday reasoning probably depends," which certainly sounds as if it were concerned with truth. Viewed from outside a representational perspective, and outside the context of the technical discussions of logic from which it originally emanated, the frame problem could indeed sound like a problem of describing how things really do change.

8. There is a key ambiguity in the use of "we" and "our." Since human success is what motivates us, we often refer to human talents by using "we": thus, the passage quoted above could have referred to "the stability in which humans live and on which the success of their everyday reasoning probably depends." But we also use the first person to refer to AI scientists who are trying to design an artificial thinker with human talents, and this may be causing the confusion. Let us therefore use the third person to refer to humans in general, as though we scientists were somehow a separate species. Since humans are as well fitted to their ecological niche as any other successful creature, they have a particular way of conceptualizing and perceiving the world. It is not necessarily what will ultimately turn out to be the accurate way of doing so, of course, since it has evolved in the context of people's special dimensions, perceptual abilities and so forth; nevertheless, this cognitive structure seems to be impressively successful and adaptable, more so than that of any other creature on the planet. This is what we wish to study.

9. But since we are human, we can use introspection to investigate this structure. This is admittedly a dangerous procedure, but still a useful one, and sometimes -- as in linguistics -- it seems to be an essential part of the scientist's empirical repertoire. Consequently, the scientist is both investigator, with complete freedom to seek and state the truth in whatever way seems best, and subject, who must report his impressions accurately. Much of linguistics involves reporting a "feeling" that some sentence is "right." Similarly, in trying to capture "common sense," much of what we do is report that some conclusion seems "obvious"; and just as in linguistics, there is a constant effort to establish regions of fairly firm agreement between subjects. Likewise, AI prose often uses the first person to engage readers in this effort, asking them implicitly to probe their own intuitions to confirm the claims being made about human intuitions in general. We communicate both as scientists and as the living subject matter of our science. But we must keep the distinction clear. The frame problem arises because we scientists cannot make the formalisms behave in a way that seems, to us as subjects, just plain obvious. But none of this has to do with whether the formalisms express scientific truth. The world as described by quantum physics, for example, is not the niche that humans have evolved to inhabit.

10. Fetzer refers to "common sense" witheringly. Surely, he urges, this petty mishmash of ideas is unlikely to help us state the real causal rules by which the world governs its behavior. Fetzer assumes we are after better ways of getting at the truth than this, and he urges us to turn to science, or perhaps philosophy. But this misses the point of the entire enterprise within which the frame problem arises. AI begins with a profound respect for ordinary knowledge, because it seems to be the material of the mental fabric that enables humans to make rapid and fairly reliable predictions about their world. Our technical aim is not to replace it, but to build a computational simulacrum of it. We accept "common sense" as a given and seek to understand how it could work. "Common sense" is not an inadequate solution or a weak method; it is our subject matter. Fetzer may disagree, but then he is simply disagreeing about what he finds interesting.

11. Take Fetzer's example of a stabbing through the heart. As he points out, this might not be fatal if the recipient is on a heart-lung machine. As Hayes (1991) pointed out, one can play this game forever: perhaps the stabbing caused a violent reaction which disrupted the machine's functioning, leading to a fatality. Or in turn, maybe this disruption triggered an alarm which summoned a nurse in time to fix everything. (Imagine a party game in which each player must add an extra condition that reverses the outcome.) What conclusions should one draw from such examples? If one's goal is to state true necessary and sufficient causal conditions for a given outcome -- Fetzer's goal -- the conclusion seems pessimistic. If, however, one's goal is to understand how human reasoning can draw appropriate conclusions, one would probably conclude that while it is in general impossible to state complete causal conditions, humans seem able to draw reasonable conclusions from partial causal descriptions, apparently by using heuristics which conclude, perhaps tentatively, that no other relevant causal influences are present. All of this is "common sense." Several computational frameworks for such optimistic but tentative causal reasoning have indeed been constructed.
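
To give the flavor of such a framework, here is a minimal sketch of our own in Python; it is not any published system, and every name in it is invented for illustration. A rule licenses its default conclusion whenever its trigger holds and none of its known defeaters is present, so adding a fact can remove a conclusion:

    from dataclasses import dataclass, field

    @dataclass
    class DefaultRule:
        cause: str        # triggering event
        effect: str       # default outcome
        defeaters: set    # conditions whose presence blocks the default

    @dataclass
    class Situation:
        facts: set = field(default_factory=set)

        def conclude(self, rule):
            # Tentatively assume that unmentioned defeaters are absent.
            if rule.cause in self.facts and not (rule.defeaters & self.facts):
                return rule.effect
            return None

    # Fetzer's example, encoded with invented labels.
    stab = DefaultRule(cause="stabbed through the heart",
                       effect="victim dies",
                       defeaters={"on heart-lung machine"})

    s = Situation({"stabbed through the heart"})
    print(s.conclude(stab))    # "victim dies": the default holds

    s.facts.add("on heart-lung machine")
    print(s.conclude(stab))    # None: the new fact defeats the default

The reasoning is nonmonotonic in exactly the sense discussed below: enlarging the set of facts withdrew a conclusion rather than adding one.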

12. Two final remarks. Fetzer accuses one of us (Hayes) of Pickwickianism for insisting that the frame problem is a narrow one of representation when the larger "problem of change" has no solution in sight. If we were trying to build a computational scientist whose grasp of the real world was secure against any future shift of ideas -- an artificial God, perhaps -- then we might agree. But this again reveals Fetzer's misunderstanding. Our (ultimate) dream, to create an artificial human-level thinker, is more modest. It is not Pickwickian to observe that nature has produced human-level thinkers who are not prone to framish problems, and to conclude that the difficulties we are finding in designing our artifacts must be solvable or somehow avoidable.

13. Nor is it Pickwickian to observe that these human-level thinkers manage to get about in the world apparently without the benefit of having had philosophers or scientists solve the "problem of change." We suspect there is a pun on this phrase. Fetzer's "problem of change" is the problem of predicting the future accurately given a partial description of the present. Our "problem of change" is to understand how it is possible for people to do this rapidly and reliably most of the time. These are not the same. For example, a central idea in our problem of change seems to be that of using the limits of one's knowledge to conclude that something unexpected is not going to occur, while retaining the inferential flexibility to adjust one's beliefs when the unexpected does happen. This ability to reason "nonmonotonically" has no particular mandate when one is trying to get the real facts straight.
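
Formally, this heuristic can be written as a default rule in the style of default logic (the notation below is illustrative, not a proposal): for a fluent f, an action a and a situation s,

    \frac{Holds(f, s) \;:\; Holds(f, result(a, s))}{Holds(f, result(a, s))}

read: if f holds in s, and it is consistent to assume that f still holds after a, then conclude that it still holds. A later fact showing the assumption to be inconsistent simply withdraws the conclusion, with no contradiction.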

14. We have further disagreements with Fetzer about the right way to describe scientific laws and other matters. But we concede that we are amateurs on his philosophical territory. Rather than fill up the internet with more arguments, we are content to watch Fetzer (1993), van Brakel (1993) and other professional philosophers cite such eminences as Hempel and Salmon at one another.

15. In summary: AI takes it as established that "common sense" exists, finds it interesting, and seeks to study it. Fetzer is interested in ways of characterizing the form of ultimate causal truths. These are different goals, with different presuppositions and different methods. Fetzer is welcome to pursue his own goal, but he should not confuse it with ours, or think that his ideas and methods are relevant to ours. We should simply agree to disagree and each continue with our own business.

REFERENCES

Fetzer, J. H. (1991) The Frame Problem: Artificial Intelligence Meets David Hume. In: Ford & Hayes (1991), 55-70.

Fetzer, J. H. (1993) Philosophy Unframed. PSYCOLOQUY 4(33) frame-problem.10.

Ford, K. M. & Hayes, P. J. (eds.) (1991) Reasoning Agents in a Dynamic World: The Frame Problem. Greenwich, CT: JAI Press.

Hayes, P. J. (1987) What the Frame Problem Is and Isn't. In: Pylyshyn, Z. W. (ed.) The Robot's Dilemma: The Frame Problem in Artificial Intelligence. Norwood, NJ: Ablex.

Hayes, P. J. (1991) Commentary on "The Frame Problem: Artificial Intelligence Meets David Hume." In: Ford & Hayes (1991), 71-76.

McCarthy, J. (1963) Situations, Actions and Causal Laws. Stanford Artificial Intelligence Project, Memo 2.

McCarthy, J. & Hayes, P. J. (1969) Some Philosophical Problems from the Standpoint of Artificial Intelligence. In: Meltzer, B. & Michie, D. (eds.) Machine Intelligence 4. Elsevier.

Van Brakel, J. (1993) Unjustified Coherence. PSYCOLOQUY 4(23) frame-problem.7.

