Pat Hayes & Ken Ford (1993) Problems With Frames. Psycoloquy: 4(22) Frame Problem (6)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).

PROBLEMS WITH FRAMES
Reply to Freeman on Ford & Hayes on the Frame Problem

Pat Hayes
Beckman Institute
University of Illinois
Urbana IL 61801

Ken Ford
Institute for Human & Machine Cognition
University of West Florida
Pensacola, FL 32514

phayes@herodotus.cs.uiuc.edu kford@trivia.coginst.uwf.edu

Abstract

It is clear that Freeman (1992) has not read the book itself, but is only responding to the summary of it that appeared in PSYCOLOQUY (Hayes 1992); he therefore appears to miss the point in several respects.

Keywords

Frame-problem, artificial intelligence, temporal logics, independent persistence, attention, Hume, dynamic frames, qualification problem.

1. In his review of Ford & Hayes (1991), Freeman (1992) makes it clear that he has not read the book itself, but is only responding to the summary of it that appeared in PSYCOLOQUY (Hayes 1992). It is probably as a result of this that he appears to miss the point in several respects. His review is in five parts; we will respond to each in turn, addressing the arguments and assertions that Freeman offers in each paragraph.

2. In paragraph 1.0, Freeman begins by noting that in the summary of our book, we evoke the image of a child playing with bricks. He writes that "this appeal to an example of biological intelligence does not seem to be pursued in the topics listed in the Table of Contents." The story about the child and the brick, however, was offered as an example not of "biological intelligence," but of ordinary, everyday intelligence, regardless of how it is implemented. Freeman seems to assume an a priori distinction between biological and other intelligences, which begs the questions at issue.

3. The rest of Section 1.0 of Freeman's review is given over to a long quotation from John von Neumann (1958) that makes rather the opposite point from the one Freeman supposes. Ironically, von Neumann observed that, from his perspective, the most striking aspect of the nervous system is that its functioning is prima facie digital. Von Neumann, with his usual insight, recognized that some kind of internal language must be used by the brain, and that conventional mathematics and logic may be a kind of constructed secondary language, built much like a virtual machine (or high-level representation) on top of a more primary, lower-level language. He clearly appreciated that the higher levels of a system are rarely isomorphic, or even similar, to the structure of its lower levels.

4. The unwarranted (and usually implicit) assumption that there ought to be similarity of structure among the various levels of organization of a computational system leads to great confusion. Any complex computational model will have many such levels of description, all equally valid. A full account of the machine's behavior will probably in practice require a description at several levels, but no simple reduction to a single level of rule-matching will suffice, and certainly the basic hardware level will not be suitable for understanding the complexity of the machine's behavior. There is no mystery here: such simple reductionism can be similarly rejected by considering any other sufficiently complex system or device.

5. The frame problem can be stated quite clearly with no reference to the notation used to formalise it. It arises from the ontological framework of world-states and actions which is often taken (in AI) to be the basis of meaning. Whether these states are encoded as sentences of mathematical logic or as distributed patterns of neural activity -- even if we were to accept that these are distinct -- is irrelevant to the expressive difficulty the problem identifies.
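
To make the difficulty concrete, here is a minimal sketch of how it arises in a state-action formalisation of the child's situation (the axioms and predicate names below are our illustration, not anything in Freeman's text). An effect axiom describes what picking up a brick changes:

    Holds(InAir(Brick), do(PickUp(Brick), s))

But nothing follows from this about what does not change. To conclude that the table is still brown, or that the cup is still on it, one must add explicit "frame axioms" such as

    Holds(On(Cup, Table), s) -> Holds(On(Cup, Table), do(PickUp(Brick), s))
    Holds(Brown(Table), s)   -> Holds(Brown(Table), do(PickUp(Brick), s))

and, in general, one such axiom for each action/property pair the action leaves alone -- a collection that grows multiplicatively with the number of actions and properties, and soon swamps the description of what the actions actually do. Nothing in this sketch depends on the axioms being sentences of logic rather than patterns of neural activity; any encoding of states and actions must somehow convey the same vast body of non-changes.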

6. In his paragraph 1.1, Freeman asserts that "framing" is a dynamic process, offering his rabbit experiments (Freeman, 1991) as evidence. Now, whatever Freeman means by "framing," it seems to have little to do with the frame problem that is the topic of discussion here. Freeman's notion of framing seems to have something to do with the brains of rabbits, and so presumably with those of humans. However, the AI frame problem arises in a context which assumes that the content of mental representations can be captured largely independently of how it is encoded in neurons. No doubt Freeman would rather not talk about representations at all, and that is his prerogative, but then he is also not talking about the frame problem.

7. In his paragraph 1.2, Freeman writes: "[It] follows that centrally stored information about the environment is not invariant." Who would think that it was invariant? Why must it be centrally stored? Again, what does this have to do with the frame problem? The frame problem arises in reasoning about change, not in accounting for the structure of memory.

8. In paragraph 1.3, Freeman points us toward the mysteries of chaotic dynamics simulated by networks of nonlinear ordinary differential equations as a possible solution to what he perceives to be AI's difficulties, including the frame problem. These ideas are currently very fashionable, and are often vaguely suggested as having something to do with how psychology might emerge from neurology, but those who are trying to give careful models of this connection do not seem to find them of much direct use.

9. Concerning Freeman's paragraph 1.4: in saying that when a child decides to pick up a brick it knows that it will then be in the air and "that's all," we did not mean to imply that that is all the child knows; rather, the child knows that the rest of the dynamic world will not be altered by the act of picking up. Children understand how their actions are limited in their effects, but this is hard for us to formalise. Perhaps this is because logic is not the appropriate language in which to encode the child's beliefs (a seductive thesis, given how hard this all seems), but then Freeman must indicate how the brain manages to think about bricks and tables at all, and why these methods are somehow inaccessible to logical formalisation. The "new approaches" Freeman refers to are indeed interesting, but they must somehow be made to account for symbolic thinking (see, for example, Shastri & Ajjanagadde 1993), and the frame problem then arises. The problem is one of representation, and comes up regardless of the implementational technique used to realise such thinking.
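
One way to see why this is hard: the tempting move is to replace the multitude of frame axioms with a single default rule of persistence, schematically something like (again our illustration of one family of proposals, not a solution we endorse)

    Holds(p, s) & not Abnormal(p, a, s) -> Holds(p, do(a, s))

read as: a property persists through an action unless something abnormal is going on. But now all of the difficulty has been packed into Abnormal; saying precisely which properties an action does disturb, without licensing unwanted conclusions, reintroduces the original problem in a new guise, and the nonmonotonic reasoning needed to apply such defaults raises well-known technical problems of its own.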

10. By the way, Gibson's (1979) "ecological approach" is very much in line with the way AI thinks about perception. Both agree that it is important to look at the world from the point of view of the perceiver's conceptual framework. Gibson was always opposed to the idea of mental representations because he (wrongly) assumed that they implied a homuncular fallacy, but AI has adopted many of his ideas enthusiastically.

REFERENCES

Ford, K. & Hayes, P. (eds.) (1991) Reasoning Agents in a Dynamic World: The Frame Problem. JAI Press.

Freeman, W. J. (1991) The Physiology of Perception. Scientific American 264: 78-85.

Freeman, W. J. (1992) Framing is a Dynamic Process. PSYCOLOQUY 3(62) frame-problem.3.

Gibson, J. J. (1979) The Ecological Approach to Visual Perception. Boston: Houghton Mifflin.

Hayes, P. J. (1992) Summary of: K. Ford & P. Hayes (1991) Reasoning Agents in a Dynamic World: The Frame Problem. PSYCOLOQUY 3(59) frame-problem.1.

Shastri, L. & Ajjanagadde, V. (1993) From Simple Associations to Systematic Reasoning. Behavioral and Brain Sciences (to appear, with commentary).

Von Neumann, J. (1958) The Computer and the Brain. New Haven: Yale University Press.

