John H. Andreae (1998) A Robot Brain With Mediating States. Psycoloquy: 9(59) Representation Mediation (5)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).
Psycoloquy 9(59): A Robot Brain With Mediating States

A ROBOT BRAIN WITH MEDIATING STATES
Commentary on Markman-Dietrich on Representation-Mediation

John H. Andreae
Dept of Electrical & Electronic Engineering
University of Canterbury
Christchurch, New Zealand

andreae@elec.canterbury.ac.nz

Abstract

The concept of a mediating state is identified in general systems theory, but it becomes valuable in explaining how a general learning robot develops a relationship with its environment. The definition of mediating state should be given in terms of Shannon's (1948) original concept of information.

Keywords

compositionality, computation, connectionism, discrete states, dynamic systems, explanation, information, meaning, mediating states, representation, rules, semantic content, symbols
1. In engineering, there are two main ways of describing a system, the input-output method and the state-transition method. In the first, we express output as a function of input and time. This can be viewed as an historical approach because the present output is derived from the history of the system's input from when the system was in some initial state long ago. The second way is more useful because it avoids the need for an input history. In this state-transition approach, we distinguish two processes: (i) the change of state due to current state and current input and (ii) the output arising from the current state (and, optionally, the current input as well). The state is given by the contents of the memory elements of the system because if we know what is in every memory element (be it an electrical capacitance in a circuit, a RAM bit in a computer, or a synaptic strength of a neuron) of the system at the present time, then no knowledge about past inputs can improve our ability to predict how the system will behave in the future. The present state encapsulates the past history of the system.
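
As an illustration of the state-transition approach, here is a minimal Python sketch (a toy added for concreteness, not anything from the target article or from PP); the dynamics and names are illustrative only:

    # Minimal sketch of the state-transition description of a system.
    # The present state summarises the input history, so future behaviour
    # is predicted from (current state, current input) alone.

    def next_state(state, inp):
        # (i) change of state due to current state and current input
        return state + inp            # toy dynamics: an accumulator

    def output(state):
        # (ii) output arising from the current state
        return 2 * state

    state = 0                         # the initial state "long ago"
    for inp in [1, 0, 3, -2]:         # the input history
        state = next_state(state, inp)
        print(output(state))          # prints 2, 2, 8, 4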

2. If the system in question interacts with some part of the world, then outputs from the system become inputs to that part of the world, and inputs to the system become outputs from that part of the world. In engineering practice, the part of the world involved is usually quite distinct from the rest of the world and is referred to as the "plant." The system is designed to control the plant. Questions of optimality and stability are paramount. The outputs from the plant must be chosen so that the states of the plant are "observable," and the inputs to the plant must make the plant "controllable." Various goals, tasks or optimal conditions can be specified. The system's state variables can be discrete or continuous in time and magnitude.
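
For a linear plant, the standard textbook statement of these two requirements (restated here only for concreteness; nothing system-specific is assumed) is:

    % Discrete-time linear plant with state x_t, input u_t, output y_t:
    \[
      x_{t+1} = A x_t + B u_t, \qquad y_t = C x_t ,
    \]
    % controllable when the inputs can drive the state anywhere, and
    % observable when the outputs determine the state (Kalman rank tests):
    \[
      \operatorname{rank}\,[\, B \;\; AB \;\; \cdots \;\; A^{n-1}B \,] = n, \qquad
      \operatorname{rank}\begin{bmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{bmatrix} = n .
    \]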

3. A common variety of control system includes a model of the plant in its structure (model reference control system, or MRCS). The model is usually adaptive to the extent that parameters are varied to keep the model "representing" the plant as accurately as possible. In other words, the model must behave like the plant, or the behaviour of the model must correlate with the behaviour of the plant. If the structure and function of the plant are well understood, the model may be designed to have states in 1:1 correspondence with those of the plant. In that case, one can say that the states of the model correspond to (or represent) the states of the plant exactly. This is the ideal case, but it will do for my argument.
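
A minimal sketch of such parameter adaptation, assuming a one-parameter scalar plant (purely illustrative; no particular MRCS design is implied):

    # Toy adaptive model of a scalar plant x' = a_true * x + u.
    # The model's parameter a_hat is corrected until the model's
    # predictions track the plant, i.e. until the model "represents" it.

    a_true = 0.8       # unknown plant parameter
    a_hat = 0.0        # model's current estimate
    x = 1.0            # measured plant state
    rate = 0.1         # adaptation gain

    for step in range(200):
        u = 1.0 if step % 2 == 0 else -1.0          # probing input
        x_next = a_true * x + u                     # plant moves to its next state
        prediction = a_hat * x + u                  # model predicts that state
        a_hat += rate * (x_next - prediction) * x   # LMS-style correction
        x = x_next

    print(round(a_hat, 2))                          # converges towards 0.8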

4. Having described a situation (MRCS + plant), which can be defined with considerable mathematical precision and detail, I can now make contact with the Markman & Dietrich (1998; M&D) target article. The existence of "representation" in the MRCS is obvious because the model inside the MRCS represents the plant outside the MRCS. Not all the state variables of the control system take part in this representation process, but there is no doubt that the state variables of the model do. Treating the model as a subsystem of the MRCS, we can talk about the states of the model representing states of the plant. Presumably this justifies our calling the states of the model "mediating states." Other state variables in the control system may be seen to be involved in maintaining the validity of the model's representation, but, since they do not refer to (i.e., are not about) anything particular in the world, they are not themselves mediating states. In this "designed system" situation, we look from outside and representation seems to come before the mediating state. We have to know what is being represented before we can say which states are mediating states.

5. Of course, control systems are of little interest to cognitive scientists and my introduction of the MRCS was mainly to establish that I understood what M&D mean by "mediating state" in a situation which can be crisply defined. There are several ways we can relax the assumptions made in control systems engineering. In my work (Andreae, 1998), the control system becomes a robot moving about in the world. (To be honest, experiments have had to be limited to modest simulations or trivial real robots.) The robot uses associative learning to explore its world and to set its own goals. Its top-level executive "program" is the changing set of associations, which it accumulates from a tabula rasa start, and the networks of probabilistic transitions between them. The system is called PURR-PUSS, or just PP. A range of experiments have been described in which PP is put into different microworlds with different robot bodies. The details are not important here.

6. Because PP is not programmed to do "cognitive processing" in the way that many AI systems are (see Wagman, 1998, for descriptions of a good collection), and because I have no way yet of analysing large networks of associations, I normally find out what it knows by observing its behaviour -- in the same way that one can tell that a dog knows where it has hidden a bone by observing it go to that place to dig it up. Only in the simplest and most restricted of experiments can I analyse what is going on in PP's "brain." One such experiment (given in detail in chapter 6, Andreae 1998) reconstructs a paradigmatic experiment of Elizabeth Bates's (1979) with PP replacing the child subject. In the original experiments, a child is seated in front of a table on which toys can be placed either within reach or out-of-reach. The caregiver is placed to one side. After the age of about 9 months, a child in this situation communicates its desire for the caregiver to move an out-of-reach toy closer by (i) alternating eye contact, (ii) "signals" contingent on the caregiver's behaviour, and (iii) "ritualized movements." In the PP experiments, I have observed behaviour similar to that reported for the child and have identified the parallel networks of associations responsible for the behaviour. The plans made by PP to reach its self-selected goals also strengthen the case for accepting that there is some minimal "communication of intentions" by PP.

7. PP is a parallel processing system with several associations "active" at any time. These are the associations which match the events in short-term memory (STM) (as in a production system). These internal events in STM are stimuli or actions, or predictions of stimuli or actions, or even the occurrence of other associations. Coming together in an association, a group of these internal events provides a context for an associated internal event. An important role of the associations is to introduce context into PP's stored experience. When describing the internal events of associations for our own use, we give the source of each event in the outside world; for example, the foveal eye stimulus will be what the fovea of PP's eye is registering. PP itself has no information about that source. PP knows only about the temporal relationships between its internal events because its stimuli and actions are arbitrary signals and PP knows nothing of their origins. PP can, however, learn through its associations that after some pattern of internal events (i.e., in a certain context of stimuli and actions), a particular action will consistently result in particular stimuli being received. PP learns consistency of behaviour in its environment. PP can use these consistent contexts (CCs) to reach its novelty goals, to obtain a reward, or to avoid pain.
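
A toy sketch of what a consistent context amounts to (the data structures and event names are invented for this example and are not PP's actual mechanism):

    # Count how often an action, taken in a given context of recent internal
    # events, is followed by a particular stimulus; call the context
    # consistent when one outcome dominates.

    from collections import defaultdict, deque

    counts = defaultdict(lambda: defaultdict(int))  # (context, action) -> stimulus -> count
    stm = deque(maxlen=3)                           # short-term memory of internal events

    def record(action, next_stimulus):
        context = tuple(stm)                        # current pattern of internal events
        counts[(context, action)][next_stimulus] += 1
        stm.append(action)
        stm.append(next_stimulus)

    def consistent_contexts(threshold=0.9, min_n=5):
        ccs = []
        for key, outcomes in counts.items():
            total = sum(outcomes.values())
            best_stim, best_n = max(outcomes.items(), key=lambda kv: kv[1])
            if total >= min_n and best_n / total >= threshold:
                ccs.append((key, best_stim))        # context reliably predicts best_stim
        return ccs

    # e.g. pressing a lever when a light is seen reliably yields food:
    for _ in range(10):
        stm.clear()
        stm.append("see_light")
        record("press_lever", "food")
    print(consistent_contexts())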

8. Contexts can exhibit consistency over a number of occasions only if they correspond to the outside world being in the same or equivalent states. PP's information about these world states is restricted to the CCs with which it identifies them. And remember, PP's stimuli and actions are arbitrary, distinguishable signals with no reference or representation (for PP) in the outside world. These world states, identified by PP through its CCs, are, I presume, the mediating states for which M&D argue. Significantly, these mediating states come before representation. There is no outside designer, as there was for the MRCS, to require representation and then provide a model with mediating states. Indeed, in the case of PP the route from mediating states to representation is still to be established.

9. While accumulating CCs through interaction with its world, PP is also building up networks of transitions which describe how it moves between these CCs: transition networks of mediating states. This is like the model of the MRCS, but there is no designer saying what the states represent in the outside world. There are, however, other entities in the world: humans and possibly other robots. One day in the distant future, PP may be developed to the point where it can establish common conventions about such world states with humans and other robots. That will be the beginning of shared meaning and language. Somewhere along that route we will be able to discern the beginnings of representation.
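
A correspondingly minimal sketch of a transition network over CC labels (again illustrative; the labels and counts are invented for the example):

    # Estimate, from experience, the probability of moving from one
    # consistent context to another.

    from collections import defaultdict

    transitions = defaultdict(lambda: defaultdict(int))  # cc -> next_cc -> count

    def observe(cc_sequence):
        for here, there in zip(cc_sequence, cc_sequence[1:]):
            transitions[here][there] += 1

    def transition_probability(here, there):
        total = sum(transitions[here].values())
        return transitions[here][there] / total if total else 0.0

    observe(["at_door", "in_corridor", "at_charger",
             "at_door", "in_corridor", "at_door"])
    print(transition_probability("in_corridor", "at_charger"))  # 0.5 in this toy run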

10. As a result of looking for M&D's "mediating states" in my PP system, I have found CCs which appear to correspond to what M&D have in mind. PP will succeed in establishing mediating states only if its stimuli are rich enough to make states in its environment "observable" in the control-system sense. Its actions must make the environment as "controllable" as necessary for it to achieve its goals. In general, the mediating states will be probabilistic, because the CCs will exhibit varying degrees of consistency. Nevertheless, in discussing autonomous learning systems, whether human or robotic, which acquire information about their environments by interaction, the concept of "mediating state" is a firm stepping-stone on the way to the less well understood concept of representation.

11. Let's speculate on the path from the first mediating states to representation. Take the case of two PP robots in a small world. If each robot can see some of its own body and hear its own "voice," then its mediating states will separate into those of its own body, those of the other robot's body, and those of the rest of the world. It may well be that the robots' stimuli have to be structured so as to aid the learning of spatial relations (e.g., my proposal for "painted vision," Andreae 1998). Smith (1996) has exposed the difficulties of acquiring an understanding of objects and the self-other relationship in his brilliant book "On the Origin of Objects." The incipient awareness of objects "in the world outside" is surely the beginning of representation. Sounds are stimuli which are mediating states in their own sound space as well as being linked to particular spatial directions and thence to other objects. Later, when sounds are clustered into words, there will be the abstract mediating states of a language space for our robots to acquire. Extending all the way from the simple naming of objects to subtle mathematical concepts, this last part of the journey will fully deserve the term "representational".

12. Having found M&D's concept of "mediating state" valuable for my own work, I regret having to end with a quibble about their use of the word "information" in their defining condition 12 (iii). It seems to me that Shannon's (1948) information theory captures exactly the "selection between possibilities" that a robot's sensors and effectors implement. This is the information that isolates the mediating states by distinguishing contexts and it should be used in the definition. Mediating states are identified without a flow of information content to or from the system. Information content comes later, with representational content; it should, therefore, be excluded from the definition.
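
For concreteness, the Shannon quantity in question is the standard one (a textbook formula, restated here rather than anything introduced by M&D):

    % Self-information of selecting outcome x, and average information (entropy):
    \[
      I(x) = -\log_2 p(x), \qquad H = -\sum_i p(x_i)\,\log_2 p(x_i) .
    \]
    % A stimulus that narrows the situation from, say, 8 equally likely
    % consistent contexts down to 1 conveys log2(8) = 3 bits, regardless of
    % what those contexts are "about".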

REFERENCES

Andreae, J.H. (1998) Associative Learning for a Robot Intelligence. London: Imperial College Press.

Bates, E. (1979) The Emergence of Symbols. New York: Academic Press.

Markman, A.B. & Dietrich, E. (1998) In Defense of Representation as Mediation. PSYCOLOQUY 9 (48) ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/1998.volume.9/psyc.98.9.48.representation-mediation.1.markman

Shannon, C.E. (1948) A Mathematical Theory of Communication. Reprinted in Shannon, C.E. & Weaver, W. (1949) The Mathematical Theory of Communication. Urbana: University of Illinois Press.

Smith, B.C. (1996) On the Origin of Objects. Cambridge MA: The MIT Press.

Wagman, M. (1998) The Ultimate Objectives of Artificial Intelligence. Westport, Conn.: Praeger.

