This commentary supports Bridgeman's general thesis. I promote the use of robot design and simulation in psychological argument, and show that there are good engineering reasons for trying to work out what is needed to build a robot that is aware of what it is doing. Acceptance of Bridgeman's thesis that consciousness is the operation of a plan-executing mechanism should enable one to get down to the design of appropriate mechanisms for making plans, storing them, executing them and monitoring them.
1.1 Bridgeman (1992) has given a sensible account of the origin of consciousness. My interest in consciousness relates to the design of a conscious robot (Andreae, 1987, 1988) and, in particular, to the development of the learning system PURR-PUSS (PP for short; Andreae, 1977; Andreae & MacDonald, 1991; MMSR, 1972-91) with a full range of intentionality (Searle, 1983). Acceptance of Bridgeman's thesis that consciousness is the operation of a plan-executing mechanism should enable one to get down to the design of appropriate mechanisms for making plans, storing them, executing them and monitoring them. But is it quite that simple?
1.3 The advantage of tackling consciousness from the robot design viewpoint is that we can neither sidestep difficulties nor let little homunculi creep into our systems. Also, there are good practical reasons for wanting a robot to "know what it is doing", i.e. to be "conscious of what it is doing." For example, on the ocean floor and in space, where delay-free, broadband communication with robots is impossible, robots will have to be allowed to act on their own, because we will not be able to share their sensory experiences in real time or advise them of suitable actions quickly enough in emergencies. The unpredictability of such environments will preclude the use of programmed automata unable to learn, plan and make judgements. These robots will have to convince us that they can be entrusted with expensive equipment and important jobs.
2.1 Bridgeman's description of the planning mechanism (para 2.7 of the target article) suggests that a plan is a distinct data structure, constructed and stored in a special place where it can be monitored. The plan then becomes a new part of the world in which thought can act. If the plan needs to be monitored, then this might involve another level of planning (planning the monitoring), and the results of this level will have to be put somewhere else. Bridgeman welcomes the recursiveness of planning (para 3.7), but plans cannot be allowed to pile up indefinitely. I suggest that not only is it "meaningless to look for a box labelled 'consciousness'" (para 2.9), but there is probably no box labelled 'plan' either. In the PP system, experience of the world accumulates in a network of contexts or situations (actually a collection of networks, but this is not important for the current discussion and would take too long to explain). The network is PP's model of the world, and plans are made by predicting a path through the network and leaving "footprints." PP can then carry out its plan if the real world behaves like the plan. Any monitoring would amount to the same prediction process following the footprints and deleting them or adding new ones. To avoid an infinite regress of planning or monitoring levels, everything must be kept to one level.
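The one-level scheme described above can be illustrated with a small sketch. This is not PP's actual code: the class, method names and data structures below are illustrative assumptions. The point it demonstrates is that the plan is not a separate "box" but a set of footprints laid on the world model itself, and that monitoring is the same prediction process run again, deleting footprints as the world confirms them and laying new ones when it does not.

```python
from collections import deque

# Hypothetical sketch of one-level planning with "footprints", loosely
# modelled on the description of PP above. All names here are assumptions.

class ContextNetwork:
    """A network of contexts (situations) linked by predicted transitions."""

    def __init__(self):
        self.edges = {}       # context -> {action: predicted next context}
        self.footprints = []  # the plan lives IN the network, not in a plan box

    def learn(self, context, action, next_context):
        """Accumulate experience as a predicted transition."""
        self.edges.setdefault(context, {})[action] = next_context

    def make_plan(self, start, goal):
        """Predict a path from start to goal, leaving footprints along it."""
        self.footprints = []          # any failed plan leaves no footprints
        frontier = deque([(start, [])])
        seen = {start}
        while frontier:               # breadth-first search over predictions
            context, path = frontier.popleft()
            if context == goal:
                self.footprints = path
                return path
            for action, nxt in self.edges.get(context, {}).items():
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [(context, action, nxt)]))
        return None

    def step(self, observed_context):
        """Execution and monitoring are one process: follow the footprints,
        deleting each one the world confirms; replan if the world diverges."""
        if not self.footprints:
            return None
        context, action, predicted = self.footprints[0]
        if observed_context != context:
            # The world diverged: monitoring is just the same prediction
            # process laying fresh footprints from where we actually are.
            goal = self.footprints[-1][2]
            self.make_plan(observed_context, goal)
            if not self.footprints:
                return None
            context, action, predicted = self.footprints[0]
        self.footprints.pop(0)        # delete the footprint once acted on
        return action
```

For example, after `net.learn("at_door", "open", "door_open")` and `net.learn("door_open", "walk", "inside")`, calling `net.make_plan("at_door", "inside")` leaves two footprints, and `net.step("at_door")` returns `"open"`. Because replanning reuses `make_plan`, no second-level monitor is needed, which is the point of keeping everything to one level.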
2.2 When Bridgeman talks about the significance of language in consciousness, he is more convincing than Jaynes (1976), but he does seem to allow a homunculus to take over in this statement (para 3.7): "Not the least of these [advantages] is that one also hears one's own speech, so that the plan-monitoring mechanism has immediate access to the plan-executing mechanism's products." What is this plan-monitoring mechanism? As soon as it is identified as a separate processor, it becomes the homunculus that must know everything about everything, or there is another escalation of levels. Like Bridgeman, I too am attracted to Vygotskii's (1962) inner speech. In addition, I am impressed by the way language enables us to think at different levels without changing level: In one sentence, I can ask you for something, comment on why I asked you, and comment on my comment. It is significant that, in our thinking, speech continues as a single stream, even though we may change from talking to someone else, to planning an action, and then to commenting to ourselves on what we have just done. The way speech can thread through real and imagined behaviours like acting and planning has been shown in a little experiment (Andreae & MacDonald, 1991) in which PP first learns to count with real world aids and then applies that ability to a counting task in a changed world.
2.3 To guide the overall development of the PP learning system, we have been trying to establish conditions for a robot to exhibit consciousness. The idea has been to define a class of robot and then refute its adequacy for consciousness so that a better definition can be proposed. The definitions are formulated in the spirit of Dennett's (1987) intentional stance. These robots are given the name "yammy," rather than "conscious," to emphasise that they are only candidates for consciousness. The series of definitions has reached "yammy-5."
3.1 Bridgeman's definition of a plan as a scheme that can control a sequence of actions to achieve a goal is too restricted. Plans must be able to include the actions and intentions of others. If a plan is a path through a model of the world then the model of the world must be capable of representing the activities of other entities. The definitions of yammy robots are behavioural but not behaviourist, as is illustrated by the definition of yammy-1 robots:
3.2 Definition. A robot will be described as yammy-1 when its behaviour is most efficiently predicted in terms of its own intentions (= plans) and its knowledge (= information) of the intentions of other purposeful (= having plans) entities.
3.3 Although this definition excludes hill-climbers, model-reference adaptive controllers and simple goal-seeking automata by including the intentions of others, it lacks a key feature of knowing: the robot's knowledge of what other entities know of its intentions. (There isn't space here to describe examples of yammy-1 robots or any of the supporting arguments for the definition and its refutation.) The yammy-2 definition rectified this omission:
3.4 Definition. A robot will be described as yammy-2 when its behaviour is most efficiently predicted in terms of its intentions, its knowledge of the intentions of other purposeful entities, and its knowledge of their knowledge of its intentions.
3.5 An elaborate simulation of a society of yammy-2 robots was created by Wilson, Bruce & Won (1984), but watching it provided little evidence for the nonconsciousness of the robots. This should not have surprised us, because we treat the characters in a film as alive and conscious even when they are only cartoon characters.
3.6 The yammy-4 definition went further and, in addition to insisting that the robot be irreversible and that it exist in an open environment, it added the robot's ability to predict its own and other entities' future intentions. The robot that knows what it is doing should know what it would do if a cooperating robot had to change plans. This requirement would make considerable demands on Bridgeman's plan-making mechanism.
4.1 If a conscious robot is going to convince us that it is reliable and properly motivated, it will need to be able to answer questions about its behaviour. At first sight, an explanatory expert system would seem to suffice, because it can explain to a user what it has been doing and what it would do in various situations. However, I know of no expert system that can explain its behaviour to itself.
4.2 Bridgeman links language to the plan-executing module (para 3.2). In the following yammy-5 definition, language is seen as an essential part of consciousness, the talking-to-ourselves consciousness (Calvin, 1987), with which we are so familiar.
4.3 Definition. A robot will be described as yammy-5 when its behaviour is most efficiently predicted in terms of the answers it gives to its own questions about its own intentions.
4.4 Again, I must apologize for omitting the arguments, tasks, models and facts leading to the definition. We have not yet succeeded in designing a robot that satisfies the yammy-5 definition, so the opportunity hasn't arisen for refuting the consciousness of such a robot. We are currently working with PP in a situation in which it can learn to communicate intentions like a 9-month-old infant. However, contrary to Bridgeman's claim (para 3.9), we find that self-consciousness is not necessary at the simplest level of communicating an intention. The next challenge is to set up an equally simple situation in which PP can "talk" about its own intentions.
4.5 Bridgeman's target article should help to bring consciousness down to earth. The aims of this commentary have been (a) to support Bridgeman's general thesis, (b) to promote the use of robot design and simulation in psychological argument, and (c) to show that there are good engineering reasons for trying to work out what is needed to build a robot that is aware of what it is doing.
Andreae, J.H. (1977) Thinking with the Teachable Machine. London: Academic Press.
Andreae, J.H. (1987) Design of a Conscious Robot. Metascience 5: 41-54.
Andreae, J.H. (1988) The Conscious Robot Argument. MMSR (1972-91): UC-DSE/30 5-24.
Andreae, J.H. & MacDonald, B.A. (1991) Expert Control for a Robot Body. Kybernetes 20(4): 28-54.
Bridgeman, B. (1992). On the Evolution of Consciousness and Language. PSYCOLOQUY 3(15) consciousness.1
Calvin, W.H. (1987) The Brain as a Darwin Machine. Nature 330 (6143): 33-34.
Dennett, D.C. (1987) The Intentional Stance. Cambridge: M.I.T. Press.
Jaynes, J. (1976) The Origin of Consciousness in the Breakdown of the Bicameral Mind. Boston: Houghton Mifflin.
MMSR (1972-91) Man-Machine Studies Reports nos. UC-DSE/1-41. Ed. J.H. Andreae. Dept of Electrical & Electronic Engineering, University of Canterbury, Christchurch, N.Z. ISSN 0110 1188. Available free from author while stocks last, or from University of Canterbury Library, or from N.T.I.S., 5295 Port Royal Road, Springfield, Va. 22161 U.S.A.
Searle, J. (1983) Intentionality: An Essay in the Philosophy of Mind. Cambridge: Cambridge University Press.
Vygotskii, L.S. (1962) Thought and Language. Ed. and translated by E. Hanfmann and G. Vakar. Cambridge: M.I.T. Press.
Wilson, J.C., Bruce, A.G. & Won, M.C. (1984) Presenting a Competitive Society of Energy-seeking Robots. MMSR (1972-91): UC-DSE/23 5-15.