J. Michael Herrmann (2000) Autonomous Brains and Autonomous Robots. Psycoloquy: 11(036) Autonomous Brain (2)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).
Psycoloquy 11(036): Autonomous Brains and Autonomous Robots

AUTONOMOUS BRAINS AND AUTONOMOUS ROBOTS
Review of Milner on Autonomous-Brain

J. Michael Herrmann
Max-Planck-Institut für Strömungsforschung
Bunsenstr. 10, 37073
Göttingen, Germany
http://www.chaos.gwdg.de

michael@chaos.gwdg.de

Abstract

Milner's "The Autonomous Brain" presents ample evidence in support of the behavioral side of the debate on information processing vs. active perception in the brain. The importance of the information processing aspect is not diminished by the emphasis on behavioral autonomy. The study of autonomous robots (i.e., the embodiment of autonomous brains) is essential for relating brain research to underlying philosophical questions, for finding a convincing formulation of an effective working hypothesis and for testing theories of brain function.

Keywords

association of ideas, attention, behaviour model, intention, motivation, self, serial order
1. Scientists use their brains preferentially to search for intelligible structure in the world. Animals, on the other hand, and many non-scientist humans, are less concerned with understanding puzzling combinations of sensory stimuli. Instead, they usually just behave the way they are inclined to. Milner's (1999a, 1999b) book lists evidence in favor of a neuropsychological theory that accounts scientifically for the latter, seemingly more natural, aspect of brain function.

2. The distinction between stimulus-based and behavior-based functions underlying Milner's approach can be seen as a question of causation. Do stimuli cause the observable behavior or are stimuli selected by representations of behavior which are in turn activated by intentional subsystems? We can easily think of examples that support either scheme, although it appears that only comparatively simple behaviors can be caused directly by stimuli. The emergence of behavioral complexity requires larger loops, including feedback and the involvement of different types of stimuli. We will consider here parallel developments in autonomous robotics and brain theory. First, the notion of autonomy.

3. Autonomy can be formally defined as a property of a dynamical system that produces outputs without being controlled by input. In this sense, an autonomous brain would incorporate an ongoing autonomous dynamics acting in an open-loop fashion as a control of behavioral output or of input-output relations. Pacemaker cells or central pattern generators may obey this scheme. With more complex models of brain function, this understanding of autonomy becomes too schematic. No area in the mammalian brain can be identified that is not targeted by excitatory or modulatory input. Various types of proprioceptive, cross-modal, and hormonal inputs as well as multiple levels of control affect the activity of most areas in addition to the basal reward systems. It makes sense to consider the latter as autonomous relative to the more specific cortical pathways.
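The formal sense of autonomy invoked above can be sketched concretely. The following is a minimal, hypothetical model (not taken from Milner's text): a two-unit limit-cycle oscillator standing in for a central pattern generator. It receives no input at all, yet settles into a sustained rhythmic output -- "autonomous" exactly in the sense of producing outputs without being controlled by input.

```python
def cpg_step(x, y, dt=0.01, mu=1.0):
    """One Euler step of a two-unit oscillator with amplitude regulation.

    The nonlinear term mu*(1 - r^2) attracts the state to a limit cycle
    of radius 1, so the rhythm persists without any external drive.
    """
    r2 = x * x + y * y
    dx = y + mu * x * (1.0 - r2)
    dy = -x + mu * y * (1.0 - r2)
    return x + dt * dx, y + dt * dy

# Start from a small perturbation; no input ever arrives.
x, y = 0.1, 0.0
trace = []
for _ in range(5000):
    x, y = cpg_step(x, y)
    trace.append(x)
# The tail of the trace oscillates with amplitude near 1.
```

The model is deliberately trivial; its point is only that open-loop rhythmic output requires nothing beyond internal dynamics.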

4. In autonomous robotics one can avoid having to provide a precise definition by referring to the observable autonomy of animals in their respective environments. An autonomous robot is thus intended to be an artificial animal rather than merely a clever machine. Conversely, properties that have turned out to be important in the control of autonomous robots may help in constructing models of animal brains (cf. e.g. Webb 1995). Irrespective of the present capabilities of real robots and the exorbitant hardware problems (in contrast to the mediation by the highly optimized animal body), it is clear that the environment frames the autonomy of the creature by providing a link between actions and sensations.

5. The choice of an action likely to lead to a reward must be based on a predictive process. Prediction in turn requires some representation of the environment. Say that environmental state A is followed by state B1 if action a1 is performed and by B2 if a2 is performed. If sometimes B1 is more desirable and sometimes B2, then the appropriate action must be selected given A, based on an internal desire state D1 or D2. Percept A together with D1 can then be considered a predictive representation of B1, and the activation of a1 a prediction of B1.
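The selection scheme just described can be sketched in a few lines. The names A, a1, a2, B1, B2, D1, D2 follow the paragraph above; the tables themselves are invented for illustration. The agent consults a forward model to predict the outcome of each action in state A and picks the action whose predicted outcome matches the current desire.

```python
# Forward model: (state, action) -> predicted next state.
# In an animal or robot this would be learned; here it is hard-coded.
forward_model = {
    ("A", "a1"): "B1",
    ("A", "a2"): "B2",
}

# Each desire state names the outcome it would reward.
desires = {"D1": "B1", "D2": "B2"}

def select_action(state, desire):
    """Return the action predicted to reach the desired outcome."""
    goal = desires[desire]
    for (s, action), predicted in forward_model.items():
        if s == state and predicted == goal:
            return action
    return None  # no action is predicted to reach the goal
```

In this picture, the pair (A, D1) functions as a predictive representation of B1, and emitting a1 is itself the prediction.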

6. The importance of prediction and planning is noted by Milner (p. 16), but is not elaborated further. In a notable exception (p. 106), hyperactivity resulting from the inability to make predictions is contrasted with the decay of activity evoked by predictable stimuli. Prediction is clearly the key to autonomy. Consider a continuously working predictor. If it is nontrivial, it will look like an autonomous system in that it produces a sequence of states which are not directly caused by its inputs. On the other hand, prediction is clearly driven by the input, but only as mediated by a representation of the causal structure of the environment; since this representation is internal to the predictor, the predictor is indeed autonomous in a certain sense. It is beyond the scope of this review to give more than an idea of how prediction is the key to autonomy. Implicitly (i.e., as a low-level internal reward scheme), this idea is fundamental to Milner's approach.

7. We can consider the combination (A * D1) as an environmental state that causes the behavior. This means either that A is perceived differently in the presence of D1, or that we are left with a distinction between internal sensation (of D1) and external sensation (of A) which (at least to a single neuron in the cortex) is not given a priori. On the other hand, the boundary between the external and the internal can also extend outside the body, for example, if behavior is influenced by active modifications of the environmental state relative to the animal (e.g., by leaving traces or just by choosing a certain position or view). The distinction between inside and outside a brain is arbitrary and depends on what the distinction is aimed at. In this sense, autonomy is a multi-level property applying to the whole brain only in a metaphorical sense (cf. Jarvilehto 1998).

8. Before the mid-80s, autonomous robots were designed according to the sense-plan-act (SPA) paradigm. Based on "innate" knowledge about the environment (i.e., encoded in a computer program), the robot tried to interpret sensory inputs with respect to an internal model; once the model had been completed with current data, the robot could execute action sequences. The fact that (at least in natural environments) world models have a strong tendency to be incorrect, together with the huge processing costs involved, severely limits any "intelligent" behavior in these models. Following the approach of Braitenberg (1984) and Lieblich and Arbib (1982), the SPA paradigm was challenged in 1985 by Brooks (cf. Brooks, 1999), who proposed a "subsumption architecture." This was less a departure from than a transformation of the SPA scheme. It consists of a set of elementary behaviors which are triggered by external stimuli. Stimuli that require an immediate response, such as an obstacle perceived by touch sensors, cause appropriate behaviors, whereas more complex behaviors such as exploration are performed only if no simpler behavior is necessary.
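The fixed-priority triggering described above can be sketched as follows. This is an illustrative toy, not Brooks's actual architecture: each elementary behavior has a trigger predicate over the sensory input, triggers are checked in a fixed order, and exploration runs only when nothing more urgent fires. The sensor fields and behavior names are invented.

```python
def avoid(sensors):    return "turn-away"
def recharge(sensors): return "seek-charger"
def explore(sensors):  return "wander"

# Highest priority first: reflex-like responses subsume deliberate ones.
behaviors = [
    (lambda s: s.get("touch", False), avoid),           # obstacle felt -> react now
    (lambda s: s.get("battery", 1.0) < 0.2, recharge),  # low energy -> refuel
    (lambda s: True, explore),                          # default behavior
]

def step(sensors):
    """Run the first behavior whose trigger fires on the current input."""
    for trigger, behavior in behaviors:
        if trigger(sensors):
            return behavior(sensors)
```

Note that the control flow is entirely stimulus-driven: the priority list replaces the internal world model of the SPA scheme.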

9. Milner distinguishes two ways in which environmental states enter the brain. There are innate pathways that indicate the presence of important features or objects. These may cause reflexes, but, more interestingly, they influence the processing of other features, which are perceived as a second type of input. Later robot projects (Brooks, 1999) generalized the linear priority scheme of "innate" behaviors with the inclusion of adaptive triggering by inputs and flexible associations among behavioral units. A more general framework for such control structures was provided by the dynamical systems approach (Steels, 1994). Here the order among different behaviors was assigned randomly. The natural consequence was that some of the robots simply failed to produce interesting behavior and that a selection process had to be included in the paradigm.

10. Milner's frequent references to innate capabilities require natural selection to be part of brain theory in order to avoid shifting relevant problems across the fields. Currently, however, this would be too broad a framework for brain research. The question of how to identify different classes of stimuli for fixed low-level evaluation and flexible high-level modulation of behaviors is of current theoretical importance if only for the design of robots (Maes, 1990).

11. As indicated by Milner (pp. 40-56), stimulus equivalence remains a basic problem. In robotics, where poor sensor qualities are predominant, the opposite problem, namely, perceptual aliasing, is important. The two problems refer to degeneracies of the map between sensory space and behavioral space as well as the inverse map; these are already present with nonmodifiable maps. In order to resolve stimulus equivalence, stimuli can be combined into classes whenever a unique behavior is required. On the other hand, stimulus ambiguity is resolved by separating such classes (e.g., using appropriate internal parameters). This "solution" is questionable, because there is no rigorous algorithm for artificial problems let alone real-world situations. To find out whether a certain behavior fits a stimulus requires identifying the stimulus. On the other hand, to note that a stimulus is ambiguous requires an internal structure that is sufficiently flexible to represent the behaviorally distinct instances of the wrongly created stimulus class.
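The two degeneracies named above can be made concrete with an invented example: stimulus equivalence (many distinct stimuli requiring one behavior) concerns the map from stimuli to behaviors, while perceptual aliasing (one sensor reading produced by several distinct world states) concerns its inverse. All stimuli, readings, and behaviors here are hypothetical.

```python
# Stimulus equivalence: several stimuli collapse onto one required behavior.
stimulus_to_behavior = {
    "red-ball":  "fetch",
    "pink-ball": "fetch",
    "red-cube":  "fetch",
    "wall":      "avoid",
}

# Perceptual aliasing: a coarse sensor gives the same reading for
# behaviorally distinct world states.
world_to_reading = {
    "corridor-north": "grey-wall-ahead",
    "corridor-south": "grey-wall-ahead",  # aliased with corridor-north
    "open-field":     "clear",
}

# The equivalence class that must be merged for a unique behavior:
fetch_class = {s for s, b in stimulus_to_behavior.items() if b == "fetch"}

# The world states that the percept alone cannot distinguish:
aliased = [w for w, r in world_to_reading.items() if r == "grey-wall-ahead"]
```

Resolving the first degeneracy means merging classes; resolving the second means splitting them with internal state -- which is why, as noted above, no single rigorous algorithm handles both.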

12. Under special assumptions, stimulus classification and disambiguation is possible. For example, as indicated by Milner, simple stimulus-response loops are innate and are extendable by associative learning to a more general structure in the three spaces of stimuli, representation and behaviors. How such basic loops are to be set up is far from obvious, however, and requires a lot of trial-and-error construction in robotics (or a prior evolutionary process).

13. Consider Milner's passage on generalization (p. 52). "The object that sufficiently resembles a food dish actually may contain food." Surely the concept of a food dish is relevant and understandable (to dogs) when food is predictably found there. However, the generalized dish-ness of an object that is otherwise unrelated to food cannot be defined behaviorally unless one specifies "objective" measures such as feature similarity or the probability of finding food in a certain substitute food dish. These information processing abilities (or some guiding principles for the development of such abilities) must accordingly be innate. The argument that food dish recognition cannot be genetically inherited by dogs can be easily (though not really seriously) refuted by citing the coevolution of dogs and food dishes in human history. More seriously, food dishes are defined as objects containing food, i.e., a network of associations including smells, times-of-day, feeding, as well as visual percepts. Sufficient resemblance is clearly achieved by the smell of food, but can be inhibited by a faint difference in visual appearance -- unless a well-developed processing scheme for visually defined objects is present. It would be very interesting to learn whether such schemes are inherited or can develop from visual experience.
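The "objective measure" option mentioned above -- feature similarity to a prototype -- can be sketched as follows. The features, the prototype, and the threshold are all invented for illustration; the point is only that "sufficient resemblance" presupposes some such machinery.

```python
def similarity(a, b):
    """Fraction of features on which two binary feature vectors agree."""
    return sum(1 for x, y in zip(a, b) if x == y) / len(a)

# Hypothetical features: (round, shallow, smells-of-food, on-floor)
prototype_dish = (1, 1, 1, 1)

def looks_like_food_dish(obj, threshold=0.75):
    """Generalize dish-ness by thresholded similarity to the prototype."""
    return similarity(obj, prototype_dish) >= threshold
```

A visually dish-like object without the smell passes (3 of 4 features agree), while smell alone does not -- mirroring the observation that resemblance can be carried or vetoed by different modalities depending on how the measure weights them.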

14. Objects are accordingly defined as things exhibiting a certain behavior; and structure in the perceptual space is likewise measured in terms of behavior. Should there be any advantage in having innate templates for behaviors rather than perceptual templates? Inexperienced sensory and motor units can each form structures from spontaneous activity and feedback loops, via sensory feedback projections and proprioceptive inputs, respectively. One is again led to conclude that only the causal relation to the environment allows the distinction between sensation and action; both can be subsumed under "behavior". Similarly, attention (the selection among sensory inputs) becomes indistinguishable from the internal selection of an action. Such a convergence would help lead us to a more comprehensive dynamical systems theory. The classical distinction between action and sensation can only be retrieved by taking into account that the creature is situated in an environment which is in part shaped by the body.

15. We have omitted here the temporal dimension. This has turned out to be particularly hard to study with artificial neural networks, which can hardly serve as realistic models for temporal processing in animals. Learning algorithms such as backpropagation in time or temporally asymmetric Hebbian learning rules have not yet achieved the performance and reliability of learning rules for stationary patterns.

16. A few of the important findings in brain research are not considered by Milner, for example, work on place representations in hippocampal cells (Muller, 1996), which somehow emerge from an interaction of exploratory behavior and sensory input; modulatory effects of cerebellar activity on complex movement trajectories are also unmentioned. Here many details are known, but general functional and developmental principles are still a matter of intense debate.

17. Milner's book nicely summarizes an aspect of brain research that has been studied during the past five decades but has not received the interest one might have expected following Hebb's (1949) book. The decades to come should be less concerned with proving or disproving the rigid positions in the debates of the 20th century; instead, the details of brain research will be elaborated within a more general framework given by the understanding of creatures -- natural or artificial -- as autonomous beings. From Milner's book it is clear that the main steps in this direction are still to come. The book reflects the situation of the past decades, showing that very promising approaches are available but buried underneath a large number of not always unambiguously interpretable facts.

REFERENCES

Braitenberg, V. (1984) Vehicles: Experiments in synthetic psychology. Cambridge, MA: MIT Press.

Brooks, R. (1999) Cambrian Intelligence. Cambridge, MA: MIT Press.

Hebb, D. O. (1949) The organization of behavior. New York: Wiley.

Jarvilehto, T. (1998) Efferent Influences on Receptors in Knowledge Formation. PSYCOLOQUY 9(41) ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/1998.volume.9/psyc.98.9.41.efference-knowledge.1.jarvilehto http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?9.41

Lieblich, I. & Arbib, M. A. (1982) Multiple representation of space underlying behavior. The Behavioral and Brain Sciences 5, 627-659.

Maes, P. (ed.) (1990) Designing autonomous agents: Theory and practice from biology to engineering and back. Cambridge, MA: MIT Press.

Milner, P.M. (1999a) The Autonomous Brain. Mahwah, NJ: Erlbaum.

Milner, P.M. (1999b) Precis of "The Autonomous Brain" PSYCOLOQUY 10(71) ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/1999.volume.10/psyc.99.10.071.autonomous-brain.1.milner http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?10.071

Muller, R. (1996) A quarter century of place cells. Neuron 17, 813-822.

Steels, L. (1994) Building agents out of autonomous behavior systems. In: R. Brooks, L. Steels (eds.): Artificial life route to artificial intelligence: Building Embodied, Situated Agents. Chapter 3, pages 83-119. Lawrence Erlbaum.

Webb, B. (1995) Using Robots to Model Animals: A Cricket Test. Robotics and Autonomous Systems 16:2-4, 117-132.

