Peter M. Milner (2000) Autonomous Brains and Autonomous Robots. Psycoloquy: 11(052) Autonomous Brain (4)

Volume: 11, Issue: 052, Article: 4
PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).
Psycoloquy 11(052): Autonomous Brains and Autonomous Robots

Reply to Herrmann on Milner on Autonomous-Brain

Peter M. Milner
Psychology Department
McGill University
1205 Dr. Penfield Ave.
Montreal, QC.


Herrmann approaches "The Autonomous Brain" from the viewpoint of a robotics expert. I was interested and enlightened to discover parallels between the "New Robotics" and some of the developments in behavioral neuroscience noted in the book. The two fields have much to learn from each other.


KEYWORDS: association of ideas, attention, behaviour model, intention, motivation, self, serial order
1. Many modern robots are considered to be autonomous, which Herrmann (2000) defines as the property of producing output not controlled by input. I had no formal definition in mind when I chose the title of my book (Milner 1999a,b). I wanted to include the word "brain" to indicate that the book was about the nervous system, but almost all appropriate adjectives for the word had already been taken. I was therefore delighted when "autonomous" popped into my head, probably an association with Hebb's (1949) "autonomous central process". It seemed to fit my theme of a brain that exerts a degree of control over its sensory input. If I had given much thought to the philosophical implications of an autonomous brain I might have concluded that it was not entirely chance that the name was still available! The robots Herrmann describes are autonomous in that, if not stimulated for some time, an internal control initiates movement and they set forth in search of a more exciting environment.

2. Herrmann's paragraph 5 appears to refer to the behavior model presented in the book, but it misses the important point that actions are usually initiated, not by the external environment, but by what Herrmann calls "internal desire." This need state is associated, either innately or by learning, with a variety of response plans that are likely to be effective in satisfying it, or that have satisfied it in the past. The aroused plans then sample incoming environmental stimuli for a releasing trigger -- the usual default plan if no such trigger is found being to explore.

3. The reason I favour this sequence of events (and this should be of interest to robot builders) is that the normal environmental state is very complicated. It is much easier to determine whether any part of an input matches a template (preselected by need) than to make a complete analysis of the input and then decide which of the resulting list of objects to respond to. If the matching input is an innate reward, the response plan is amplified until it is executed. If it is a stimulus previously associated with a reward, it will have acquired some of the response facilitating power of rewards; thus it amplifies the response plan with which it is associated.
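The economy argued for in paragraphs 2 and 3 can be caricatured in a few lines of code. This is a minimal sketch, not the book's model: the need arouses a short list of trigger templates, the input is scanned only for those triggers, and exploration is the default when none matches. All names (`select_plan`, the need and plan labels) are hypothetical.

```python
# Illustrative sketch: a need preselects a small set of trigger templates,
# and incoming stimuli are scanned only for those templates, instead of
# being fully analysed and then matched against every known object.

def select_plan(need, plans, stimuli):
    """Return the first response plan whose trigger matches a stimulus.

    `plans` maps each need to (trigger_template, plan_name) pairs.
    """
    aroused = plans.get(need, [])      # plans preselected by the need state
    for trigger, plan in aroused:
        if trigger in stimuli:         # cheap template match, not full analysis
            return plan
    return "explore"                   # default plan when no trigger is found

plans = {"hunger": [("food_dish", "approach_dish"), ("prey_odour", "stalk")]}

print(select_plan("hunger", plans, {"rock", "food_dish"}))  # approach_dish
print(select_plan("hunger", plans, {"rock", "tree"}))       # explore
```

Note that the cost of the scan scales with the handful of aroused templates, not with the full contents of the visual field, which is the point of the paragraph above.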

4. In paragraph 6 Herrmann returns to the question of autonomy, stating that "prediction is clearly the key to autonomy". I do not understand this. I can think of predictors that are not autonomous, and the animal or robot that wanders aimlessly, but autonomously, is not predicting anything. The foundations of autonomy are built into animals, as into some robots. A spider's web is the beautiful result of unlearned autonomous behavior. The outpourings of fine music by members of the Bach family and other great composers display autonomy qualified by experience and environmental influences. Evolution is the real key to autonomy, as Herrmann implies elsewhere in his review.

5. I disagree with Herrmann's comment that, with one exception, I do not elaborate on the importance of prediction and planning. These processes are basic to most if not all the learning situations that are described in Chapter 9, though they may not have been mentioned explicitly. For example, in passive avoidance I postulated that the predicted (by association) unpleasant consequence of a planned response is the firing of a basal ganglia inhibitory system that blocks the response. In instrumental reward learning the response plan whose predicted outcome has the strongest associations with the response amplifying path of the basal ganglia (which is innately activated by rewards) is the one most likely to be executed. During extinction of a conditioned response the nonappearance of an expected reward results in rebound firing of the aversion path in the basal ganglia, resulting in the prediction of an aversive state that eventually suppresses the response plan. In short, the expectancy learning paradigm that I have espoused assumes that almost every learned performance involves prediction of the consequence of a response plan.
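The expectancy-learning paradigm in the paragraph above can be sketched as a toy selection rule, under the assumption that each plan's predicted outcome carries a learned value: positive values stand for associations with the reward-amplifying path, negative values for the aversion path that blocks a plan. The function name and numbers are illustrative, not taken from the book.

```python
# Hedged sketch of expectancy learning: plans whose predicted outcome is
# aversive (negative value) are inhibited; among the rest, the plan with the
# strongest predicted reward is the one executed.

def choose_plan(plans):
    """plans: {name: predicted outcome value}; block aversive, pick strongest."""
    viable = {name: v for name, v in plans.items() if v > 0}
    if not viable:
        return None                      # nothing predicted to pay off: no action
    return max(viable, key=viable.get)   # strongest reward prediction wins

plans = {"press_lever": 0.8, "grab_food": -0.5, "groom": 0.1}
print(choose_plan(plans))   # press_lever
```

Extinction, in these terms, would be a gradual shift of a plan's value from positive to negative as the expected reward repeatedly fails to appear.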

6. I am grateful to Herrmann for introducing me to the work of Brooks (1999) and his robotic creations. Although the robots built so far are simpler than most invertebrates, the principles by which they operate seem similar to those I attributed to hypothetical non-learning automatons in chapter 2. I hope that before long more advanced robots will vindicate some of the suggestions I make to explain learned behavior.

7. Paragraph 11 of Herrmann's review starts "As indicated by Milner, stimulus equivalence remains a basic problem", indicating to me that my ability to communicate is deplorable. In fact, I believe just the opposite: that -- in principle, at least -- stimulus equivalence is no longer a problem. Unfortunately, the information supporting that view is scattered through several chapters and, as there are a number of complications, it was wrong of me to expect the reader to reassemble it. Let me try to provide some links. Stimulation of different sets of visual receptors by a similar pattern can give rise to the same response. The most likely explanation is that after multiple levels of convergence the activity funnels to a high-level collection of neurons that are uniquely fired by the shape of the retinal pattern, but are indifferent to its location, orientation and size. Details of cortical circuits capable of such transformations were published many years ago (Milner, 1974), but at that time the psychological community was not receptive to the idea of genetically determined cortical microstructure (as required by the theory) and instead continued its futile search for a learned solution.

8. I considered this to be a solution to the problem that Lashley (1942), Hebb (1949), and Pribram (1971) tried to solve. It is a moderately straightforward extension of the theory advanced by Hubel and Wiesel (1959) to explain their recordings from single cortical cells. A program based on this principle for recognizing simple shapes at any size, orientation or position in the visual field is available for downloading. Unfortunately, this is only the first step toward achieving stimulus equivalence. If several patterns are present in the field their inputs converge onto a set of central neurons different from the set fired by any one of them alone. It is, of course, most unusual for only one object at a time to be present in the visual field.
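The convergence idea in the two paragraphs above can be caricatured as a template swept over every position of a binary "retina", so that one high-level unit fires for a shape wherever it appears. This toy handles translation invariance only; size and orientation would need further pooling stages, and it is a sketch of the principle, not of the downloadable program.

```python
# Toy sketch of convergence funnelling: a high-level "engram" unit fires if
# its shape template matches the retinal input at any position.

def detects(retina, template):
    """True if `template` occurs anywhere in the 2-D binary `retina`."""
    rh, rw = len(retina), len(retina[0])
    th, tw = len(template), len(template[0])
    for y in range(rh - th + 1):
        for x in range(rw - tw + 1):
            if all(retina[y + i][x + j] == template[i][j]
                   for i in range(th) for j in range(tw)):
                return True              # the high-level unit fires
    return False

L_shape = [[1, 0],
           [1, 1]]
retina = [[0, 0, 0, 0],
          [0, 1, 0, 0],
          [0, 1, 1, 0],
          [0, 0, 0, 0]]
print(detects(retina, L_shape))  # True, wherever the L appears
```

The pooling over positions is also where positional information is discarded, which is the difficulty taken up in paragraph 10.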

9. To rescue the idea, a way must be found to segregate the significant part of the input. I suggested that continuous outlines may produce synchronous firing of the neurons they stimulate, out of synchrony with neurons fired by the outlines of other objects in the field. Thus, objects would be processed separately by the stimulus equivalence circuits, though the rapidly fluctuating sequence of objects represented by the central neurons might be confusing. Clearly, to allow useful control of responses, one object must be selected by attention, and the others suppressed.

10. It is impossible to understand perception without taking account of attention. Not only is attention necessary for segregating significant sensory input from clutter, it is also required to resolve another problem introduced by stimulus equivalence. Animals need to know not only what a predator (or prey) looks like, they need to know where it is. Neurons that are consistently fired by a certain object (I use "engram" as a shorthand for such neurons) fire no matter where the image of the object impinges on the retina; thus they provide no positional information. By 1974 it was known that the reverse path through the visual cortex was comparable in extent to the forward path. It seemed to be the obvious channel through which the neurons representing the target of an action could trace the source of their sensory input and thus locate the target.

11. It is not unreasonable to suppose that, being innate, stimulus equivalence paths do not vary greatly from one individual to another. Central neurons fired by objects that have been important for our survival as we evolved (faces, animals, plants, etc.) could have developed innate connections with appropriate action plans. This cannot apply to most of the things in our present environment. The neurons fired by books and thermometers must acquire connections to and from response plans by experience. It is important to recognize, though, that the neurons in question do not necessarily depend on experience; it may be only their connections with response plans that do.

12. Another serious problem with stimulus equivalence is that equivalence is not a fixed property; it depends on the task. My brain may classify all these ceramic disks as plates, but some are white and some blue. If my task is to set the table with blue plates, the distinction becomes important. A stimulus equivalence mechanism that did not allow such distinctions would be disastrous. The mechanism I propose allows the task to determine classifications. If I am rapped over the knuckles whenever I select a white plate, neurons fired by white plates (but not all those fired by blue plates) become associated with an aversion system that inhibits response plans, so perhaps after a period of refusing to touch plates of any color, I may select only blue plates. In a different context I may be requested to use only white plates, so the task instructions determine what features of the stimulus I attend to.
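The plate example above can be sketched as follows, under the assumption that each object is a bundle of features and that the task attaches a learned aversion weight to some feature: the same equivalence circuit classifies every disk as a plate, but a vetoed feature blocks the plan to pick it up. Names and numbers are illustrative only.

```python
# Sketch of task-dependent equivalence: colour is ignored until the task has
# attached an aversion weight to one colour's feature neurons.

def pick_plate(plates, aversion):
    """Return the first plate none of whose features carries a learned aversion."""
    for plate in plates:
        if not any(aversion.get(feature, 0) > 0 for feature in plate):
            return plate
    return None   # every plan is inhibited: touch no plate at all

plates = [("plate", "white"), ("plate", "blue")]
print(pick_plate(plates, {}))              # ('plate', 'white'): colour ignored
print(pick_plate(plates, {"white": 1.0}))  # ('plate', 'blue'): colour now matters
```

The period of refusing plates of any colour corresponds to an aversion weight that has temporarily generalized to the shared "plate" feature before discrimination narrows it to "white".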

13. Herrmann is right to point out that in theory dogs could be bred with an inherited ability to recognize food dishes, though most of the traits of domesticated dogs are already present to some extent in wild canines. My parable of the dishes and the dogs was intended only to illustrate the value of generalization and the ability to overcome counterproductive generalization by learning to discriminate, and to show how the model of perception that I propose accommodates these.

14. Herrmann asks whether there should be any advantage in having innate templates for behaviors rather than perceptions. My answer would be that we need both, but the ability to learn new combinations of the templates confers a great advantage. Humans can innately produce a few dozen distinct vocal sounds but can combine them (serially) to communicate an infinite amount of information. The visual cortex has templates for lines, which can be combined (in parallel) to represent an infinite variety of shapes. Some combinations, both of movements and of sensory inputs, are also inherited as templates, but the ability to learn new combinations is much more powerful.

15. According to Herrmann, artificial networks cannot yet serve as realistic models of temporal processing in animals. This is certainly due in large measure to the failure of the artificial synapse to keep pace with what is now known about the mechanisms of the biological synapse. The advantage of constructing imaginary models using physiological concepts is that one is not bound by rigid learning rules and may even go beyond clearly established biological mechanisms, as Hebb did when he postulated the properties of his learning synapse. I can see no reason, other than technical complexity, why robots and "neural" networks should not incorporate more adventurous connectivities.

16. Herrmann points out that there are gaps in my coverage of brain research; for example, I do not discuss hippocampal place cells. The role of the hippocampus in navigation and spatial learning is well established and important but the book was not meant to review the whole field of behavioral neuroscience. My primary goal in writing it was to draw attention to the fact that motivation and intention play an essential role in the cortical processing of sensory information. I tried to choose simple examples of learned and unlearned behavior to illustrate this theme. As I pointed out in the chapters on memory, I regard the hippocampus as similar in many respects to the neocortex, but with less enduring storage ability. I assume that this property allows animals to acquire and maintain up-to-date maps of their environment and to use them for navigation. The process is not likely to be simple, however.

17. I hope this reply to Herrmann's review may have persuaded him that although my book reflects the past of some aspects of neuroscience, it may also make some contribution to the future of robotics and artificial intelligence.


Brooks, R.A. (1999) Cambrian Intelligence. Cambridge, MA: MIT Press.

Hebb, D.O. (1949) The organization of behavior. New York: Wiley.

Herrmann, M. (2000) Autonomous brains and autonomous robots. PSYCOLOQUY 11(036) psyc.00.11.036.autonomous-brain.2.herrmann

Hubel, D.H. & Wiesel, T.N. (1959) Receptive fields of single neurones in the cat's striate cortex. Journal of Physiology, 148, 574-591.

Lashley, K.S. (1942) The problem of cerebral organization in vision. In: H. Kluver (Ed.), Visual Mechanisms. Biological Symposia, vol. 7. Lancaster, PA: Cattell Press. Pp. 301-322.

Milner, P.M. (1974) A model for visual shape recognition. Psychological Review, 81, 521-535.

Milner, P.M. (1999a) The Autonomous Brain. Mahwah, NJ: Erlbaum.

Milner, P.M. (1999b) Precis of "The Autonomous Brain" PSYCOLOQUY 10(071) psyc.99.10.071.autonomous-brain.1.milner

Pribram, K.H. (1971) Languages of the brain. Monterey, CA: Brooks/Cole.
