John H. Andreae (2000) The Human Brain is no Program. Psycoloquy: 11(096) AI Cognitive Science (23)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).
Psycoloquy 11(096): The Human Brain is no Program

THE HUMAN BRAIN IS NO PROGRAM
Commentary on Green on AI-Cognitive Science

John H. Andreae
Department of Electrical & Electronic Engineering
University of Canterbury
Christchurch, New Zealand

andreae@elec.canterbury.ac.nz

Abstract

Green's concerns about the psychological usefulness of AI stem from assuming that the top-level executives of human and robot brains are computer programs. The feasibility of top-level executives not being computer programs is demonstrated. It is concluded that the right AI could be a field for productive cooperation between engineers and cognitive scientists.

Keywords

artificial intelligence, behaviorism, cognitive science, computationalism, Fodor, functionalism, Searle, Turing Machine, Turing Test.

1. Green's (2000) concerns about the psychological usefulness of AI stem from assuming that the top-level executives of human and robot brains are computer programs.

2. Arguments like Searle's Chinese Room and the Lucas-Penrose application of Goedel's theorem confirm our instinctive dismissal of the idea that the "executive" of the brain is a formal computer program. We are not puppets pre-programmed by a designer. Nor is our ability to learn limited to the variation of parameters in an adaptive system.

3. Good Old Fashioned AI (GOFAI) belittled the importance of learning and treated it as secondary to the main programs. It was even argued that the increasing evidence for innate resources and processes somehow diminished the role of learning. Putnam (1968) dismissed that argument many years ago: "Invoking 'Innateness' only postpones the problem of learning; it does not solve it." I am suggesting that learning occupies the driving seat.

4. Outside GOFAI, there has been a continuing call for computer embodiment and interaction with an open world through a robot body (e.g. Pask, 1979; Varela et al., 1991), but this is not feasible unless the top-level executive of the robot brain can interact closely with the world. A top-level computer program cannot do that while maintaining its integrity.

5. Andreae (1977, 1998) has shown how associative learning can generate a top-level executive in the form of a constantly evolving collection of multi-dimensional associations. Through these associations the embodied robot becomes its experience and gains its individuality. By treating new associations as "novelty" goals, the executive of associations is enabled to "set its own goals", a condition for genuine creativity. This learned top-level in no way diminishes the system's need for the innate facilities, processes and modules available to the human. This is unlikely to be the only way to produce a suitable top-level executive, but it is an existence proof. Eventually, there should be better methods using neural networks.
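
To make the idea concrete, here is a minimal sketch in Python of an executive that is nothing but a growing collection of associations, with a simple novelty preference standing in for the "novelty" goals described above. The names and mechanics are entirely hypothetical illustrations; this is not Andreae's actual multiple-context learning system.

    from collections import defaultdict
    import random

    class AssociativeExecutive:
        """A top-level 'executive' that is a growing collection of
        context -> action -> outcome associations, not a fixed program."""

        def __init__(self, actions):
            self.actions = actions
            self.assoc = defaultdict(dict)  # context -> {action: last outcome}

        def act(self, context):
            # Novelty goal: prefer an action not yet associated with this
            # context, so the system in effect sets its own goals.
            untried = [a for a in self.actions
                       if a not in self.assoc[context]]
            return random.choice(untried or self.actions)

        def learn(self, context, action, outcome):
            # The executive *is* its accumulated experience: every
            # interaction adds or revises an association rather than
            # editing a program.
            self.assoc[context][action] = outcome

    robot = AssociativeExecutive(actions=["left", "right", "grasp"])
    a = robot.act("object-in-view")           # tries an untried action first
    robot.learn("object-in-view", a, "object-moved")

The point of the sketch is that nothing above the association table dictates behaviour: what the system does in a context is determined by what it has experienced there, plus its appetite for experiences it has not yet had.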

6. It is important to see that the top-level collection of learned associations, while not being a formal computer program, is still computational. Indeed, by connecting it to a closed world and switching off the novelty goals, the system can be taught to emulate a Universal Turing Machine. However, while the system is interacting with the open world, its executive top-level is not describable in terms of Turing Machines or mapping functions. Of course, low-level processes and modules may well be equivalent to programs carrying out well-defined functions.
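
As an illustration of the computational point, the same kind of association lookup, with novelty switched off and the world closed to a tape, can drive Turing-machine steps. This is a sketch under assumptions: the machine below is a trivial unary incrementer, not a Universal Turing Machine, and it only shows that taught associations can play the role of a transition table.

    # Taught associations: (state, symbol) -> (write, move, next state).
    # With novelty off and the world closed to a tape, looking up an
    # association is exactly a Turing machine's transition step.

    def run_tm(delta, tape, state="q0", head=0, halt="halt"):
        cells = dict(enumerate(tape))           # sparse tape, blank = '_'
        while state != halt:
            symbol = cells.get(head, "_")
            write, move, state = delta[(state, symbol)]  # association lookup
            cells[head] = write
            head += 1 if move == "R" else -1
        return [cells[i] for i in sorted(cells)]

    delta = {
        ("q0", "1"): ("1", "R", "q0"),    # scan right across the 1s
        ("q0", "_"): ("1", "R", "halt"),  # append a 1, then halt
    }
    print(run_tm(delta, list("111")))     # ['1', '1', '1', '1']

Once the tape is replaced by an open world and the novelty goals are switched back on, the lookup table is continually rewritten by experience, and no fixed transition table of this kind describes the system's behaviour.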

7. This brings me to Green's main concern, which is with the relevance of AI to cognitive science. In pursuing the goal of designing a human-like robot, my aim has been to understand how the human brain might work. My debt to philosophy and psychology is considerable, but could my project be of interest to the people in those disciplines? One facility that a robot offers the psychologist is access to its innermost mental states. However, as I show in detail in my book, this access to inner detail does not in itself explain what is happening. The same, of course, is true in neuroscience. If we knew every connection in the brain and could record every neuron firing, we would still not have an explanation of how it works. What does seem certain is that progress with developing a human-like robot will depend crucially on the concepts we develop for describing what is happening in the robot's brain and our ability to relate them to the human case. The discovery of those concepts would be cognitive science.

8. Finally, there is the question of the Turing Test. If one day someone owns a robot that works on its own on the ocean floor or in deep space, where communication is severely limited, it will be important that the robot, on returning to its owner, can (a) answer questions about what it has done, (b) explain what it plans to do, and (c) raise its own questions about relevant matters. Such behavioural responsiveness would amount to a kind of Turing Test, and it is likely to satisfy the hypothetical owner with regard to the robot's intelligence and competence. However, a robot that learns from its experience and has no top-level program to determine its behaviour is going to be a genuine individual with its own goals and motivation. Internal emotional states are difficult enough to infer in humans, so the hypothetical robot will need expressions of emotion that we humans can recognize. Doubtless it will learn to fake and deceive as we do. The more human-like the robot, the more human-like will be the methods a human owner uses for inferring the robot's inner states. The situation is quite different for the robot designer. Even if the robot brain is made from hardware instead of wetware, the designer will need tests like the PET and MRI scans that neuroscientists and psychologists now use for observing the internal structures and processes of the human brain (Damasio, 1999). With a silicon brain, it should be easy to provide such tests, and they would help to confirm (or deny) the reality of the concepts conceived by the cognitive scientist.

9. In summary, I have taken Green's excellent, but pessimistic, target article and have shown how the right AI could be a field for productive cooperation between engineers and cognitive scientists.

REFERENCES

Andreae, J.H. (1977) Thinking with the Teachable Machine. Academic Press.

Andreae, J.H. (1998) Associative Learning for a Robot Intelligence. Imperial College Press.

Damasio, A. (1999) The Feeling of What Happens. William Heinemann.

Green, C.D. (2000) Is AI the Right Method for Cognitive Science? PSYCOLOQUY 11(061) ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/2000.volume.11/psyc.00.11.061.ai-cognitive-science.1.green http://www.cogsci.soton.ac.uk/cgi/psyc/newpsy?11.061

Pask, G. (1979) Review of Thinking with the Teachable Machine. Simulation/Games for Learning 9(1) 39-40.

Putnam, H. (1968) The 'Innateness Hypothesis' and Explanatory Models in Linguistics. Reprinted in J.R. Searle (ed.) The Philosophy of Language. Oxford University Press (1971).

Varela, F.J., Thompson, E. & Rosch, E. (1991) The Embodied Mind. MIT Press.

