Neil W. Rickert (1995) A Computer is not an Automaton. Psycoloquy: 6(11) Robot Consciousness (6)

PSYCOLOQUY (ISSN 1055-0143) is sponsored by the American Psychological Association (APA).

Book Review of Bringsjord on Robot-Consciousness

Neil W. Rickert
Department of Computer Science
Northern Illinois University
DeKalb, IL 60115


ABSTRACT: Bringsjord has tightened up many of the arguments against computer intelligence. In arguing against AI consciousness, his main emphasis has been on showing that a person is not an automaton. I will argue that a computer is not an automaton either, and that therefore his book does not settle the question of computer consciousness.


KEYWORDS: behaviorism, Chinese Room Argument, cognition, consciousness, finite automata, free will, functionalism, introspection, mind, story generation, Turing machines, Turing Test.


1. In his book What Robots Can and Can't Be (1992, 1994), Selmer Bringsjord has made an effort to strengthen the case against AI. As Bringsjord sees it, the failure to persuade is due to the manner in which the arguments have been presented. He says of Searle's presentation, "the premises in the standard form version of the Chinese Room Argument are themselves conclusions of murky sub-arguments given only in normal prose" (page 184). Bringsjord takes many of the best known arguments, and lays them out with care and precision. Many of the standard arguments are here: Searle's Chinese room; Jackson's argument about qualia; the argument based on Goedel's theorem; arguments on arbitrary realization of computation; and even an argument based on free will. In order to present these arguments in a suitable form, Bringsjord has often reformulated them so that they bear little outward resemblance to the original on which they were based. He has retained the logical structure and the philosophical points of the arguments, but changed the setting to one more amenable to his style of crisp precision.


2. Bringsjord distinguishes various levels of success for AI. The ROB problem is the problem of constructing a robot which passes strong versions of the Turing test (or TT). It is clear to Bringsjord that the ROB problem will be solved; in his chapter 4, he supports this position by describing progress toward a story-writing computer. Many AI proponents would count a TT-capable robot as a complete success. But beyond that, Bringsjord discusses PBP, the Person Building Project.

3. What is a person? Bringsjord's answer (pages 82-85) mentions such properties as bearing psychological states. Regrettably, he does not define his terminology on personhood with the precision he presents elsewhere. We are left wondering how Bringsjord would determine whether a robot were capable of fearing unicorns.

4. In order to help us understand the distinction he is making, Bringsjord presents a number of theses, including the ROB and the PBP problems. The basic outline of Bringsjord's argument is:

  (1) A person is not an automaton.
  (2) PBP implies that a person is an automaton.
  (3) Therefore PBP is false.

Bringsjord takes (2) to be self-evident, and thus a major aim of his book is to demonstrate (1). I am inclined to agree with (1), but I reject (2). Playing Monopoly with play money is not the same as playing the stock market with real money, and testbed computing with simulated data is not the same as using a computer to run a network of Automatic Teller Machines. A computer coping with real-time data and real-world problems cannot be understood purely on the basis of the Turing machine formalism. A computer is not an automaton. For comparison, I have shown several of Bringsjord's theses in the table below, giving both his view and my own.


                                                 Bringsjord    Rickert
   --------------------------------------------------------------------
   A person is an automaton:                         NO           NO
   A computer is an automaton:                       YES          NO
   A TT-capable robot will be constructed:           YES          maybe
   TT capability implies Personhood:                 NO           YES
   It is logically possible to build a person:       NO           YES
   --------------------------------------------------------------------

5. Roughly speaking, an automaton is a computer which has been hermetically sealed so as to prevent any contact with reality during its operation. This appears to be consistent with Bringsjord's usage of "automaton". He argues against the theoretical importance of incorporating sensors and effectors in the hardware (pages 78-79). More precisely, I will consider an automaton to be any system whose behavior is fully captured by the Turing machine formalism. It is my contention that computers, as we commonly use them, fail to meet this criterion.
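The distinction can be put in a toy sketch (mine, not Bringsjord's; all names are illustrative). The same transition logic counts as an automaton when its entire input is fixed before execution, and as a computer when input arrives from a live source at run time:

```python
def step(state, symbol):
    """A toy transition function: flip the state on '1', keep it on '0'."""
    return (1 - state) if symbol == "1" else state

def run_automaton(tape):
    """Hermetically sealed: the whole input tape is fixed in advance,
    so the run is fully captured by the Turing machine formalism."""
    state = 0
    for symbol in tape:
        state = step(state, symbol)
    return state

def run_computer(sensor):
    """Coupled to the world: each symbol is read from a live source
    (a sensor, a network) while the machine runs, so the trajectory
    depends on events outside the formalism."""
    state = 0
    for symbol in sensor():
        state = step(state, symbol)
    return state
```

The transition logic is identical in both functions; only the provenance of the input differs, and that difference is exactly what the Turing machine formalism abstracts away.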


6. The Goedel argument has been used by Lucas (1961), Penrose (1989) and others. In his version of the argument, Bringsjord compares a logician, Ralf, with a Turing machine, MRalf. The idea is that, according to Goedel's theorem, there is some proposition that MRalf cannot prove. But Ralf claims to be able to prove this proposition.

7. In order to rebut the Goedel argument, all we need do is find a way of programming a computer so that its capabilities are as good as those of Ralf. To do this, we find a world-class mathematician and logician, Helen, with abilities at least the equal of Ralf's. Then we connect our computer MRalf to a network, so that it simply acts as a copying system, echoing whatever Helen enters on her terminal. With this program, MRalf can do whatever Helen can do, and this will easily match the abilities of Ralf.
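The relay trick is simple enough to sketch in a few lines of Python (purely illustrative; the names mralf and helen_terminal are mine, not from the book):

```python
def mralf(helen_terminal):
    """MRalf's entire program: relay each line Helen enters, so MRalf's
    observable output matches Helen's mathematical ability, proof for proof."""
    for line in helen_terminal:
        yield line

# With Helen on the other end of the network, MRalf "proves" whatever she does:
helens_output = iter(["Theorem G.", "Proof: by reflection on MRalf's program."])
relayed = list(mralf(helens_output))
```

The program contributes nothing mathematical; all of the mathematical power enters through the I/O channel, which is the point of the rebuttal.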

8. Admittedly, my rebuttal sounds like a crude form of cheating. But it clearly demonstrates that the connection of sensors and effectors to the computer is not harmless, but greatly alters the outcome. A computer is not an automaton.

9. More generally, if MRalf is equipped with suitable sensors and effectors, then it can visit the mathematics library, read the latest research, and incorporate some of the newly discovered mathematics into its repertoire of capabilities. Ralf does just this, and his knowledge of the Goedel result is most likely a result of such a practice. What we permit for Ralf, we must also permit for MRalf. But every time robot MRalf adds to its repertoire, the corresponding Goedel proposition changes, and that which MRalf previously could not demonstrate might now be provable.


10. In chapter 6, Bringsjord presents the case of 4 billion Norwegians, all working on railroad cars in the state of Texas, using chalk and erasers to keep track of the information. The idea is that this should be an implementation of a Turing machine which has the same information flow as a robot. Then, the argument goes, if the robot can be a person, so must the collection of Norwegians, boxcars, chalk and erasers. But, Bringsjord argues, it is implausible that the group of Norwegians is a person. The argument hinges on this purported implausibility of group intelligence. For evidence that AI proponents are not the only people who consider group intelligence a distinct possibility, see Seeley (1989).

11. There is indeed something implausible about the railroad scenario. Hilary Putnam (1988), in his argument against both functionalism and mentalism, rightly emphasized the importance of the relation between an agent and the agent's world. If this applies to human agents, it must also apply to robotic agents with human-like performance. It is not sufficient to have the correct internal information flows of the Turing machine. It is also necessary that the Input/Output mapping be correct, for that creates the relationship between the robot and the world. But it seems to me quite implausible that you could achieve the correct I/O mapping with this railroad realization. The arbitrary realization argument fails because it ignores the I/O mapping in its supposition that a computer is an automaton.


12. Bringsjord presents his version of the Jackson (1982) argument on pages 25-39. He imagines that Alvin is a cognitive engineer, and asks whether he could know so much that he could predict his own subjective response to meeting a long lost friend. Bringsjord's basic assumption, as I have reworded it, is:

  (A) If functionalism is true, there is a set s of purely
      computational statements, such that if Alvin knows all of these
      statements then he will understand all of his psychological
      states.
But Bringsjord's argument also requires an unstated assumption:

  (B) Alvin's brain capacity is such that he is capable of knowing all
      of the set s.

These assumptions are used to reach the implausible conclusion:

  (C) Alvin is not surprised when he unexpectedly meets a long lost
      friend.
13. Since (C) is implausible, Bringsjord concludes that functionalism is false. But actually, it is (B) which is implausible, and in that case we cannot conclude (C). A reasonable choice for s might be the information about possible activation states for all possible sets of neurons in Alvin's brain, and this makes (B) highly implausible.

14. Although Bringsjord's "Alvin" argument appears to be flawed, I am inclined to agree with its conclusion that Turing machine functionalism is false. But a computer is not an automaton, so this does not argue against PBP. Premise (A) assumes functionalism by specifying that s should be a set of purely computational statements. But in a robot, the computer would not be used only for internal computations, but for relations with the real world, and purely computational statements are insufficient to specify those relations.


Bringsjord, S. (1992). What Robots Can and Can't Be. Dordrecht, The Netherlands: Kluwer Academic Publishers.

Bringsjord, S. (1994). Precis of: What Robots Can and Can't Be. PSYCOLOQUY 5(59) robot-consciousness.1.bringsjord.

Jackson, F. (1982). Epiphenomenal Qualia. The Philosophical Quarterly, 32: 127-136.

Lucas, J. R. (1961). Minds, Machines and Goedel. Philosophy, 36: 120-124.

Penrose, R. (1989). The Emperor's New Mind. Oxford: Oxford University Press.

Putnam, H. (1988). Representation and Reality. Cambridge, MA: MIT Press.

Seeley, T. D. (1989). The Honey Bee Colony as a Superorganism. American Scientist, 77(6): 546-553.
